ai.engine.register

This method adds a custom service: it registers an engine and updates it on repeated calls. This is not an embedding location, because the partner's endpoint must follow strict formats.

Parameters

Parameter Description Available from version
name Brief, concise title that will be shown to the user in the UI.
code Unique engine code.
category Engine category: text (text generation), image (image generation), or audio (speech recognition).
completions_url Endpoint for processing the user query.
settings AI type (see the description below). Optional. 23.800

On success, the method returns the ID of the added engine.

AI type

Array with parameters:

Parameter Description Available from version
code_alias AI type. Permissible values:
  • ChatGPT (OpenAI)
  • YandexGPT (Yandex)
(ChatGPT by default).

Each type has its own preprompts, and they must not be mixed. When registering, the provider must declare which type of neural network suits it best: the preprompts specific to that type are then sent to the provider.

Currently, only the preprompt functionality is available. The applicability of the AI type may be expanded in the future.

model_context_type Calculation type for the context (see the context description below). The following values are available:
  • token - tokens. In AI neural networks, a "token" usually refers to a minimal unit of the parsed input text: a sequence of characters the text is split into before being passed to the model. Depending on the model's type and purpose, a token can be a single letter, character, word, or even a complete phrase.

  • symbol - characters, i.e. plain text length.
Default value - token.
model_context_limit Context volume (16K by default). Before a user query is sent to you, the context limit is checked according to the calculation type (a rough illustration follows below).
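
To illustrate the difference between the two calculation types, here is a minimal sketch. The 4-characters-per-token ratio and the role/content structure of context items are assumptions for illustration only, not part of the API; the actual counting is performed on the Bitrix24 side.

<?php
// Illustration only: estimate the context volume for both calculation types.
// The 4-characters-per-token ratio and the message structure are assumptions.
function estimateContextVolume(array $context, string $type = 'token'): int
{
	$text = implode("\n", array_column($context, 'content'));

	if ($type === 'symbol') {
		// "symbol": plain text length in characters
		return mb_strlen($text);
	}

	// "token": a coarse approximation, roughly 4 characters per token
	return (int)ceil(mb_strlen($text) / 4);
}

$context = [
	['role' => 'user', 'content' => 'First comment in the thread'],
	['role' => 'user', 'content' => 'Second comment in the thread'],
];

// Compare against the registered model_context_limit (16K tokens by default)
var_dump(estimateContextVolume($context, 'token') <= 16 * 1024);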

Example

BX24.callMethod(
	'ai.engine.register',
	{
		name: 'Johnson GPT',
		code: 'johnson_gpt',
		category: 'text',
		completions_url: 'https://antonds.com/ai/aul/completions/',
		settings: {
			code_alias: 'ChatGPT',
			model_context_type: 'token',
			model_context_limit: 16*1024,
		},
	},
	function(result)
	{
		if(result.error())
		{
			console.error(result.error());
		}
		else
		{
			console.info(result.data());
		}
	}
);

Endpoint

Attention! For the sake of example, the script shows everything in a single flow. In production mode, you must move lines 30-54 into a separate flow.

A template for creating a custom endpoint can be used as a basis for your own service.

Please note that the script must do the following:

  1. receive the query, quickly process it, and add it to its internal queue;
  2. be able to return different response statuses (shown in the example):
    • 200 — the link was simply opened, there is no query to process;
    • 202 — the query was received and added to the queue;
    • 503 — the service is unavailable.

See the template for what to do once everything is ready.

Waiting for a response takes a certain amount of time (also specified in the script); after that, the callback becomes invalid.

Attention! In addition to the response code, on success the handler must return json_encode(['result' => 'OK']).
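
A minimal sketch of such a handler is shown below. The addToQueue() helper is a hypothetical placeholder for your own queue storage; the actual template covers more cases.

<?php
// Minimal endpoint sketch. addToQueue() is a hypothetical placeholder for your
// own queue storage (a database table, Redis list, local files, and so on).
function addToQueue(array $request): bool
{
	$file = __DIR__ . '/queue/' . uniqid('task_', true) . '.json';

	// The ./queue directory is assumed to exist and be writable
	return file_put_contents($file, json_encode($request)) !== false;
}

$request = json_decode(file_get_contents('php://input'), true);

header('Content-Type: application/json');

if (empty($request)) {
	// The link was simply opened, there is no query to process
	http_response_code(200);
	echo json_encode(['result' => 'OK']);
} elseif (addToQueue($request)) {
	// The query was received and queued; the heavy processing happens later, in a separate flow
	http_response_code(202);
	echo json_encode(['result' => 'OK']);
} else {
	// The queue storage is not available right now
	http_response_code(503);
}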

If the provider category is audio, the prompt key contains an array (a reading sketch follows after this list):

  • file - a link to the file (note that it may not have an extension!);
  • fields - an auxiliary internal array consisting of the following:
    • type - the file's content-type, for the case when the file has no extension (for example, "audio/ogg");
    • prompt - an auxiliary prompt for the audio file (it may contain information important for successful recognition: for example, your company name).
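
Here is a hedged sketch of how an audio provider might read these keys; the exact shape of the incoming request body is an assumption based on the field names above.

<?php
// Sketch for the audio category. The request structure is assumed from the
// description above; adjust it to the actual payload you receive.
$request = json_decode(file_get_contents('php://input'), true);

$fileUrl     = $request['prompt']['file'] ?? null;             // link to the audio file (may lack an extension)
$contentType = $request['prompt']['fields']['type'] ?? null;   // e.g. "audio/ogg"
$hintPrompt  = $request['prompt']['fields']['prompt'] ?? '';   // auxiliary hint, e.g. your company name

if ($fileUrl !== null) {
	// Rely on the content-type, not on the URL, to pick a decoder
	$audio = file_get_contents($fileUrl);

	// ... pass $audio, $contentType and $hintPrompt to your speech recognition backend
}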

The provider also receives additional fields:

Field Description Starting from version
auth Authentication data. 23.600.0
payload_raw Raw prompt value (when Copilot is used, it contains the character code of the prompt in use). 23.600.0
payload_provider Character code of the preprompt provider (when Copilot is used, it contains prompt). 23.600.0
payload_prompt_text If payload_provider = prompt, contains the raw preprompt specification. This is an unprocessed preprompt for individual analysis. You can learn more in the prompts documentation. 23.800.0
payload_markers Array with additional markers from the user (original_message, user_message, language), used for generating the prompt. You can learn more in the prompts documentation. 23.800.0
payload_role Role (specification) used for generating the prompt. GPT-like systems must send this role as system inside the array with messages. 23.800.0
context Array with the previous messages in chronological order, for example, a list of comments to a post. The author's message (the post itself) is the first item in such a context list.

Important:

  • The volume of context sent to your provider depends on the volume and calculation type you specified (you can find more details in the provider-related documentation). By default, the calculation method is tokens and the volume is 16K.
  • You must send the context directly to the AI network only if the collect_context parameter is passed as true (1). In all other cases, it is passed as additional data, to be used at your discretion.

23.800.0
max_tokens Maximum number of tokens. This parameter controls the length of the result. Optional.
temperature Temperature. This parameter controls randomness (low values make the output more focused and deterministic). Required.

Example

Suppose that, in addition to other data, you receive the following three pieces of data:

  • prompt - the current query, as plain text;
  • payload_role - some text containing a specification;
  • context - an array (non-empty in this example).

In this case, the resulting array is as follows:

[
	[
		'role' => 'system',
		'content' => $payload_role,
	],
	[
		// the complete context array, or a portion of it, if you want to keep the query smaller;
		// remember that it must stay in chronological order (the most recent messages at the bottom)
	],
	[
		'role' => 'user',
		'content' => $prompt, // the current query; it is NOT included in the context
	]
];
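
For completeness, here is a sketch of how this array might then be forwarded, together with max_tokens and temperature, to a ChatGPT-style completions API. The endpoint URL, the model name, and the request shape are assumptions for illustration, not part of the Bitrix24 contract.

<?php
// Continuation sketch: forward the messages array built above to a
// ChatGPT-style chat completions API. URL and model name are assumed.
$request = json_decode(file_get_contents('php://input'), true);

$messages = []; // build this array as shown in the example above

$body = [
	'model'       => 'gpt-3.5-turbo',                 // assumed model name, replace with your own
	'messages'    => $messages,
	'temperature' => $request['temperature'] ?? 0.7,  // controls randomness
];
if (isset($request['max_tokens'])) {
	$body['max_tokens'] = (int)$request['max_tokens']; // optional length limit
}

$ch = curl_init('https://api.example.com/v1/chat/completions'); // assumed URL
curl_setopt_array($ch, [
	CURLOPT_RETURNTRANSFER => true,
	CURLOPT_POST           => true,
	CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
	CURLOPT_POSTFIELDS     => json_encode($body),
]);
$response = curl_exec($ch);
curl_close($ch);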
