OPENAICHAT

Overview

The OPENAICHAT workflow application lets you interact with an OpenAI chat model.

How it works

  • The application allows you to interact with OpenAI chat completion models.

  • Application logs are available. Control their verbosity by setting the OpenAIChatLogLevel parameter in the web.config file: 0 deactivates logs (the default), 1 enables error logs, 2 information logs, and 3 debug logs.
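As a sketch, the log level could be configured with a web.config fragment like the one below; the OpenAIChatLogLevel key name comes from this document, but its placement in the standard appSettings section is an assumption that may differ in your installation:

```xml
<!-- Assumed location: OpenAIChatLogLevel as an appSettings entry -->
<configuration>
  <appSettings>
    <!-- 0 = logs off (default), 1 = error logs, 2 = information logs, 3 = debug logs -->
    <add key="OpenAIChatLogLevel" value="2" />
  </appSettings>
</configuration>
```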

Required parameters

Parameter
Type
Direction
Description

MODEL

TEXT

IN

ID of the model to use. Available models are listed at https://platform.openai.com/docs/models/model-endpoint-compatibility; the endpoint used by default is /v1/chat/completions.

You can use one of the following configurations: system/user messages, a message number, or a JSON message array.

With system/user messages

Parameter
Type
Direction
Description

SYSTEM_MESSAGE

TEXT

IN

The system message content

USER_MESSAGE

TEXT

IN

The user message content

With a message number

Parameter
Type
Direction
Description

MESSAGE_ROLEx

TEXT

IN

The role of the message, where x corresponds to the message number; the value must be assistant, system, or user

MESSAGE_CONTENTx

TEXT

IN

The message content, where x corresponds to the message number
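As an illustration (a sketch, not the application's actual implementation), the numbered MESSAGE_ROLEx / MESSAGE_CONTENTx parameters from the table above map onto OpenAI's messages array roughly like this; the parameter values shown are made up:

```python
# Illustrative sketch: assembling numbered MESSAGE_ROLEx / MESSAGE_CONTENTx
# parameters into OpenAI's "messages" array. The values are examples only.
params = {
    "MESSAGE_ROLE1": "system",
    "MESSAGE_CONTENT1": "You are a helpful assistant.",
    "MESSAGE_ROLE2": "user",
    "MESSAGE_CONTENT2": "Summarize this contract in one sentence.",
}

messages = []
x = 1
while f"MESSAGE_ROLE{x}" in params:
    messages.append({
        "role": params[f"MESSAGE_ROLE{x}"],       # assistant, system, or user
        "content": params[f"MESSAGE_CONTENT{x}"],
    })
    x += 1

print(messages)
```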

With a JSON message array

Parameter
Type
Direction
Description

MESSAGE_JSON

TEXT

IN

The JSON array of message objects; the structure follows OpenAI's messages format.
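A minimal sketch of a MESSAGE_JSON value, following OpenAI's documented messages structure (role plus content); the message texts are illustrative:

```python
import json

# Illustrative MESSAGE_JSON value: a JSON array of message objects
# following OpenAI's "messages" structure.
message_json = json.dumps([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
])

print(message_json)
parsed = json.loads(message_json)
```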

Optional parameters

Parameter
Type
Direction
Description

API_KEY

TEXT

IN

OpenAI API key. By default, this value comes from the OpenAIApiKey parameter in the web.config file.

URL

TEXT

IN

API endpoint; defaults to https://api.openai.com/v1/chat/completions

TEMPERATURE

NUMERIC

IN

Sampling temperature, between 0 and 1; defaults to 1

Higher values (e.g. 0.8) will make the output more random, while lower values (e.g. 0.2) will make it more focused and deterministic.

TOP_P

NUMERIC

IN

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. Therefore, 0.1 means only the tokens comprising the top 10% probability mass are considered. Defaults to 1

FREQUENCY_PENALTY

NUMERIC

IN

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. Defaults to 0

MAX_TOKENS

NUMERIC

IN

Maximum number of tokens that can be generated in the chat completion; defaults to 256

PRESENCE_PENALTY

NUMERIC

IN

Number between -2.0 and 2.0; defaults to 0

Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.

RESPONSE_FORMAT

TEXT

IN

Format of the response: json_object, text, or json_schema

When the value is json_object, the system prompt must contain the word JSON. When the value is json_schema, the expected schema must be provided in the RESPONSE_FORMAT_JSON_SCHEMA parameter.

RESPONSE_FORMAT_JSON_SCHEMA

TEXT

IN

The JSON schema that will be used by the model to respond. See the RESPONSE_FORMAT_JSON_SCHEMA section below for an example.

APP_RESPONSE_IGNORE_ERROR

TEXT

IN

Specifies whether errors should be ignored (Y or N); defaults to N

If an error occurs and the parameter is set to Y, the error is ignored and the defined OUT parameters (APP_RESPONSE_STATUS and APP_RESPONSE_CONTENT) are mapped. Otherwise, an exception is thrown.

TOOLS

TEXT

IN

List of tools available to the model, formatted in JSON and compliant with OpenAI's format: https://platform.openai.com/docs/api-reference/chat/create#chat-create-tools. See the TOOLS section below for an example.

TOOL_CHOICE_REQUIRED

TEXT

IN

Specifies whether the model must necessarily choose a tool. Values: Y or N (default).

PARALLEL_TOOL

TEXT

IN

Specifies whether the model can choose multiple tools. Values: Y (default) or N.

MESSAGE_HISTORY

TEXT

INOUT

The message history in JSON format. The reference structure follows OpenAI's documentation for the messages object: https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages.

SELECTED_TOOL

TEXT

OUT

The list of selected tool names, separated by commas

SELECTED_TOOL_PARAM

TEXT

OUT

A JSON array representing the list of selected tools along with their parameters. See the SELECTED_TOOLS_PARAMS section below for an example output.

SELECTED_TOOLS_COUNT

TEXT

OUT

The number of selected tools

RESULT

TEXT

OUT

The raw response of the chat completion call

RESULT_CONTENT

TEXT

OUT

Content of the assistant message

RESULT_TOTAL_TOKENS

NUMERIC

OUT

Total number of tokens used for the generation (prompt + completion)

RESULT_COMPLETION_TOKENS

NUMERIC

OUT

Number of tokens used for the completion (the generated response)

RESULT_PROMPT_TOKENS

NUMERIC

OUT

Number of tokens used for the prompt

APP_RESPONSE_STATUS

TEXT

OUT

Response status code

APP_RESPONSE_CONTENT

TEXT

OUT

Response payload or error message

JSON schema use case

Using a JSON schema as the response format forces the model to respond in a structured manner that conforms to the schema.

You can extract the returned values directly to populate specific data: specify the name of the property to extract as the parameter name and set the target data as OUT.
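As an illustration of that extraction step (the city property and the reply below are hypothetical, not part of this document): an OUT parameter named after a schema property would receive the corresponding value from the model's structured reply.

```python
import json

# Hypothetical structured reply produced under a schema defining a "city"
# property. Extracting a property by name mirrors what the application does
# when an OUT parameter matches a schema property.
assistant_reply = '{"city": "Paris", "population": 2100000}'

data = json.loads(assistant_reply)
city = data["city"]
print(city)  # Paris
```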

Examples

TOOLS
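A sketch of a TOOLS value following OpenAI's documented function-calling format; the get_weather function and its parameters are illustrative, not part of the application:

```python
import json

# Illustrative TOOLS value, compliant with OpenAI's tools format.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # illustrative function name
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]
tools_value = json.dumps(tools)
parsed_tools = json.loads(tools_value)
```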

SELECTED_TOOLS_PARAMS
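The application's exact output shape for SELECTED_TOOL_PARAM is not shown in this document; below is a plausible sketch only, assuming each entry pairs a tool name with its arguments:

```python
import json

# Hypothetical SELECTED_TOOL_PARAM output: this structure is an assumption,
# not the application's documented format.
selected_tool_param = json.dumps([
    {"name": "get_weather", "parameters": {"city": "Paris"}},
])

selected = json.loads(selected_tool_param)
print(selected[0]["name"])  # get_weather
```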

RESPONSE_FORMAT_JSON_SCHEMA
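A sketch of a schema that could be passed in RESPONSE_FORMAT_JSON_SCHEMA; the city and country properties are made up for this example, and whether the parameter expects the bare schema or OpenAI's wrapped json_schema object is an assumption:

```python
import json

# Illustrative JSON schema for structured output. The properties are
# examples only; adapt them to the data you want the model to return.
schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "country": {"type": "string"},
    },
    "required": ["city", "country"],
    "additionalProperties": False,
}
schema_value = json.dumps(schema)
parsed_schema = json.loads(schema_value)
```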
