OPENAICHAT

Overview

The OPENAICHAT workflow application lets you interact with AI chat models from multiple providers, including OpenAI, Mistral AI, Google Gemini, and Anthropic Claude.

How it works

  • The application lets you interact with chat completion models.

  • Application logs are available. To configure them, set the OpenAIChatLogLevel parameter in the web.config file to 0 to deactivate logs (default), 1 for error logs, 2 for information logs, or 3 for debug logs.
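For example, to enable information-level logs, the web.config entry might look like the following (a sketch assuming a standard .NET appSettings layout):

```xml
<configuration>
  <appSettings>
    <!-- 0 = off (default), 1 = errors, 2 = information, 3 = debug -->
    <add key="OpenAIChatLogLevel" value="2" />
  </appSettings>
</configuration>
```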

Supported providers

| Provider | API format | Default URL |
| --- | --- | --- |
| OpenAI | Native | https://api.openai.com/v1/chat/completions |
| Mistral AI | OpenAI-compatible | https://api.mistral.ai/v1/chat/completions |
| Google Gemini | OpenAI-compatible | https://generativelanguage.googleapis.com/v1beta/openai/chat/completions |
| Anthropic Claude | Native | https://api.anthropic.com/v1/messages |

The provider is automatically detected from the API URL, but it can be explicitly set using the PROVIDER parameter (see the Optional parameters table below).

OpenAI-compatible providers

The OPENAICHAT application automatically detects and supports OpenAI-compatible providers based on the API URL. For providers that use the OpenAI-compatible API format (Mistral AI, Google Gemini), you can set the URL parameter (or the provider's API URL setting in the web.config file) to the provider's endpoint.


Provider API configuration settings

| Setting | Description |
| --- | --- |
| OpenAIApiKey | OpenAI API key |
| OpenAIChatApiUrl | OpenAI API endpoint |
| MistralApiKey | Mistral AI API key (required for Mistral) |
| MistralChatApiUrl | Mistral AI API endpoint. Default: https://api.mistral.ai/v1/chat/completions |
| GeminiApiKey | Google Gemini API key (required for Gemini) |
| GeminiChatApiUrl | Google Gemini API endpoint. Default: https://generativelanguage.googleapis.com/v1beta/openai/chat/completions. Note: This is a beta endpoint that may change when it reaches general availability. |
| AnthropicApiKey | Anthropic API key (required for Anthropic) |
| AnthropicChatApiUrl | Anthropic API endpoint. Default: https://api.anthropic.com/v1/messages |


Provider auto-detection

| URL pattern | Detected provider |
| --- | --- |
| Contains api.anthropic.com | Anthropic |
| Contains generativelanguage.googleapis.com | Gemini |
| Contains api.mistral.ai | Mistral |
| Default | OpenAI |

Compatible providers

| Provider | Type |
| --- | --- |
| Azure OpenAI | Cloud |
| Ollama | Self-hosted |
| vLLM | Self-hosted |
| LocalAI | Self-hosted |
| LM Studio | Self-hosted |
| Together AI | Cloud |
| Groq | Cloud |
| Mistral AI | Cloud |
| OpenRouter | Cloud |


Required parameters

| Parameter | Type | Direction | Description |
| --- | --- | --- | --- |
| MODEL | TEXT | IN | ID of the model to use. You can find available models at https://platform.openai.com/docs/models. The endpoint used by default is /v1/chat/completions. |

You can use any of the following configurations: system/user messages, a message number, or a JSON message array.

With system/user messages

| Parameter | Type | Direction | Description |
| --- | --- | --- | --- |
| SYSTEM_MESSAGE | TEXT | IN | The system message content |
| USER_MESSAGE | TEXT | IN | The user message content |

With a message number

| Parameter | Type | Direction | Description |
| --- | --- | --- | --- |
| MESSAGE_ROLEx | TEXT | IN | The type of the message, where x corresponds to the message number; the value should be assistant, system, or user |
| MESSAGE_CONTENTx | TEXT | IN | The message content, where x corresponds to the message number |

With a JSON message array

| Parameter | Type | Direction | Description |
| --- | --- | --- | --- |
| MESSAGE_JSON | TEXT | IN | The JSON message array; the structure should follow OpenAI's messages array format |
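A minimal sketch of a MESSAGE_JSON value, following the standard OpenAI messages array format (content values are illustrative):

```json
[
  { "role": "system", "content": "You are a helpful assistant." },
  { "role": "user", "content": "Summarize this request in one sentence." }
]
```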

Optional parameters

| Parameter | Type | Direction | Description |
| --- | --- | --- | --- |
| PROVIDER | TEXT | IN | Forces provider selection; if not set, the provider is auto-detected from the URL. Possible values: openai, mistral, gemini, or anthropic |
| API_KEY | TEXT | IN | API key. By default, this value comes from the resolved provider's API key setting in the web.config file (e.g. OpenAIApiKey for OpenAI). |
| URL | TEXT | IN | API endpoint. If not set, this value comes from the resolved provider's API URL setting in the web.config file (e.g. OpenAIChatApiUrl for OpenAI), if it's been defined. |
| TEMPERATURE | NUMERIC | IN | Sampling temperature, between 0 and 2 (values above 1 are capped at 1 for Gemini and Anthropic). Default: 1. Higher values (e.g. 0.8) will make the output more random, while lower values (e.g. 0.2) will make it more focused and deterministic. |
| TOP_P | NUMERIC | IN | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. Default: 1. 📌 Example: A value of 0.1 means only the tokens comprising the top 10% probability mass are considered. |
| FREQUENCY_PENALTY | NUMERIC | IN | Number between -2.0 and 2.0. Default: 0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. |
| MAX_TOKENS | NUMERIC | IN | Maximum number of tokens that can be generated in the chat completion. Default: 4096 for Anthropic only; undefined for other providers. ✏️ Note: For reasoning models (o1, o3, GPT-5 series), use MAX_COMPLETION_TOKENS instead. If both MAX_TOKENS and MAX_COMPLETION_TOKENS are specified, MAX_COMPLETION_TOKENS takes precedence. |
| MAX_COMPLETION_TOKENS | NUMERIC | IN | Maximum number of tokens to generate in the completion. This parameter is required for reasoning models (o1, o3, GPT-5 series) and takes precedence over MAX_TOKENS when both are specified. For reasoning models, this value includes both reasoning tokens and visible completion tokens. |
| REASONING_EFFORT | TEXT | IN | Controls the effort level for reasoning models; only applicable to reasoning models (o1, o3, GPT-5 series). Values: low, medium, or high. ⚠️ Important: Do not use this parameter with non-reasoning models (e.g. GPT-4o, GPT-4-turbo), as this will cause an API error. |
| PRESENCE_PENALTY | NUMERIC | IN | Number between -2.0 and 2.0. Default: 0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. |
| RESPONSE_FORMAT | TEXT | IN | Format of the response: text, json_object, or json_schema. When the value is json_object, the system prompt should contain the JSON keyword. When the value is json_schema, the expected schema must be provided in the RESPONSE_FORMAT_JSON_SCHEMA parameter. |
| RESPONSE_FORMAT_JSON_SCHEMA | TEXT | IN | The JSON schema that the model will use to structure its response. See the RESPONSE_FORMAT_JSON_SCHEMA section below for an example. |
| APP_RESPONSE_IGNORE_ERROR | TEXT | IN | Specifies (Y or N) whether errors should be ignored. Default: N. ✏️ Note: On error, if the value is Y, the error is ignored and the defined OUT parameters (APP_RESPONSE_STATUS and APP_RESPONSE_CONTENT) are mapped; otherwise, an exception is thrown. |
| TOOLS | TEXT | IN | List of tools available to the model, formatted in JSON and compliant with OpenAI's format (https://platform.openai.com/docs/api-reference/chat/create#chat-create-tools); see the TOOLS section below for an example |
| TOOL_CHOICE_REQUIRED | TEXT | IN | Specifies whether the model must choose a tool. Values: Y or N (default) |
| PARALLEL_TOOL | TEXT | IN | Specifies whether the model can choose multiple tools. Values: Y (default) or N |
| MESSAGE_HISTORY | TEXT | INOUT | The message history in JSON format. The reference structure follows OpenAI's documentation for the messages object: https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages |
| SELECTED_TOOLS | TEXT | OUT | The list of selected tool names, separated by commas |
| SELECTED_TOOLS_PARAM | TEXT | OUT | A JSON array representing the list of selected tools along with their parameters; see the SELECTED_TOOLS_PARAMS section below for an example output |
| SELECTED_TOOLS_COUNT | NUMERIC | OUT | The number of selected tools |
| RESULT | TEXT | OUT | Raw result of the chat completion call |
| RESULT_CONTENT | TEXT | OUT | Content of the assistant message |
| RESULT_TOTAL_TOKENS | NUMERIC | OUT | Total number of tokens used for the request (prompt and completion) |
| RESULT_COMPLETION_TOKENS | NUMERIC | OUT | Number of tokens used for the completion |
| RESULT_PROMPT_TOKENS | NUMERIC | OUT | Number of tokens used for the prompt |
| RESULT_REASONING_TOKENS | NUMERIC | OUT | Number of tokens used for internal reasoning by reasoning models (o1, o3, GPT-5 series). These tokens are not visible in the response but count toward usage and billing. Returns 0 for non-reasoning models. |
| RESULT_CACHED_TOKENS | NUMERIC | OUT | Number of prompt tokens that were served from cache. Cached tokens are billed at a reduced rate, which is useful for understanding cost optimization from prompt caching. |
| APP_RESPONSE_STATUS | TEXT | OUT | Response status code |
| APP_RESPONSE_CONTENT | TEXT | OUT | Response payload or error message |

Provider-specific considerations

Parameter availability by provider

| Parameter | OpenAI | Mistral | Gemini | Anthropic |
| --- | --- | --- | --- | --- |
| MODEL | Yes | Yes | Yes | Yes |
| SYSTEM_MESSAGE | Yes | Yes | Yes | Yes |
| USER_MESSAGE | Yes | Yes | Yes | Yes |
| TEMPERATURE | 0-2 | 0-2 | 0-1 | 0-1 |
| TOP_P | Yes | Yes | Yes | No |
| MAX_TOKENS | Yes | Yes | Yes | Required |
| TOOLS | Yes | Yes | Yes | Yes |
| RESPONSE_FORMAT | Yes | Yes | Yes | Yes |
| DEVELOPER_MESSAGE | o1+ only | No | No | No |
| MAX_COMPLETION_TOKENS | o1+ only | No | No | No |
| REASONING_EFFORT | o1/o3 | No | Mapped | Mapped |
| FREQUENCY_PENALTY | Yes | Yes | No | No |
| PRESENCE_PENALTY | Yes | Yes | No | No |
| PARALLEL_TOOL | Yes | Yes | Yes | Yes |

  • Parameters not supported by a provider are silently ignored.

  • TEMPERATURE values above 1.0 are capped at 1.0 for Gemini and Anthropic (Mistral supports 0-2).

  • REASONING_EFFORT is mapped to provider-specific thinking/reasoning configurations.

  • MAX_TOKENS is required for Anthropic (not optional).

  • When REASONING_EFFORT is used with Anthropic, TEMPERATURE and TOP_P are ignored (API requirement).


OpenAI reasoning models

OpenAI offers reasoning models (o1, o3, GPT-5 series) that use internal reasoning before generating a response. These models have specific parameter requirements:

Parameters

| Parameter | Description |
| --- | --- |
| MAX_COMPLETION_TOKENS (required for reasoning models) | Specifies the maximum number of tokens for the completion, including both reasoning tokens and visible output tokens |
| REASONING_EFFORT (optional) | Controls the depth of reasoning: low, medium, or high. Higher values use more reasoning tokens but may produce better results. |


Output parameters

| Parameter | Description |
| --- | --- |
| RESULT_REASONING_TOKENS | Number of tokens used for internal reasoning. Only populated for reasoning models; returns 0 for other models. |
| RESULT_CACHED_TOKENS | Number of prompt tokens served from cache (applies to all models with prompt caching) |

Example of reasoning model use
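A minimal sketch of the IN parameter values for a reasoning model call (the model ID and message values are illustrative):

```json
{
  "MODEL": "o3-mini",
  "USER_MESSAGE": "Explain why the sky is blue in two sentences.",
  "MAX_COMPLETION_TOKENS": 2000,
  "REASONING_EFFORT": "medium"
}
```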

Output
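Illustrative OUT parameter values for such a call (all values are examples, not actual results):

```json
{
  "RESULT_CONTENT": "Sunlight is scattered by air molecules...",
  "RESULT_REASONING_TOKENS": 512,
  "RESULT_COMPLETION_TOKENS": 590,
  "RESULT_PROMPT_TOKENS": 20,
  "RESULT_TOTAL_TOKENS": 610
}
```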

JSON schema use case

Using a JSON schema as a response format forces the model to respond in a structured manner that conforms to the schema.

You can directly extract the returned values to populate specific data: simply specify the name of the property to extract as the parameter name and map it to the target data as an OUT parameter.

Examples

TOOLS
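A TOOLS value is a JSON array in OpenAI's tool format; for example, with a hypothetical get_weather function:

```json
[
  {
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get the current weather for a city",
      "parameters": {
        "type": "object",
        "properties": {
          "city": { "type": "string", "description": "The city name" }
        },
        "required": ["city"]
      }
    }
  }
]
```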

SELECTED_TOOLS_PARAMS
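An illustrative SELECTED_TOOLS_PARAM output, assuming the model selected a hypothetical get_weather tool (the exact field names may differ from the application's actual output):

```json
[
  {
    "name": "get_weather",
    "arguments": { "city": "Paris" }
  }
]
```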

RESPONSE_FORMAT_JSON_SCHEMA
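An illustrative schema value, following OpenAI's json_schema response format (the schema itself and the wrapper fields are hypothetical; the exact wrapper expected by the application is an assumption here):

```json
{
  "name": "ticket",
  "schema": {
    "type": "object",
    "properties": {
      "category": { "type": "string" },
      "priority": { "type": "string", "enum": ["low", "medium", "high"] }
    },
    "required": ["category", "priority"],
    "additionalProperties": false
  }
}
```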

Resolution priority (strict, no cross-provider fallback)

The application resolves provider, URL, and API key in the following order:

Provider resolution

  1. PROVIDER parameter (if set and valid)

  2. Auto-detect from URL parameter (if set)

  3. Default: OpenAI


Provider detection only uses the URL parameter, not configuration values.
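The provider resolution order above can be sketched as a small Python helper (illustrative only, not the application's actual code):

```python
def resolve_provider(provider=None, url=None):
    """Sketch of the documented resolution order:
    1. PROVIDER parameter (if set and valid)
    2. Auto-detection from the URL parameter (if set)
    3. Default: OpenAI
    """
    if provider in ("openai", "mistral", "gemini", "anthropic"):
        return provider
    if url:
        if "api.anthropic.com" in url:
            return "anthropic"
        if "generativelanguage.googleapis.com" in url:
            return "gemini"
        if "api.mistral.ai" in url:
            return "mistral"
    return "openai"
```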

URL resolution (based on resolved provider)

  1. URL parameter (if set)

  2. Provider-specific configuration URL:

    • OpenAI → OpenAIChatApiUrl

    • Mistral → MistralChatApiUrl

    • Gemini → GeminiChatApiUrl

    • Anthropic → AnthropicChatApiUrl

  3. Hardcoded default URL for provider


No cross-provider fallback. Each provider uses only its own configuration URL.

API key resolution (based on resolved provider)

  1. API_KEY parameter (if set)

  2. Provider-specific configuration key:

    • OpenAI → OpenAIApiKey

    • Mistral → MistralApiKey

    • Gemini → GeminiApiKey

    • Anthropic → AnthropicApiKey


No cross-provider fallback. Each provider uses only its own configuration key.

Common scenarios

| Scenario | Provider | URL | API key |
| --- | --- | --- | --- |
| All defaults (nothing set) | OpenAI | OpenAIChatApiUrl or default | OpenAIApiKey |
| PROVIDER=anthropic | Anthropic | AnthropicChatApiUrl or default | AnthropicApiKey |
| PROVIDER=mistral | Mistral | MistralChatApiUrl or default | MistralApiKey |
| PROVIDER=gemini | Gemini | GeminiChatApiUrl or default | GeminiApiKey |
| URL=https://api.mistral.ai/... | Mistral (auto) | URL parameter | MistralApiKey |
| URL=https://api.anthropic.com/... | Anthropic (auto) | URL parameter | AnthropicApiKey |

How to use each provider

| Provider | Required configuration |
| --- | --- |
| OpenAI | Nothing (default), or set OpenAIApiKey in the configuration |
| Mistral | Set PROVIDER=mistral or the URL parameter, and MistralApiKey in the configuration or the API_KEY parameter |
| Gemini | Set PROVIDER=gemini or the URL parameter, and GeminiApiKey in the configuration or the API_KEY parameter |
| Anthropic | Set PROVIDER=anthropic or the URL parameter, and AnthropicApiKey in the configuration or the API_KEY parameter |

Common examples

These examples work across all four providers, using parameters fully compatible with all of them.

Basic chat
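A basic chat call might set only the required parameters (all values are illustrative):

```json
{
  "MODEL": "gpt-4o-mini",
  "SYSTEM_MESSAGE": "You are a helpful assistant.",
  "USER_MESSAGE": "Write a one-sentence welcome message."
}
```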

Multi-turn conversation with MESSAGE_HISTORY
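MESSAGE_HISTORY is passed in and returned as an OpenAI-style messages array. An illustrative value after one exchange, which can be mapped back into the next call to continue the conversation:

```json
[
  { "role": "system", "content": "You are a helpful assistant." },
  { "role": "user", "content": "What is the capital of France?" },
  { "role": "assistant", "content": "The capital of France is Paris." }
]
```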

Tool/function calling
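A tool-calling request combines MODEL, a user message, and a TOOLS definition (values illustrative; the TOOLS value is a JSON tools array as shown in the TOOLS example section):

```json
{
  "MODEL": "gpt-4o",
  "USER_MESSAGE": "What's the weather in Paris?",
  "TOOL_CHOICE_REQUIRED": "Y",
  "TOOLS": "... JSON tools array, as in the TOOLS example ..."
}
```

The selected tools are then returned in the SELECTED_TOOLS, SELECTED_TOOLS_PARAM, and SELECTED_TOOLS_COUNT OUT parameters.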

Provider-specific examples

OpenAI-specific examples

Reasoning models (o1/o3/GPT-5 series)

Output includes: RESULT_REASONING_TOKENS

Developer message (o1+ models)
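An illustrative parameter set using DEVELOPER_MESSAGE with a reasoning model (values are examples):

```json
{
  "MODEL": "o1",
  "DEVELOPER_MESSAGE": "Always answer in formal English.",
  "USER_MESSAGE": "Summarize the attached request.",
  "MAX_COMPLETION_TOKENS": 1500
}
```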

Mistral AI-specific examples

Using Mistral endpoint
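An illustrative parameter set that targets Mistral by URL, letting auto-detection pick the provider (model ID and message are examples):

```json
{
  "MODEL": "mistral-large-latest",
  "URL": "https://api.mistral.ai/v1/chat/completions",
  "USER_MESSAGE": "Translate 'good morning' into French."
}
```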

With explicit provider


FREQUENCY_PENALTY and PRESENCE_PENALTY are supported.

Google Gemini-specific examples


Using Gemini OpenAI-compatible endpoint

With explicit provider
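An illustrative parameter set that selects Gemini explicitly via PROVIDER (model ID and message are examples):

```json
{
  "MODEL": "gemini-2.0-flash",
  "PROVIDER": "gemini",
  "USER_MESSAGE": "List three uses of workflow automation."
}
```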

FREQUENCY_PENALTY and PRESENCE_PENALTY are not supported by Gemini and are silently ignored.

Anthropic Claude-specific examples

Basic Claude usage
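An illustrative basic Claude call; note that MAX_TOKENS is required for Anthropic (model ID and values are examples):

```json
{
  "MODEL": "claude-3-5-sonnet-latest",
  "PROVIDER": "anthropic",
  "MAX_TOKENS": 1024,
  "USER_MESSAGE": "Draft a short status update."
}
```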

Extended thinking (Claude)

REASONING_EFFORT maps to Claude's thinking.budget_tokens parameter:

  • low → 1,024 tokens

  • medium → 8,192 tokens

  • high → 24,576 tokens
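The mapping above, expressed as a small Python sketch (illustrative only, not the application's actual code):

```python
# Illustrative mapping of REASONING_EFFORT values to Claude's
# thinking.budget_tokens values, as documented above.
EFFORT_TO_BUDGET_TOKENS = {
    "low": 1024,
    "medium": 8192,
    "high": 24576,
}

def thinking_budget(effort):
    """Return the thinking.budget_tokens value for a REASONING_EFFORT level."""
    return EFFORT_TO_BUDGET_TOKENS[effort]
```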

Claude with Vision
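Vision input is typically passed through the JSON message array. An illustrative MESSAGE_JSON using the OpenAI-style image_url content part (whether the application translates this into Claude's native image format is an assumption here; the URL is hypothetical):

```json
[
  {
    "role": "user",
    "content": [
      { "type": "text", "text": "Describe this image." },
      { "type": "image_url", "image_url": { "url": "https://example.com/invoice.png" } }
    ]
  }
]
```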
