WorkflowGen Documentation
10.0

OPENAICHAT

Overview

The OPENAICHAT workflow application lets you interact with an OpenAI chat model.

How it works

  • The application lets you interact with OpenAI chat completion models.

  • Application logging is available: set the OpenAIChatLogLevel parameter in the web.config file to 0 to deactivate logs, 1 for error logs, 2 for information logs, or 3 for debug logs; the default value is 0.
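
For example, debug logging could be enabled with an entry like the following. This is a sketch of a standard web.config appSettings entry; only the key name and values come from the documentation above, and the surrounding configuration is omitted:

```xml
<appSettings>
  <!-- OPENAICHAT log level: 0 = off (default), 1 = errors, 2 = information, 3 = debug -->
  <add key="OpenAIChatLogLevel" value="3" />
</appSettings>
```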

Required parameters

| Parameter | Type | Direction | Description |
| --- | --- | --- | --- |
| MODEL | TEXT | IN | ID of the model to use; you can find available models at https://platform.openai.com/docs/models/model-endpoint-compatibility. The endpoint used by default is /v1/chat/completions. |

You can use either of the following configurations: with system/user messages, with a message number, or with a JSON message array.

With system/user messages

| Parameter | Type | Direction | Description |
| --- | --- | --- | --- |
| SYSTEM_MESSAGE | TEXT | IN | The system message content |
| USER_MESSAGE | TEXT | IN | The user message content |

With a message number

| Parameter | Type | Direction | Description |
| --- | --- | --- | --- |
| MESSAGE_ROLEx | TEXT | IN | The type of the message, where x corresponds to the message number; the value should be assistant, system, or user |
| MESSAGE_CONTENTx | TEXT | IN | The message content, where x corresponds to the message number |

With a JSON message array

| Parameter | Type | Direction | Description |
| --- | --- | --- | --- |
| MESSAGE_JSON | TEXT | IN | The JSON array of message objects; the structure should match the MESSAGE_JSON example below |

Optional parameters

| Parameter | Type | Direction | Description |
| --- | --- | --- | --- |
| API_KEY | TEXT | IN | OpenAI API key; by default, this value comes from the OpenAIApiKey parameter in the web.config file |
| URL | TEXT | IN | API endpoint; defaults to https://api.openai.com/v1/chat/completions |
| TEMPERATURE | NUMERIC | IN | Sampling temperature, between 0 and 1; defaults to 1. Higher values (e.g. 0.8) will make the output more random, while lower values (e.g. 0.2) will make it more focused and deterministic |
| TOP_P | NUMERIC | IN | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass; 0.1 means only the tokens comprising the top 10% probability mass are considered. Defaults to 1 |
| FREQUENCY_PENALTY | NUMERIC | IN | Number between -2.0 and 2.0; defaults to 0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim |
| MAX_TOKENS | NUMERIC | IN | Maximum number of tokens that can be generated in the chat completion; defaults to 256 |
| PRESENCE_PENALTY | NUMERIC | IN | Number between -2.0 and 2.0; defaults to 0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics |
| RESPONSE_FORMAT | TEXT | IN | Format of the response: json_object, text, or json_schema. When the value is json_object, the system prompt should contain the JSON keyword; when the value is json_schema, the expected schema must be provided in the RESPONSE_FORMAT_JSON_SCHEMA parameter |
| RESPONSE_FORMAT_JSON_SCHEMA | TEXT | IN | The JSON schema that will be used by the model to respond; see the RESPONSE_FORMAT_JSON_SCHEMA example below |
| APP_RESPONSE_IGNORE_ERROR | TEXT | IN | Specifies (Y or N) whether errors should be ignored; defaults to N. In case of error, if the value is Y, the error is ignored and the defined OUT parameters (APP_RESPONSE_STATUS or APP_RESPONSE_CONTENT) are mapped; otherwise, an exception is thrown |
| TOOLS | TEXT | IN | List of tools available to the model, formatted in JSON and compliant with OpenAI's format (https://platform.openai.com/docs/api-reference/chat/create#chat-create-tools); see the TOOLS example below |
| TOOL_CHOICE_REQUIRED | TEXT | IN | Specifies whether the model must necessarily choose a tool; values: Y or N (default) |
| PARALLEL_TOOL | TEXT | IN | Specifies whether the model can choose multiple tools; values: Y (default) or N |
| MESSAGE_HISTORY | TEXT | INOUT | The message history in JSON format; the reference structure follows OpenAI's documentation for the messages object (https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages) |
| SELECTED_TOOL | TEXT | OUT | The list of selected tool names, separated by commas |
| SELECTED_TOOL_PARAM | TEXT | OUT | A JSON array representing the list of selected tools along with their parameters; see the SELECTED_TOOLS_PARAMS example below |
| SELECTED_TOOLS_COUNT | TEXT | OUT | The number of selected tools |
| RESULT | TEXT | OUT | Raw result of the chat completion call |
| RESULT_CONTENT | TEXT | OUT | Content of the assistant message |
| RESULT_TOTAL_TOKENS | NUMERIC | OUT | Total number of tokens used (prompt and completion) |
| RESULT_COMPLETION_TOKENS | NUMERIC | OUT | Number of tokens used for the generated completion |
| RESULT_PROMPT_TOKENS | NUMERIC | OUT | Number of tokens used for the prompt |
| APP_RESPONSE_STATUS | TEXT | OUT | Response status code |
| APP_RESPONSE_CONTENT | TEXT | OUT | Response payload or error message |

JSON schema use case

Using a JSON schema as a response format forces the model to respond in a structured manner that aligns with the schema.

You can directly extract the returned values to populate specific data; simply specify the name of the property to extract as the parameter name and set the target data in OUT.
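
For example, with a schema that declares advice, next_action, and confidence_level properties (as in the RESPONSE_FORMAT_JSON_SCHEMA example below), the model's reply is constrained to that shape. The values here are hypothetical; each property could then be mapped to process data through an OUT parameter named advice, next_action, or confidence_level:

```json
{
  "advice": "Route this request to a product specialist.",
  "next_action": "expert",
  "confidence_level": 0.87
}
```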

Examples

TOOLS

```json
[
    {
        "name": "GET_STOCK_INFORMATION",
        "description": "Get stock information about a product. If the product is not found, return an error. If the stock is less than 10, return a warning and a purchase order should be done.",
        "parameters": {
            "type": "object",
            "properties": {
                "product_name": {
                    "type": "string",
                    "description": "The product name"
                },
                "serial_number": {
                    "type": "string",
                    "description": "The product serial number"
                }
            },
            "additionalProperties": false,
            "required": ["serial_number"]
        }
    },
    {
        "name": "PURCHASE_ORDER",
        "description": "Make a purchase order for a product.",
        "parameters": {
            "type": "object",
            "properties": {
                "product_name": {
                    "type": "string",
                    "description": "The product name"
                },
                "serial_number": {
                    "type": "string",
                    "description": "The product serial number"
                },
                "quantity": {
                    "type": "number",
                    "description": "The quantity of the product to purchase"
                }
            },
            "additionalProperties": false,
            "required": ["serial_number", "quantity"]
        }
    }
]
```

SELECTED_TOOLS_PARAMS

```json
[
    {
        "name": "GET_STOCK_INFORMATION",
        "id": "call_Vuc2Ga8jP7vUksxG9C0fwpY8",
        "parameters": {
            "product_name": "vis",
            "serial_number": "V45645"
        }
    },
    {
        "name": "GET_STOCK_INFORMATION",
        "id": "call_nq3SCVUk0FjAHCeqOZGNXpC8",
        "parameters": {
            "product_name": "boulons",
            "serial_number": "b456"
        }
    }
]
```
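
Outside of WorkflowGen (for example, in a script that post-processes this output), SELECTED_TOOLS_PARAMS is plain JSON and can be parsed directly. A minimal Python sketch, assuming the structure shown above; the dispatch loop is hypothetical:

```python
import json

# SELECTED_TOOLS_PARAMS as produced in the example above.
selected_tools_params = """
[
    {
        "name": "GET_STOCK_INFORMATION",
        "id": "call_Vuc2Ga8jP7vUksxG9C0fwpY8",
        "parameters": {"product_name": "vis", "serial_number": "V45645"}
    },
    {
        "name": "GET_STOCK_INFORMATION",
        "id": "call_nq3SCVUk0FjAHCeqOZGNXpC8",
        "parameters": {"product_name": "boulons", "serial_number": "b456"}
    }
]
"""

tool_calls = json.loads(selected_tools_params)

# The related OUT parameters can be derived from the same array:
# SELECTED_TOOLS_COUNT is the array length, SELECTED_TOOL the joined names.
selected_tools_count = len(tool_calls)
selected_tool = ",".join(call["name"] for call in tool_calls)

for call in tool_calls:
    # Each entry carries the tool name, the call id, and its arguments.
    print(f"{call['id']}: {call['name']}({call['parameters']})")
```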

RESPONSE_FORMAT_JSON_SCHEMA

```json
{
  "name": "schema",
  "schema": {
    "$schema": "http://json-schema.org/draft-04/schema#",
    "type": "object",
    "properties": {
      "advice": {
        "type": "string"
      },
      "next_action": {
        "type": "string",
        "enum": ["expert", "sales", "support", "logistics", null]
      },
      "confidence_level": {
        "type": "number"
      }
    },
    "required": ["advice", "next_action", "confidence_level"]
  }
}
```

MESSAGE_JSON

```json
[
    {
        "role": "assistant/system/user",
        "content": "First message content"
    },
    {
        "role": "assistant/system/user",
        "content": "Second message content"
    }
]
```