OpenAI Profile
Use the OpenAI Profile to configure the connection between HCL UnO Agentic AI Builder and the OpenAI API. This provides your agents with secure access to state-of-the-art foundation models like GPT-4, enabling advanced reasoning, complex content generation, and structured data tasks.
Before you begin
- You must have an active OpenAI account and a valid API key.
- You must have a corresponding OpenAI Credential configured in the Credential Library (containing the API key).
- Ensure that all mandatory fields (marked with *) are completed accurately.
| Option | Description |
|---|---|
| Name | A unique, human-readable identifier for this specific OpenAI configuration instance (for example, GPT4_Production_Chat). |
| LLM Name | The specific authentication credential (API key) previously created in the Credential Library that authorizes the connection to the OpenAI API. |
| Option | Description |
|---|---|
| Model Name | The technical name of the model to use (for example, gpt-4o, gpt-3.5-turbo). Note: The Model Name field is where you specify the exact technical identifier of the Large Language Model (LLM) or Small Language Model (SLM) to be used. If Discover Models is checked in the corresponding Credentials account, the available models populate a dropdown menu for selection. If it is unchecked, the list is not populated, and you must manually enter the model's exact technical name or identifier into the field. |
| Max Tokens | The maximum number of tokens the model is allowed to generate in the output response. |
| Temperature | The sampling temperature to use. Higher values mean the model will take more risks and generate more creative or varied output. |
| Presence Penalty | Penalizes the likelihood of a token appearing in the response based on whether that token has appeared at least once so far. This encourages the model to introduce new concepts. |
| Frequency Penalty | Penalizes the likelihood of a token appearing based on how frequently it has already occurred. This helps reduce repetition of the same words and promotes vocabulary diversity. |
| Request Timeout | The maximum time, in seconds, that the system waits for a response from the OpenAI API before cancelling the request and raising an error or retrying. |
| Top P | Nucleus sampling parameter. It limits the token choices to the most probable set based on a cumulative probability threshold. |
| Max Retries | The maximum number of times the system will automatically retry a request in case of transient errors (like HTTP 429 Rate Limits or 500/503 server errors). |
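Taken together, these profile fields correspond to the parameters of an OpenAI Chat Completions request. The sketch below shows one plausible mapping; the `build_request` helper, the profile dictionary keys, and the sample values (gpt-4o, 0.7, and so on) are illustrative assumptions, not values taken from the product.

```python
# Minimal sketch: mapping OpenAI Profile fields to a Chat Completions
# request payload. The profile keys and values are illustrative only.

def build_request(profile: dict, messages: list) -> dict:
    """Translate profile settings into request-body parameters."""
    return {
        "model": profile["model_name"],                    # Model Name
        "messages": messages,
        "max_tokens": profile["max_tokens"],               # Max Tokens
        "temperature": profile["temperature"],             # Temperature
        "presence_penalty": profile["presence_penalty"],   # Presence Penalty
        "frequency_penalty": profile["frequency_penalty"], # Frequency Penalty
        "top_p": profile["top_p"],                         # Top P
    }

# Hypothetical profile values for illustration.
profile = {
    "model_name": "gpt-4o",
    "max_tokens": 1024,
    "temperature": 0.7,
    "presence_penalty": 0.0,
    "frequency_penalty": 0.5,
    "top_p": 1.0,
    "request_timeout": 30,  # seconds; a transport-level setting, not a body field
    "max_retries": 3,       # retry budget for transient 429/500/503 errors
}

request = build_request(profile, [{"role": "user", "content": "Hello"}])
```

Note that Request Timeout and Max Retries govern the HTTP client rather than the request body; in the official `openai` Python SDK, for example, they correspond to the `timeout` and `max_retries` arguments of the client constructor rather than to fields of the completion call.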