Nvidia Profile

Use the Nvidia Profile to configure and manage the connection between HCL UnO Agentic AI Builder and the high-performance AI Foundation Models and microservices available through the Nvidia platform. This profile enables your agents to leverage models optimized for speed and scale in enterprise environments.

Before you begin

  • You must have access to the Nvidia AI Foundation Models or the corresponding API service.

  • You must have a valid Nvidia Credential configured in the Credential Library to authenticate this connection. A quick way to verify the credential is shown in the sketch after this list.

  • Ensure that all mandatory fields (marked with *) are completed accurately.
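Before creating the profile, you can confirm that the Nvidia service accepts your credential. The following sketch is illustrative only: it assumes an API key stored in the NVIDIA_API_KEY environment variable and NVIDIA's hosted, OpenAI-compatible endpoint at https://integrate.api.nvidia.com/v1; a self-hosted deployment (for example, NIM microservices) uses its own base URL.

import os

import requests

# Illustrative check: list the models visible to this API key before you
# create the profile. The base URL below is an assumption; replace it with
# the endpoint of the Nvidia service you actually use.
API_BASE = "https://integrate.api.nvidia.com/v1"
API_KEY = os.environ["NVIDIA_API_KEY"]  # the key stored as the Nvidia Credential

response = requests.get(
    f"{API_BASE}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()
for model in response.json().get("data", []):
    print(model["id"])

If the request succeeds and returns model identifiers, the credential is valid and those identifiers can be used when you configure the model fields described below.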

Table 1. Mandatory fields

Option     Description
Name       A unique identifier for this configuration instance. This name is used to reference this specific Nvidia model setup in the Agentic AI Builder.
LLM Name   The authentication credential (previously created in the Credential Library) used to authorize the connection to the Nvidia service.

Table 2. Optional fields

Option        Description
Model Name    The technical name of the model to use (for example, llama2-70b, mixtral-8x7b).

Note: The Model Name field is where you specify the exact technical identifier of the Large Language Model (LLM) or Small Language Model (SLM) to use. If Discover Models is checked in the corresponding Credentials account, the available LLM models populate a dropdown menu for selection in the LLM Name field.

Max Tokens    The maximum number of tokens the model is allowed to generate in the response.
Temperature   The sampling temperature to use. Higher values make the model take more risks and generate more creative or varied output.
Top P         The nucleus sampling parameter. It limits token choices to the most probable set based on a cumulative probability threshold.
Seed          The random seed for reproducibility. Using the same seed with the same input and parameters produces the exact same output across runs (deterministic results).
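The optional fields map directly onto the sampling parameters of the underlying model request. As a hedged illustration, the sketch below shows how Max Tokens, Temperature, Top P, and Seed would appear in a request to an OpenAI-compatible chat completions endpoint; the endpoint, model identifier, and exact parameter support depend on the Nvidia service your profile connects to.

import os

import requests

API_BASE = "https://integrate.api.nvidia.com/v1"  # assumed OpenAI-compatible endpoint
API_KEY = os.environ["NVIDIA_API_KEY"]

payload = {
    "model": "mixtral-8x7b",  # Model Name: technical identifier of the LLM or SLM
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 512,        # Max Tokens: cap on tokens generated in the response
    "temperature": 0.2,       # Temperature: lower values give more predictable output
    "top_p": 0.7,             # Top P: nucleus sampling probability threshold
    "seed": 42,               # Seed: fixed seed for reproducible results
}

response = requests.post(
    f"{API_BASE}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])

Lower Temperature and Top P values, combined with a fixed Seed, give the most repeatable behavior; higher values produce more varied output at the cost of reproducibility.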