Google Vertex Profile

Use the Google Vertex Profile to configure and manage the connection between HCL UnO Agentic AI Builder and the Google Vertex AI platform. This allows your agents to securely access and use Google's foundation models, such as the Gemini family, for advanced generative and predictive tasks.

Before you begin

  • You must have an active Google Cloud Project with the Vertex AI API enabled.

  • You must have a valid Google Credential configured in the Credential Library to authenticate this connection.

  • Ensure that all mandatory fields (marked with *) are completed accurately.

Table 1. Mandatory fields

Option    Description
Name      A unique identifier for this configuration instance. This name is used to reference this specific Google Vertex model setup in the Agentic AI Builder.
LLM Name  The authentication credential (previously created in the Credential Library) used to authorize the connection to your Google Cloud Project.

Table 2. Optional fields

Option      Description
Model Name  The technical name of the model to use (for example, gemini-2.5-pro or gemini-2.5-flash).

Note: The Model Name field specifies the exact technical identifier of the Large Language Model (LLM) or Small Language Model (SLM) to use.

  • If Discover Models is checked in the corresponding Credentials account, the available models populate a dropdown menu for selection.

  • If it is unchecked, the model list is not populated and you must enter the model's exact technical name or identifier manually.
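As an illustration of how these profile fields translate into a model request, the sketch below builds a request body in the shape of the Vertex AI `generateContent` REST API, whose `generationConfig` object carries the same parameters this profile exposes (`maxOutputTokens`, `temperature`, `topP`, `topK`). The helper function and its default values are hypothetical, not part of UnO; they simply show the mapping under those assumptions.

```python
import json

# Hypothetical helper (not UnO code): map the profile's optional fields
# onto the generationConfig object used by the Vertex AI REST API.
def build_request(model_name="gemini-2.5-flash",
                  prompt="Hello",
                  max_output_tokens=1024,
                  temperature=0.7,
                  top_p=0.95,
                  top_k=40):
    return {
        "model": model_name,  # the exact technical identifier, as in Model Name
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "generationConfig": {
            "maxOutputTokens": max_output_tokens,
            "temperature": temperature,
            "topP": top_p,
            "topK": top_k,
        },
    }

payload = build_request()
print(json.dumps(payload, indent=2))
```

Leaving any of these keys out of `generationConfig` lets the model fall back to its own defaults, which mirrors leaving the corresponding optional field blank in the profile.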

Max Output Tokens  The maximum number of tokens the model is allowed to generate in the output response.
Temperature        The sampling temperature to use. Higher values make the model take more risks and produce more creative or varied output.
Top P              The cumulative probability threshold for nucleus sampling. Token choices are limited to the most probable set whose combined probability meets this threshold.
Top K              The number of highest-probability vocabulary tokens kept for top-k filtering. The model samples the next token from the K most likely candidates.
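To make the interplay of these three sampling parameters concrete, the sketch below implements a generic temperature/top-k/top-p decoding step in plain Python. This is an illustrative model of the standard technique, not Vertex AI's internal implementation; the token logits and function name are invented for the example.

```python
import math
import random

# Illustrative sketch of one next-token sampling step combining
# temperature scaling, Top K filtering, and Top P (nucleus) filtering.
def sample_next_token(logits, temperature=1.0, top_k=0, top_p=1.0, rng=None):
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    # 1. Temperature scaling: values below 1 sharpen the distribution,
    #    values above 1 flatten it (temperature must be > 0 here).
    scaled = {tok: l / temperature for tok, l in logits.items()}
    # 2. Softmax over the scaled logits.
    m = max(scaled.values())
    exps = {tok: math.exp(l - m) for tok, l in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # 3. Top K: keep only the K most probable tokens (0 disables).
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    if top_k > 0:
        ranked = ranked[:top_k]
    # 4. Top P: keep the smallest prefix of tokens whose cumulative
    #    probability reaches the threshold.
    kept, cum = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        cum += p
        if cum >= top_p:
            break
    # 5. Renormalize and sample from the surviving candidates.
    z = sum(p for _, p in kept)
    r, acc = rng.random() * z, 0.0
    for tok, p in kept:
        acc += p
        if acc >= r:
            return tok
    return kept[-1][0]

logits = {"the": 3.0, "a": 2.0, "cat": 1.0, "zebra": -2.0}
print(sample_next_token(logits, temperature=0.5, top_k=3, top_p=0.9))
```

Lowering Temperature, Top K, or Top P all narrow the candidate pool, so aggressive settings on all three at once can make output nearly deterministic.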