Step 2: Configuring Agent Parameters
This step allows you to define foundational properties of your AI agent, including key metadata, LLM backend settings, safety controls, and memory preferences. Whether you are starting from scratch or customizing a template, you must define the core properties of your AI agent.
This step is ideal for experienced users who prefer full control over the agent and its orchestration.
Configuring agent parameters enables you to define a parent agent that orchestrates and manages the overall workflow.
Before creating an agent, review the best practices for agent creation.
Configuring parameters is divided into the following sub-sections for clarity:
- Configuring Profile
- Configuring Large Language Model (LLM) Settings
- Configuring Guardrails
- Configuring Memory
Configuring Profile
In this section define the agent’s basic metadata, including classification, name, and description, to establish its identity and organize it within the system.
*Indicates a mandatory field.

| Option | Description |
| --- | --- |
| Classification* | Assign a category to the agent based on its domain, function, or business unit. This helps in organizing, searching, and applying governance policies to agents. Use the searchable dropdown. Matching options appear as you type a few letters, allowing for quick and accurate selection. |
| Name* | A unique and descriptive name for your agent (for example, "Invoice Processor," "IT Service Desk Agent"). |
| Description* | A brief explanation of the agent's purpose and what it aims to achieve. |
Configuring Large Language Model (LLM) Settings
This section outlines the procedure for selecting and configuring the Large Language Model (LLM) that your HCL UnO Agentic AI Builder solution will utilize. This involves choosing an LLM provider and then either applying existing configuration settings or defining new ones.
Steps to configure your LLM:
- Select LLM Type: Click the dropdown and select your desired LLM provider. Currently, only OpenAI and Google Vertex are available in the list.
- Configure LLM Settings: Upon selecting an LLM type, a new dialog box titled [Selected LLM Type] - Select settings (for example, "OpenAI - Select settings") appears. You have two options:
- Option A: Select Existing Settings: Click the drop-down and choose a pre-configured LLM setting from the available list. This applies the previously saved parameters for that LLM.
- Option B: Create New Settings
- Click the + Create new settings link.
- A new dialog appears; define the parameters for your new LLM configuration. These parameters vary based on the selected LLM type (for example, API keys, model names, and so on).
- Fill in all the required fields.
- Click the Save Profile button to save your newly defined LLM settings.
- Once settings are selected or created, your chosen LLM configuration will be applied to the current context.
Note: Ensure you have the necessary credentials configured on the Credentials page for the selected LLM type before attempting to create new settings that require authentication.
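To make the shape of an LLM configuration concrete, the following is a minimal sketch in Python. The field names (`provider`, `model`, `api_key_credential`, and so on) are illustrative assumptions, not the product's actual settings schema; consult the dialog for the real required fields.

```python
# Hypothetical example only: field names are illustrative and do not
# reflect the product's actual settings schema.
llm_settings = {
    "provider": "OpenAI",                 # or "Google Vertex"
    "model": "gpt-4o",                    # a model name offered by the provider
    "api_key_credential": "openai-prod",  # credential set up on the Credentials page
    "temperature": 0.2,
    "max_tokens": 1024,
}

def validate_llm_settings(settings: dict) -> list[str]:
    """Return the list of missing required fields (empty if valid)."""
    required = ("provider", "model", "api_key_credential")
    return [field for field in required if not settings.get(field)]

print(validate_llm_settings(llm_settings))  # [] -> all required fields present
```

The validation step mirrors the dialog's behavior of requiring all mandatory fields before Save Profile succeeds.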
Configuring Guardrails
Use this section to define rules and safety mechanisms that restrict or shape the agent’s behavior. These are no-code configurations that help enforce content safety, compliance, and ethical standards.
| Option | Description |
| --- | --- |
| Generic | A text box where you can type safety or behavioral instructions to guide the LLM’s output. |
| Moderator | |
| Domain | Use this field to define the specific area or context in which the moderation guardrail will apply. For example, if you are building an agent for customer support, the domain might be "customer interactions" or "product inquiries." This helps to scope the moderation rules to relevant content. |
| Scoring criteria | Use this field to define the specific criteria or rules your moderator agent should follow to evaluate content. You can set conditions like keywords to watch for, sentiment score thresholds, or patterns that indicate problematic content. |
| Scoring steps | Use this field to outline the sequence of actions or checks your moderator agent should perform based on the scoring criteria. This defines the moderation workflow or logic. For example: 1. Check for profanity. 2. Analyze sentiment. |
| PII Detector/Anonymizer | Toggle to enable automatic redaction of personally identifiable information (PII). |
| Jailbreak detector | Toggle to prevent the agent from responding to attempts that try to bypass or manipulate its safety constraints (for example, prompt injection or malicious inputs). |
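The moderation scoring steps and PII anonymization above can be sketched as plain Python. This is a generic illustration of the underlying techniques (keyword matching and regex redaction), not the product's implementation; the word lists and patterns are placeholder assumptions.

```python
import re

# Illustrative lists/patterns only -- stand-ins for real scoring criteria.
PROFANITY = {"damn", "heck"}
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def score_message(text: str) -> dict:
    """Scoring steps: 1) check for profanity, 2) crude sentiment check."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return {
        "profanity": bool(words & PROFANITY),
        "negative_sentiment": bool(words & {"terrible", "awful", "hate"}),
    }

def anonymize(text: str) -> str:
    """Replace each PII match with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(anonymize("Contact me at jane@example.com"))
# -> Contact me at [EMAIL REDACTED]
```

A real moderator would typically use an LLM or a dedicated classifier rather than keyword sets, but the workflow (score, then act) follows the same shape.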
Configuring Memory (will override subagents')
Define memory behavior for your agent. This setting overrides the memory configurations of any subagents.
| Option | Description |
| --- | --- |
| Short Term (default) | Retains memory only within the current session or context window. Note: This option is selected by default and cannot be changed (the checkbox is disabled). Users can additionally enable long-term memory by selecting the Long Term check box. |
| Long Term | Enables memory persistence across sessions (if supported). |
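The distinction between the two memory scopes can be illustrated with a minimal Python sketch. This is a conceptual model only, assuming an in-memory dict as a stand-in for real persistent storage; it does not reflect the product's memory implementation.

```python
class AgentMemory:
    """Conceptual sketch: short-term entries live only for one session;
    long-term entries persist across sessions (a class-level dict stands
    in for real persistent storage)."""

    _long_term_store = {}  # shared across instances, i.e. across "sessions"

    def __init__(self, long_term_enabled=False):
        self.long_term_enabled = long_term_enabled
        self.short_term = {}  # always on (the default, non-disableable scope)

    def remember(self, key, value):
        self.short_term[key] = value
        if self.long_term_enabled:
            AgentMemory._long_term_store[key] = value

    def recall(self, key):
        # Prefer the current session's memory, then fall back to long-term.
        return self.short_term.get(key) or AgentMemory._long_term_store.get(key)

# Session 1: long-term memory enabled
session1 = AgentMemory(long_term_enabled=True)
session1.remember("user_name", "Priya")

# Session 2: a fresh session still recalls the long-term fact
session2 = AgentMemory()
print(session2.recall("user_name"))  # Priya
```

This mirrors the table above: short-term memory is always present, while long-term memory (when enabled) makes facts survive across sessions.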