Step 2: Configuring Agent Parameters

This step defines the foundational properties of your AI agent, including key metadata, LLM backend settings, safety controls, and memory preferences. Whether you are starting from scratch or customizing a template, you must define these core properties before proceeding.

This step is ideal for experienced users who prefer full control over the agent and its orchestration.

Configuring agent parameters enables you to define a parent agent that orchestrates and manages the overall workflow.

Note: Before creating an agent, review the best practices for agent creation.

Configuring parameters is divided into the following sub-sections for clarity: Configuring Information, Configuring LLM/SLM Provider and Profile, Configuring Guardrails, and Configuring Memory.

Configuring Information

In this section, define the agent’s basic metadata, including classification, name, and description, to establish its identity and organize it within the system.

Information (* indicates a mandatory field)

Classification*: Assign a category to the agent based on its domain, function, or business unit. This helps in organizing, searching, and applying governance policies to agents. Use the searchable dropdown; matching options appear as you type a few letters, allowing for quick and accurate selection.

Name*: A unique and descriptive name for your agent (for example, "Invoice Processor" or "IT Service Desk Agent").

Description*: A brief explanation of the agent's purpose and what it aims to achieve.
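
If it helps to picture these fields as data, the following minimal Python sketch shows how the three mandatory Information fields might be grouped together. The AgentInfo dataclass and its field names are hypothetical illustrations, not part of the product's actual API.

```python
from dataclasses import dataclass

@dataclass
class AgentInfo:
    """Hypothetical container for the mandatory Information fields."""
    classification: str  # Domain, function, or business unit used for search and governance.
    name: str            # Unique, descriptive agent name.
    description: str     # Brief statement of the agent's purpose.

# Example values mirroring the fields described above.
invoice_agent = AgentInfo(
    classification="Finance",
    name="Invoice Processor",
    description="Extracts, validates, and routes incoming invoices.",
)
```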

Configuring LLM/SLM Provider and Profile

This section allows you to configure the Large Language Model (LLM) or Small Language Model (SLM) that your agent will use for generating or processing responses.

You can select an existing provider and profile, or create a new profile to define credentials and model configurations.

Steps to configure your LLM/SLM:

  1. Open the LLM/SLM Configuration Panel

  2. Select Provider

    • From the Provider dropdown, choose the desired provider.

    • For example: AWSBedrock.

      You can select providers such as AWSBedrock, Azure, Google Vertex, Nvidia, Ollama, and OpenAI from the list.
    • The selected provider determines the available model options and credential types.

  3. Select Profile

    • Click Select next to the Profile field to open the profile selection panel.

    • The panel displays a list of existing profiles associated with the selected provider.

  4. Choose or Create Profile

    You can either:

    • Option A: Select an existing profile from the list.

      Each profile card displays:

      • Profile Name

      • Credentials: The linked authentication credentials.

      • Model: The model type associated with the provider (for example, AWS_BEDROCK).

        Note: The profile panel includes a search bar for quick lookup.
    • Option B: Create a new profile:

      • Click New profile.

      • In the Credential to connect with field, select an existing credential or click Add new credential. For details, see the Adding Credentials section of the Credentials page.

      • Fill in the required configuration parameters for your new profile.

      • Click Create to save the profile.

  5. Confirm Selection

    • Once you select or create a profile, the profile name appears in the Profile field.

    • The configuration will now apply to the current agent.

    Note: The profile you select here (LLM or SLM) will be set as the default profile for the agent in subsequent steps and processes.
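
The pieces involved in this step (provider, credential, model, and profile) can be summarized in a short sketch. The LLMProfile structure below is an assumption made for illustration only, not the product's API; it shows how a profile ties a provider to a credential and a model type, and how the selected profile becomes the agent's default.

```python
from dataclasses import dataclass

@dataclass
class LLMProfile:
    """Hypothetical profile: pairs a provider with a credential and a model."""
    name: str
    provider: str    # For example "AWSBedrock", "Azure", "Google Vertex", "Ollama", or "OpenAI".
    credential: str  # An existing credential, or one added via "Add new credential".
    model: str       # Model type exposed by the selected provider.

# Selecting or creating a profile effectively binds these values together;
# the chosen profile then becomes the agent's default LLM/SLM profile.
default_profile = LLMProfile(
    name="bedrock-default",
    provider="AWSBedrock",
    credential="aws-service-account",
    model="AWS_BEDROCK",
)
```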

Configuring Guardrails

Use this section to define rules and safety mechanisms that restrict or shape the agent’s behavior. These are no-code configurations that help enforce content safety, compliance, and ethical standards.

Add guardrails

Reset: Appears only after a guardrail is added. Click Reset to remove the applied guardrail and revert the configuration to its default state.

Generic: A text box where you can type safety or behavioral instructions to guide the LLM’s output.

Moderator

  Note: This will only function when all three fields (Domain, Scoring criteria, and Scoring steps) are provided.

  Domain: Use this field to define the specific area or context in which the moderation guardrail will apply. For example, if you are building an agent for customer support, the domain might be "customer interactions" or "product inquiries." This helps to scope the moderation rules to relevant content.

  Scoring criteria: Use this field to define the specific criteria or rules your moderator agent should follow to evaluate content. You can set conditions like keywords to watch for, sentiment score thresholds, or patterns that indicate problematic content.

  Scoring steps: Use this field to outline the sequence of actions or checks your moderator agent should perform based on the scoring criteria. This defines the moderation workflow or logic. For example: 1. Check for profanity. 2. Analyze sentiment.

PII Detector/Anonymizer: Toggle to enable automatic redaction of personally identifiable information.

Jailbreak detector: Toggle to prevent the agent from responding to attempts that try to bypass or manipulate its safety constraints (for example, prompt injection or malicious inputs).
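
As a rough mental model, the guardrail options above can be pictured as a small configuration object. The sketch below is hypothetical (the GuardrailConfig and ModeratorGuardrail names are assumptions, not the product's API); note how the is_active check mirrors the rule that Domain, Scoring criteria, and Scoring steps must all be provided for the Moderator guardrail to function.

```python
from dataclasses import dataclass, field

@dataclass
class ModeratorGuardrail:
    """Hypothetical moderator settings; all three fields must be filled."""
    domain: str = ""
    scoring_criteria: str = ""
    scoring_steps: str = ""

    def is_active(self) -> bool:
        # Mirrors the note above: the moderator only functions when
        # Domain, Scoring criteria, and Scoring steps are all provided.
        return all([self.domain, self.scoring_criteria, self.scoring_steps])

@dataclass
class GuardrailConfig:
    """Hypothetical no-code guardrail selections for an agent."""
    generic_instructions: str = ""
    moderator: ModeratorGuardrail = field(default_factory=ModeratorGuardrail)
    pii_detector: bool = False        # Redact personally identifiable information.
    jailbreak_detector: bool = False  # Block prompt-injection and bypass attempts.

guardrails = GuardrailConfig(
    generic_instructions="Refuse requests for legal or medical advice.",
    moderator=ModeratorGuardrail(
        domain="customer interactions",
        scoring_criteria="Flag profanity and sentiment scores below -0.5.",
        scoring_steps="1. Check for profanity. 2. Analyze sentiment.",
    ),
    pii_detector=True,
    jailbreak_detector=True,
)
assert guardrails.moderator.is_active()
```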

Configuring Memory (overrides subagents' memory)

Define memory behavior for your agent. This setting overrides memory configurations of any subagents.

Short Term (default): Retains memory only within the current session or context window. Note: This option is selected by default and cannot be cleared (the checkbox is disabled). You can additionally enable Long Term memory by selecting its checkbox.

Long Term: Enables memory persistence across sessions (if supported).
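
A minimal sketch of the resulting memory selection, assuming a simple two-flag representation (the MemoryConfig name and fields are hypothetical): Short Term memory is always on, and Long Term memory is an optional addition.

```python
from dataclasses import dataclass

@dataclass
class MemoryConfig:
    """Hypothetical memory selection; short-term memory is always enabled."""
    short_term: bool = True   # Fixed default: session / context-window memory.
    long_term: bool = False   # Optional: persistence across sessions (if supported).

# Enabling long-term memory adds persistence; short-term memory stays on.
memory = MemoryConfig(long_term=True)
```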