Step 2: Configuring Agent Parameters

In this step, you define the foundational properties of your AI agent, including key metadata, LLM backend settings, safety controls, and memory preferences. Whether you are starting from scratch or customizing a template, these core properties must be defined.

This step is ideal for experienced users who prefer full control over the agent and its orchestration.

Configuring agent parameters enables you to define a parent agent that orchestrates and manages the overall workflow.

Note:

Before creating an agent, review the agent creation best practices.

Configuring Parameters is divided into the following sub-sections for clarity:

Configuring Information

In this section, define the agent’s basic metadata, including classification, name, and description, to establish its identity and organize it within the system.

Option Description
Information
*Indicates mandatory field
Classification* Assign a category to the agent based on its domain, function, or business unit. This helps in organizing, searching, and applying governance policies to agents. Use the searchable dropdown. Matching options appear as you type a few letters, allowing for quick and accurate selection.
Name* A unique and descriptive name for your agent (for example, "Invoice Processor," "IT Service Desk Agent").
Description* A brief explanation of the agent's purpose and what it aims to achieve.
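The mandatory fields above can be thought of as a simple validation rule: an agent record is only accepted once classification, name, and description are all present. The sketch below is illustrative only; the field names and validation logic are assumptions, not the product's actual schema.

```python
# Illustrative sketch of the Information fields and their mandatory-field
# check. Field names are assumptions, not the product's actual schema.
MANDATORY_FIELDS = ("classification", "name", "description")

def validate_agent_info(info):
    """Reject an agent record that is missing any mandatory field."""
    missing = [field for field in MANDATORY_FIELDS if not info.get(field)]
    if missing:
        raise ValueError(f"Missing mandatory fields: {', '.join(missing)}")
    return info
```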

Configuring LLM/SLM Provider and Profile

This section allows you to configure the Large Language Model (LLM) or Small Language Model (SLM) that your agent will use for generating or processing responses.

You can select an existing provider and profile, or create a new profile to define credentials and model configurations.

Steps to configure your LLM/SLM:

  1. Open the LLM/SLM Configuration Panel

  2. Select Provider

    • From the Provider dropdown, choose the desired provider.

    • For example, AWSBedrock. You can select providers such as AWSBedrock, Azure, Google Vertex, Nvidia, Ollama, and OpenAI from the list.
    • The selected provider determines the available model options and credential types.

  3. Select Profile

    • Click Select next to the Profile field to open the profile selection panel.

    • The panel displays a list of existing profiles associated with the selected provider.

  4. Choose or Create Profile

    You can either:

    • Option A: Select an existing profile from the list.

      Each profile card displays:

      • Profile Name

      • Credentials: The linked authentication credentials.

      • Model: The model type associated with the provider (for example, AWS_BEDROCK).

        Note: The profile panel includes a search bar for quick lookup.
    • Option B: Create a new profile:

      • Click New profile

      • In the Credential to connect with field, select an existing credential or click Add new credential. For details, see the Adding Credentials section of the Credentials page.

      • Fill in the required configuration parameters for your new profile.

      • Click Create to save the profile.

  5. Confirm Selection

    • Once you select or create a profile, the profile name appears in the Profile field.

    • The configuration will now apply to the current agent.

    Note: The profile you select here (LLM or SLM) will be set as the default profile for the agent in subsequent steps and processes.
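Conceptually, the provider and profile you select resolve to a single configuration record that becomes the agent's default. The sketch below illustrates that idea only; the field names (`provider`, `profile`, `credential`, `model`) are assumptions, not the product's actual schema.

```python
# Illustrative sketch: how a provider/profile selection might be recorded.
# All field names here are assumptions for illustration.
agent_llm_config = {
    "provider": "AWSBedrock",            # determines available models and credential types
    "profile": {
        "name": "prod-bedrock-profile",  # hypothetical profile name
        "credential": "aws-bedrock-cred",  # the linked authentication credential
        "model": "AWS_BEDROCK",          # model type associated with the provider
    },
}

def default_profile(config):
    """The selected profile becomes the agent's default in later steps."""
    return config["profile"]["name"]
```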

Configuring Guardrails

Use this section to define rules and safety mechanisms that restrict or shape the agent's behavior. These no-code configurations enforce content safety, compliance, and ethical standards by acting as a secure proxy between the user and the LLM.

Option Description
Add guardrails
Reset Appears only after a guardrail is added. Click Reset to remove the applied guardrail and revert the configuration to its default state.
Generic A text box where you can type safety or behavioral instructions to guide the LLM’s output.
Moderator Evaluates content based on a defined Domain, Scoring Criteria, and Scoring Steps. All three fields must be configured for the moderator to function.

Domain Use this field to define the specific area or context in which the moderation guardrail will apply. For example, if you are building an agent for customer support, the domain might be "customer interactions" or "product inquiries." This helps to scope the moderation rules to relevant content.
Scoring criteria Use this field to define the specific criteria or rules your moderator agent should follow to evaluate content. You can set conditions like keywords to watch for, sentiment score thresholds, or patterns that indicate problematic content.

Scoring steps Use this field to outline the sequence of actions or checks your moderator agent should perform based on the scoring criteria. This defines the moderation workflow or logic. For example: 1. Check for profanity. 2. Analyze sentiment.
Other Options

PII Detector/Anonymizer Toggle to enable automatic redaction of personally identifiable information (PII) before it reaches the model. For more details, see the PII Detector/Anonymizer section.

Jailbreak detector Toggle to prevent the agent from responding to attempts that try to bypass or manipulate its safety constraints (for example, prompt injection or malicious inputs). For more details, see the Jailbreak detector section.
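The Moderator's three required fields work together: the domain scopes what is moderated, the criteria define what to look for, and the steps define the order of checks. A minimal sketch of that flow is shown below; the keyword-based scoring is purely illustrative, not the product's actual moderation logic.

```python
# Minimal sketch of a moderator guardrail. All three inputs (domain,
# scoring criteria, scoring steps) must be set, mirroring the UI rule.
# The keyword scoring below is an illustrative stand-in.
def moderate(text, domain, criteria, steps):
    if not (domain and criteria and steps):
        raise ValueError("Domain, scoring criteria, and scoring steps are all required")
    score = 0
    for keyword in criteria.get("flag_keywords", []):
        if keyword in text.lower():
            score += 1  # each matched criterion raises the score
    return {"domain": domain, "score": score, "flagged": score > 0}
```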

PII Detector/Anonymizer

The PII Detector helps identify and redact sensitive information before it is processed by the Agent or passed to Sub‑Agents. This ensures data isolation and prevents accidental handling of regulated or private user data.

The system detects personally identifiable information using two techniques:

  • Regex‑based detection (pattern matching) for structured identifiers (for example, IBAN, Credit Card, Crypto Wallet, SSN, URL, Email, MAC Address, IP Address, Fiscal Code, Passport, Identity Card, Driver License, PAN, Aadhaar, Vehicle Reg, Voter ID, Phone Number, and so on).
  • NLP‑based detection for contextual entities (for example, CARDINAL, DATE, EVENT, FAC, GPE, LANGUAGE, LAW, LOC, MONEY, NORP, ORDINAL, ORG, PERCENT, PERSON, PRODUCT, QUANTITY, TIME, WORK_OF_ART, and so on).

The detector supports sanitization and redaction across:

  • Direct Text Inputs: Redacts sensitive content from user chat or instructions.

  • Documents: Extracts and removes sensitive data during document parsing.
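The regex-based pass described above can be sketched as a label-and-substitute loop over known identifier patterns. The patterns below are deliberately simplified for illustration; a production detector uses far more robust rules and combines them with NLP-based entity recognition.

```python
import re

# Simplified sketch of regex-based PII redaction. Patterns are
# illustrative only; real detectors cover many more identifier types.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "IP_ADDRESS": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def redact_pii(text):
    """Replace each detected identifier with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text
```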

Note:
  • Language Requirement: The detector is currently optimized for English. User prompts and document content must be in English to be correctly classified and neutralized.

Jailbreak and Prompt Injection Detector

The Jailbreak/Prompt Injection Detector provides a critical security boundary by scanning all incoming data before it reaches the sub-agents.

The system employs a two-step classification and reaction process to ensure Data Isolation:

  1. Detection and Scanning
    • Direct Text Input: Real-time scanning of all user-entered prompts.

    • Document Analysis: Scans extracted text from uploaded files to identify hidden instructions or "indirect" injection attempts buried within documents.

  2. System Response (The Redaction Flow)

    When a threat or policy‑violating input is detected, the system automatically triggers the Redaction Flow to ensure safe execution.

    • If a threat is detected, the original malicious input is redacted and replaced with a default system message: "I can’t help with that request."

    • The overall agent flow is terminated to prevent the execution of malicious or unsafe instructions.

    • A new conversation context is required to continue using the agent safely.
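The detect-then-redact flow above can be sketched as a single screening function that either passes the input through or replaces it with the default system message and terminates the flow. The keyword heuristic below is a placeholder for illustration; the real detector uses trained classification, not a marker list.

```python
# Sketch of the two-step detect-and-redact flow. The marker list is an
# illustrative stand-in for a real jailbreak/prompt-injection classifier.
REDACTED_MESSAGE = "I can’t help with that request."

SUSPICIOUS_MARKERS = ["ignore previous instructions", "disregard your rules"]

def screen_input(text):
    """Return (safe_text, terminated). On a detected threat, the original
    input is redacted and the agent flow is terminated, so downstream
    sub-agents never see the malicious clear text."""
    if any(marker in text.lower() for marker in SUSPICIOUS_MARKERS):
        return REDACTED_MESSAGE, True
    return text, False
```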

Data Isolation and Security

To maintain a secure processing environment, the original 'clear-text' of a malicious prompt is never exposed to the sub-agent. The guardrail serves as a safety filter, sanitizing the input and passing only the redacted message to the downstream workflow.

Note:
  • Language Requirement: The detector is currently optimized for English. User prompts and document content must be in English to be correctly classified and neutralized.

  • Architecture Prerequisite: While this toggle can be enabled for any agent, a multi-agent or sub-agent architecture is required to observe the full "Data Isolation" effect, where individual tasks are partitioned for maximum security.

Configuring Memory (will override sub-agents' memory)

Define memory behavior for your agent. This setting overrides memory configurations of any subagents.

Option Description
Short Term (default) Retains memory only within the current session or context window.
Note: This option is selected by default and cannot be changed (the check box is disabled). You can additionally enable Long Term memory by selecting its check box.
Long Term Allows the agent to retain and retrieve information across different sessions.
Note: Records in long-term memory are retained for 30 days. This retention period automatically refreshes each time a record is accessed, so frequently used data remains available.
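The sliding 30-day retention policy can be sketched as a record whose expiry is pushed forward on every access. The class below is purely illustrative; it is not the product's storage model.

```python
from datetime import datetime, timedelta

# Illustrative sketch of the sliding 30-day retention policy: each access
# refreshes the record's expiry, so frequently used data stays available.
RETENTION = timedelta(days=30)

class LongTermRecord:
    def __init__(self, value, now):
        self.value = value
        self.expires_at = now + RETENTION

    def access(self, now):
        if now >= self.expires_at:
            return None                      # expired: no longer retrievable
        self.expires_at = now + RETENTION    # access refreshes retention
        return self.value
```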

Configuring Execution Limits

The Execution Limits section allows you to control how long an agent is allowed to run before it automatically stops. This prevents the agent from running indefinitely when dealing with complex or recursive workflows.

The agent automatically stops once the iteration limit is reached, even if additional steps are pending. You can increase or decrease this value based on the complexity of your agent’s logic.

Option Description
Agent Max Iterations Define the maximum number of times the agent can cycle through the graph (reasoning, tool use, and observation) before it must stop and provide its best possible answer.

Note: The default number of iterations is 25.
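The iteration cap bounds the agent's reason/act/observe loop: even a task that never converges stops once the limit is reached. The sketch below illustrates that behavior with a stand-in step function; it is not the product's actual execution engine.

```python
# Sketch of an iteration-capped agent loop. The default of 25 mirrors
# the documented default; step() is a stand-in for one reasoning,
# tool-use, and observation cycle of the real agent graph.
def run_agent(step, max_iterations=25):
    """Run until the task finishes or the cap is hit, then return the
    best-effort state and the number of iterations used."""
    state = {"done": False}
    for i in range(max_iterations):
        state = step(state)
        if state.get("done"):
            return state, i + 1
    return state, max_iterations  # cap reached; stop even if steps remain

def never_finishing_step(state):
    # A task that would loop forever without the cap.
    return {"done": False}
```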