Ollama Credentials

Use Ollama Credentials to establish a secure connection between HCL UnO Agentic AI Builder solutions and a locally or remotely hosted Ollama server. This allows your AI agents to interact with community-driven LLMs or Small Language Models (SLMs) managed by the Ollama framework, offering flexibility for specialized or resource-constrained deployments.

Before you begin

Ensure the following prerequisites are met:

  • You must have an Ollama server instance running and accessible from your network.

  • The specific LLM/SLM models you intend to use (for example, llama2, mixtral) must already be pulled and available on the Ollama server.

  • Have the server endpoint URL at hand, along with the API token if the server requires authentication; these values are entered when you configure the credential.
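Before configuring the credential, you can confirm that the server is reachable and that a model is pulled by calling Ollama's GET /api/tags endpoint, which lists the locally available models (by default at http://localhost:11434/api/tags). The sketch below parses that response; the helper name is illustrative, not part of the product.

```python
import json

def model_is_pulled(tags_json: str, model_name: str) -> bool:
    """Check whether a model appears in the JSON returned by GET /api/tags.

    Hypothetical helper: /api/tags returns {"models": [{"name": ...}, ...]},
    where names may carry a tag suffix such as ":latest".
    """
    tags = json.loads(tags_json)
    names = [m.get("name", "") for m in tags.get("models", [])]
    return any(n == model_name or n.split(":")[0] == model_name for n in names)

# Abbreviated example of a response from a local Ollama instance:
sample = '{"models": [{"name": "llama2:latest"}, {"name": "mixtral:8x7b"}]}'
print(model_is_pulled(sample, "llama2"))   # True
print(model_is_pulled(sample, "phi3"))     # False
```

If the intended model is missing, pull it on the server (for example, `ollama pull llama2`) before creating the credential.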

Table 1. Mandatory fields

  • Credential Name: A unique identifier for this Ollama credential instance (Example: Local_Ollama_Server).

  • Base URL: The base URL of the Ollama API server; the network address where your self-hosted Ollama instance can be reached.
Table 2. Optional fields

  • Api Key: The API key or bearer token used to authenticate with the Ollama server or proxy, if authentication is required.

  • Header Name: The name of the HTTP header that carries the API key (Example: Authorization, X-API-KEY).

  • Prefix: An optional prefix prepended to the API key (Example: Bearer). Leave empty if the key or token is sent alone in the header.

  • Ssl Verify: A checkbox that toggles SSL verification. If checked, the system verifies the SSL certificate of the Ollama server endpoint.

  • Discover Models Param: A checkbox that, when checked, enables the system to automatically fetch the available models from the Ollama server and display them in the LLM name dropdown.
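To see how the Api Key, Header Name, and Prefix fields combine into the HTTP header sent to the server, consider the sketch below. The function name and field mapping are illustrative assumptions mirroring the table, not the product's internal API.

```python
def build_auth_header(api_key: str, header_name: str = "Authorization",
                      prefix: str = "") -> dict:
    """Build the auth header from the Api Key, Header Name, and Prefix fields.

    Illustrative helper: returns an empty dict when no key is configured,
    otherwise a single header whose value is the key, optionally prefixed.
    """
    if not api_key:
        return {}  # no authentication configured
    value = f"{prefix} {api_key}" if prefix else api_key
    return {header_name: value}

# Prefix "Bearer" yields the common Authorization scheme:
print(build_auth_header("abc123", "Authorization", "Bearer"))
# → {'Authorization': 'Bearer abc123'}

# Custom header with the token sent alone (Prefix left empty):
print(build_auth_header("abc123", "X-API-KEY"))
# → {'X-API-KEY': 'abc123'}
```

The Ssl Verify checkbox corresponds to whether the HTTPS certificate of the endpoint is validated on each request; leave it checked unless the server uses a self-signed certificate you explicitly trust.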