AI provider setup requirements
When you want to integrate an AI provider such as OpenAI, Ollama, or IBM watsonx.ai with HCL DevOps Loop, ensure that the following prerequisites are met.
Requirements for OpenAI integration
Ensure the following steps are completed before integrating OpenAI with DevOps Loop:
- Create an OpenAI account: Sign up on OpenAI’s platform and obtain API access.
- Generate an API key: Navigate to the OpenAI dashboard and create a secret API key for authentication.
- Install the OpenAI library: Install the OpenAI client library for your programming language, for example the openai Python package (a minimal usage sketch follows this list).
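The following is a minimal sketch of verifying the connection with the openai Python package (version 1.x), installed with pip install openai. The model name is an example only, not a DevOps Loop requirement; substitute any model your account can access.

    import os
    from openai import OpenAI

    # The secret API key generated in the OpenAI dashboard, read from an
    # environment variable so that it is not hard-coded.
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

    # Send a trivial request to confirm that the key and library work.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": "Say hello"}],
    )
    print(response.choices[0].message.content)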
Requirements for Ollama integration
Ensure the following steps are completed before integrating Ollama with DevOps Loop:
- Install Ollama: Download the Ollama installer and install it.
Ensure that you have read and understood the system requirements for each Ollama model. For more information, refer to https://ollama.com.
- Download a model: Download the required large language model (LLM) such as Llama 2 or Mistral (a usage sketch follows this section).
- Set up a Python environment: Create and activate a virtual environment, and install the required dependencies.
Note: If you are using the Ollama instance that is bundled with DevOps Loop, you must complete the following steps:
- Enable the instance in the Helm chart by setting the enabled value to true in the values.yaml file:

    llama:
      enabled: true

- Specify the required large language model (LLM) names in the Helm chart by providing the values in the values.yaml file:

    llama:
      ollama:
        models:
          pull:
            - <model name>
            - <model name>
          run:
            - <model name>
            - <model name>

Where the pull command is used to download a model, and the run command is used to run the model.
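The following is a minimal sketch of confirming that a locally installed Ollama instance responds, assuming the ollama Python package (pip install ollama), a running Ollama server on its default port, and a model that is already downloaded. The model name is an example; use the model you downloaded.

    import ollama

    # Ask the local model a trivial question to confirm the setup works.
    response = ollama.chat(
        model="mistral",  # example model name
        messages=[{"role": "user", "content": "Say hello"}],
    )
    print(response["message"]["content"])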
Requirements for IBM watsonx.ai integration
Ensure that you have the following details:
- API Key
- Project ID
- Endpoint URL
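The following is a minimal sketch showing where those three details fit, assuming the ibm-watsonx-ai Python SDK (pip install ibm-watsonx-ai). The endpoint URL and model ID below are placeholder examples; substitute your own values.

    import os
    from ibm_watsonx_ai import Credentials
    from ibm_watsonx_ai.foundation_models import ModelInference

    # Endpoint URL and API key identify your watsonx.ai service instance.
    credentials = Credentials(
        url="https://us-south.ml.cloud.ibm.com",  # your endpoint URL
        api_key=os.environ["WATSONX_API_KEY"],    # your API key
    )

    # The project ID scopes the request to your watsonx.ai project.
    model = ModelInference(
        model_id="ibm/granite-13b-chat-v2",       # example model ID
        credentials=credentials,
        project_id=os.environ["WATSONX_PROJECT_ID"],
    )

    # Send a trivial prompt to confirm the credentials and project are valid.
    print(model.generate_text(prompt="Say hello"))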
