AI provider setup requirements

When you want to integrate an AI provider such as OpenAI, Ollama, or IBM watsonx.ai with HCL DevOps Loop, ensure that the following prerequisites are met.

Requirements for OpenAI integration

Ensure the following steps are completed before integrating OpenAI with DevOps Loop:
  1. Create an OpenAI account: Sign up on OpenAI’s platform and obtain API access.
  2. Generate an API key: Navigate to the OpenAI dashboard and create a secret API key for authentication.
  3. Install the OpenAI library: Install the OpenAI client library for your programming language, for example the openai package for Python; see the connectivity check sketched after this list.
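
To confirm that the key and library work before you configure DevOps Loop, you can run a short connectivity check such as the following Python sketch. It assumes the openai package (v1.x) is installed and that the key from step 2 is exported as the OPENAI_API_KEY environment variable; the model name is illustrative.

  # Connectivity check for OpenAI -- a sketch, not part of DevOps Loop itself.
  # Assumes: pip install openai (v1.x) and OPENAI_API_KEY set in the environment.
  import os

  from openai import OpenAI

  client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
  response = client.chat.completions.create(
      model="gpt-4o-mini",  # illustrative model name
      messages=[{"role": "user", "content": "Reply with OK if you can read this."}],
  )
  print(response.choices[0].message.content)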

Requirements for Ollama integration

Ensure the following steps are completed before integrating Ollama with DevOps Loop:
  1. Install Ollama: Download the Ollama installer for your operating system and run it.

    Make sure that you have read and understood the system requirements for each Ollama model. For more information, refer to https://ollama.com.

  2. Download a model: Pull the required large language model (LLM), such as Llama 2 or Mistral.
  3. Set up a Python environment: Create and activate a virtual environment, and install the required dependencies; a smoke test is sketched at the end of this section.
    Note:
    If you are using the Ollama instance that is bundled with DevOps Loop, you must follow these steps:
    1. Enable the bundled Ollama instance by setting llama.enabled to true in the values.yaml file of the Helm chart:
      llama:
        enabled: true

    2. Specify the names of the required large language models (LLMs) in the values.yaml file of the Helm chart:
      llama:
        ollama:
          models:
            pull:
              - <model name>
              - <model name>
            run:
              - <model name>
              - <model name>
      Here, pull lists the models to download, and run lists the models to run.
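
After a model is available, you can verify the setup with a quick smoke test such as the following Python sketch. It assumes Ollama is serving on its default local port (11434) and that the model name below matches one you downloaded; only the Python standard library is used, so it also runs inside the virtual environment from step 3 without extra dependencies.

  # Smoke test for a local Ollama instance -- a sketch, not part of DevOps Loop.
  # Assumes Ollama listens on the default port 11434; the model name is illustrative.
  import json
  import urllib.request

  payload = json.dumps({
      "model": "llama2",  # replace with a model you downloaded
      "prompt": "Reply with OK if you can read this.",
      "stream": False,    # return a single JSON object instead of a stream
  }).encode("utf-8")

  request = urllib.request.Request(
      "http://localhost:11434/api/generate",
      data=payload,
      headers={"Content-Type": "application/json"},
  )
  with urllib.request.urlopen(request) as reply:
      print(json.loads(reply.read())["response"])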

Requirements for IBM watsonx.ai integration

Ensure that you have the following details; a usage sketch follows the list:
  1. API key: The IBM Cloud API key that is used to authenticate with watsonx.ai.
  2. Project ID: The ID of the watsonx.ai project that hosts your models.
  3. Endpoint URL: The base URL of the watsonx.ai region that you use.
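
With these three details, a short generation call such as the following Python sketch can confirm access before you configure DevOps Loop. It assumes the ibm-watsonx-ai Python SDK is installed; the endpoint URL and model ID are illustrative, so substitute the values for your region and project.

  # Access check for IBM watsonx.ai -- a sketch, assuming pip install ibm-watsonx-ai.
  from ibm_watsonx_ai import Credentials
  from ibm_watsonx_ai.foundation_models import ModelInference

  credentials = Credentials(
      url="https://us-south.ml.cloud.ibm.com",  # illustrative endpoint URL
      api_key="<your API key>",
  )
  model = ModelInference(
      model_id="ibm/granite-13b-instruct-v2",  # illustrative model ID
      credentials=credentials,
      project_id="<your project ID>",
  )
  print(model.generate_text(prompt="Reply with OK if you can read this."))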