Prerequisites for setting up AI providers for Loop Genie

Before you configure an AI provider such as OpenAI, IBM watsonx, Gemini, Claude, or Ollama with HCL DevOps Loop, ensure that the prerequisites are met.

Requirements for OpenAI integration

Ensure the following steps are completed before integrating OpenAI with DevOps Loop:
  1. Create an OpenAI account: Sign up on OpenAI’s platform and obtain API access.
  2. Generate an API key: Navigate to the OpenAI dashboard and create a secret API key for authentication.
  3. Install the OpenAI library: Install the OpenAI client package for Python or another programming language.

Requirements for Claude Desktop integration

Ensure the following steps are completed before integrating Claude Desktop with DevOps Loop:
  1. Create a Claude account: Sign up on Anthropic’s platform and obtain API access.
  2. Generate an API key: Navigate to the Claude dashboard and create a secret API key for authentication.
  3. Install the Claude library: Install the Claude client package for Python or another programming language.

Requirements for Gemini integration

Ensure the following steps are completed before integrating Gemini with DevOps Loop:
  1. Create a Gemini account: Sign up on Gemini’s platform and obtain API access.
  2. Generate an API key: Navigate to the Gemini dashboard and create a secret API key for authentication.
  3. Install the Gemini library: Install the Gemini client package for Python or another programming language.

Requirements for IBM watsonx integration

Ensure that you have the following details:
  1. API Key
  2. Project ID
  3. Endpoint URL

Requirements for Ollama integration

Ensure the following steps are completed before integrating Ollama with DevOps Loop:
  1. Install Ollama: Download the Ollama installer and run it.

    You must have read and understood the system requirements for each Ollama model. For more information, refer to https://ollama.com.

  2. Download a model: Download the required large language model (LLM) such as Llama 2 or Mistral.
  3. Set up Python environment: Create and activate a virtual environment and install dependencies.
    Note:
    If you are using the Ollama instance that is bundled with DevOps Loop, you must follow these steps:
    1. Enable the instance in the Helm chart by setting the value to true in the values.yaml file:
      llama:
        enabled: true

    2. Specify the required large language model (LLM) names in the Helm chart by providing the values in the values.yaml file:
      llama:
        ollama:
          models:
            pull:
              - <model name>
              - <model name>
            run:
              - <model name>
              - <model name>
      where the pull list specifies the models to download, and the run list specifies the models to run.