Integrations - Tech Preview

Disclaimer:

This release contains access to the Loop Genie feature in HCL DevOps Loop as a Tech Preview. The Tech Preview is intended for you to view the capabilities of Loop Genie offered by HCL DevOps Loop, and to provide your feedback to the product team. You are permitted to use the information only for evaluation purposes and not for use in a production environment. HCL provides the information without obligation of support and "as is" without warranty of any kind.

You can integrate AI providers such as OpenAI and Ollama with DevOps Loop. AI integration with the platform unlocks advanced data analysis capabilities, providing deeper insights into project details. After the integration is complete, you can query the system to assess project resources, track progress, and gain an understanding of the project status for better decision-making and optimized workflows.

When you integrate one of the AI providers with the platform and create a loop, Loop Genie is enabled so that you can send queries. Loop Genie currently offers a powerful search-and-summarize capability, with which you can explore issue collections and other data indexed through OpenSearch. The results are processed by using AI, laying the groundwork for future enhancements such as expanded prompts and data visualizations. The search-and-summarize function maintains conversational memory until a new query begins. Currently, the "I want to search" feature is available to set the context for your conversation when you send queries. The "Ask the question" feature is under development. Loop Genie is accessible across all service tiers.

Integrating OpenAI or Ollama into the platform involves the following key steps:
OpenAI:
  1. Create an OpenAI account: Sign up on OpenAI's platform and obtain API access.
  2. Generate an API key: Navigate to the OpenAI dashboard and create a secret API key for authentication.
  3. Install the OpenAI library: Use Python or another programming language to install the OpenAI package.
  4. Set up authentication: Configure your application to use the API key for secure access.

Ollama:
  1. Install Ollama: Download the installer and install it. You must read and understand the system requirements for each Ollama model. For more information, see https://ollama.com.
  2. Download a model: Download the required large language model (LLM), such as Llama 2 or Mistral.
  3. Set up the Python environment: Create and activate a virtual environment and install dependencies.
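The OpenAI steps of generating a key, installing the library, and configuring authentication can be sketched in Python. This is a minimal illustration, not the platform's internal implementation: it assumes the official openai package has been installed with pip, and it reads the secret key from an environment variable so that the key never appears in source code.

```python
import os


def read_api_key() -> str:
    """Return the OpenAI secret key from the environment, failing loudly if unset."""
    key = os.environ.get("OPENAI_API_KEY", "")
    if not key:
        raise RuntimeError("Set the OPENAI_API_KEY environment variable first.")
    return key


def make_client():
    """Build an authenticated OpenAI client (requires `pip install openai`)."""
    # Imported lazily so read_api_key() works even without the package installed.
    from openai import OpenAI
    return OpenAI(api_key=read_api_key())
```

Keeping the key in the environment (or a secrets manager) rather than in code keeps it out of version control and makes key rotation a configuration change instead of a code change.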
Note:
If you are using the Ollama instance that is bundled with DevOps Loop, you must follow these steps:
  1. Enable the instance in the Helm chart by setting the value to true in the values.yaml file:
    llama:
        enabled: true
            
  2. Specify the required large language model (LLM) names in the Helm chart by providing the values in the values.yaml file:
    llama:
        ollama:
            models:
                pull:
                    - <model name>
                    - <model name>
                run:
                    - <model name>
                    - <model name>
    Here, pull lists the models to download, and run lists the models to run.
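Once the configured models are pulled and running, an Ollama instance exposes a REST API, by default on port 11434. The following sketch, using only the Python standard library, shows how a client could send a prompt to Ollama's documented /api/generate endpoint; the host, port, and model name are assumptions for illustration and should match your deployment.

```python
import json
import urllib.request

# Default local Ollama endpoint; adjust host/port for a bundled or remote instance.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_generate_request(model: str, prompt: str) -> bytes:
    """Serialize a request body for Ollama's /api/generate endpoint.

    `stream` is set to False so the server returns one complete JSON reply
    instead of a stream of partial chunks.
    """
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()


def generate(model: str, prompt: str) -> str:
    """Send a prompt to a running Ollama instance and return the text reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_generate_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

For example, `generate("llama2", "Summarize the open issues")` would return the model's reply as a string, provided a model named llama2 has been pulled and is running on the instance.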