Enabling and disabling WCM content AI analysis
This document describes the configuration to enable and disable artificial intelligence (AI) analysis for Web Content Management (WCM) content in a Kubernetes deployment using the values.yaml file. It also explains how to configure a content AI provider to be used for AI analysis.
WCM Content AI configuration
Refer to the following sample snippet for configuring the Digital Experience (DX) WebEngine server to enable Content AI analysis:
configuration:
  # Configuration for webEngine
  webEngine:
    contentAI:
      className: com.ai.sample.CustomerAI
      enabled: true
      provider: OPEN_AI
security:
  # Security configuration for webEngine
  webEngine:
    webEngineContentAIProviderAPIKey: ""
    customWebEngineContentAISecret: custom-credentials-webengine-ai-secret
To disable Content AI analysis, set enabled to false, as in the following sample configuration:
configuration:
  # Configuration for webEngine
  webEngine:
    contentAI:
      enabled: false
      provider: ""
Content AI configuration parameters
In the provided configuration, the following parameters are used:
- className: Provide the custom AI class name. The default value is com.hcl.workplace.wcm.restv2.ai.ChatGPTAnalyzerService if AI analysis is enabled with the provider OPEN_AI.
- enabled: Set to true to enable content AI or to false to disable it. By default, this parameter is set to false.
- provider: If content AI is enabled, provide the content AI provider in this parameter. Valid values are OPEN_AI and CUSTOM.
- webEngineContentAIProviderAPIKey: Enter the API key for the AI provider. The AI provider provides an API key to access its API.
- customWebEngineContentAISecret: Provide the name of the secret that is used to set the AI API key.
To create a custom secret, run the following command:
kubectl create secret generic WEBENGINE_AI_CUSTOM_SECRET --from-literal=apiKey=API_KEY --namespace=NAME_SERVER
Replace API_KEY with the API key from your AI provider and NAME_SERVER with the namespace of your deployment. For example:
kubectl create secret generic custom-credentials-webengine-ai-secret --from-literal=apiKey=your-API-Key --namespace=dxns
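After creating the secret, you can optionally confirm that the key was stored correctly. Kubernetes stores secret values base64-encoded, so decoding the apiKey field should return exactly the literal you passed. A minimal sketch of that round trip, using the placeholder key from the example above:

```shell
# Placeholder API key from the example above (not a real key)
API_KEY="your-API-Key"

# kubectl base64-encodes secret values before storing them
ENCODED=$(printf '%s' "$API_KEY" | base64)
echo "encoded: $ENCODED"

# Decoding returns the original key unchanged
DECODED=$(printf '%s' "$ENCODED" | base64 --decode)
echo "decoded: $DECODED"
```

On a live cluster, the equivalent check is to read the field back from the secret itself: kubectl get secret custom-credentials-webengine-ai-secret --namespace=dxns -o jsonpath='{.data.apiKey}' | base64 --decode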
Note
If you use a custom secret instead of putting the API key directly in the values.yaml file, you must create the custom secret from the content AI provider's API key, refer to the secret name in the customWebEngineContentAISecret property, and leave webEngineContentAIProviderAPIKey blank. Conversely, if you set the API key directly in webEngineContentAIProviderAPIKey, leave customWebEngineContentAISecret blank.
Validation
After updating the values.yaml file, perform the following actions:
- If running the server for the first time, refer to Installing WebEngine.
- If upgrading previous configurations, refer to Upgrading the Helm deployment.
After enabling the Content AI analysis, refer to the steps in WCM REST V2 AI Analysis API to call the AI Analyzer APIs of the configured Content AI Provider.
OPEN_AI provider configuration
If you are using the bundled OPEN_AI provider, you can configure its behavior using additional properties in the Helm values.yaml file. These must be set in the property overrides for WCMConfigService.properties. For example:
configuration:
  webEngine:
    propertiesFilesOverrides:
      WCMConfigService.properties:
        OPENAI_SCHEME: "http"
The available properties are:
- OPENAI_MODEL: The currently supported AI model is gpt-4o. However, you can use a different model by overriding this property.
- OPENAI_MAX_TOKENS: Set a positive integer value for GPT-3 models like text-davinci-003. It specifies the maximum number of tokens that the model can output in its response and defaults to 256.
- OPENAI_TEMPERATURE: Set a float value from 0.0 to 1.0. This parameter controls the randomness and creativity of the generated text: higher values produce more diverse and random output, and lower values produce more focused and deterministic output.
- OPENAI_HOST: The host to connect to for AI calls; defaults to api.openai.com. Configuring this could allow you to connect to a different service that offers an OpenAI-compatible API, such as LiteLLM.
- OPENAI_SCHEME: The scheme that AI calls use; defaults to https.
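Putting these properties together, the following is a sketch of pointing the bundled OPEN_AI provider at an OpenAI-compatible endpoint such as a LiteLLM proxy. The host name here is an assumption for illustration; substitute the host, scheme, and model that your own service exposes:

```yaml
configuration:
  webEngine:
    propertiesFilesOverrides:
      WCMConfigService.properties:
        # Assumed proxy host for illustration; replace with your endpoint
        OPENAI_HOST: "litellm.example.com"
        OPENAI_SCHEME: "https"
        # Must be a model name that your OpenAI-compatible service serves
        OPENAI_MODEL: "gpt-4o"
```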