Unica Link Integration
This document explains how HCL Link integrates with the HCL Unica platform. It provides an overview of deployment patterns and key configuration sections in the HCL Link Helm chart.
Overview
HCL Link provides data transformation and connectivity services for the HCL Unica platform. When integrated, Unica Journey can trigger Link flows to interact with external systems. The Unica platform can also use custom connectors managed by Link.
Integration Methods
Integration is achieved in two main ways:
- For Unica Journey: The kafka-link microservice acts as a real-time message bridge between Unica Journey and Apache Kafka, allowing Journey to trigger Link flows and receive responses.
- For the Core Platform and Connectors: The unicaIntegration settings allow Link to share persistence and configuration with an existing Unica deployment, making Link’s connectors available to the Unica suite.
When Unica Journey triggers a Link flow, the message flow is as follows:
1. Unica Journey publishes a message to a Kafka topic.
2. kafka-link consumes the message.
3. kafka-link triggers the corresponding HCL Link flow (via the REST and Executor pods).
4. The Link flow processes the data and interacts with the external system.
5. (Optional) The Link flow sends a response back through kafka-link to Kafka for Unica Journey to consume.
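To exercise this path manually, you can publish a test message with the standard Kafka console producer. This is only a sketch: the topic name and JSON payload below are hypothetical placeholders, as the real topic and message schema are defined by your Unica Journey configuration.

# Publish an illustrative test message (topic and payload are placeholders).
echo '{"exampleField": "exampleValue"}' | kafka-console-producer.sh \
  --bootstrap-server 10.134.69.243:9092 \
  --topic JOURNEY_EXAMPLE_TOPIC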
Deployment Models
HCL Link can be deployed with Unica in two ways.
Link as a Subchart (Recommended)
- The Unica Helm chart includes HCL Link as a dependency.
- You can enable and configure Link directly in Unica’s values.yaml (for example, link.enabled=true and link.unicaIntegration.enabled=true); see the sketch after this list.
- This is the simplest method because it manages both components in a single Helm release.
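As a minimal sketch, assuming the Unica chart is available locally at ./unica (the chart path and the namespace unica are placeholders, not values from this guide):

# Install or upgrade Unica with the bundled Link subchart enabled.
helm upgrade --install hcl ./unica \
  --namespace unica \
  --set link.enabled=true \
  --set link.unicaIntegration.enabled=true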
Link as a Separate Chart
- Deploy the Unica chart first, then deploy the HCL Link chart as a separate release in the same namespace; see the sketch after this list.
- Use Unica-specific container images for Link.
- Configure the unicaIntegration section in the Link values.yaml to connect it with the existing Unica deployment.
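A minimal sketch of the two-release sequence, assuming local chart paths ./unica and ./hcl-link and the namespace unica (all placeholders), with the unicaIntegration values taken from the section below:

# 1. Deploy the Unica chart first.
helm install hcl ./unica --namespace unica

# 2. Deploy the HCL Link chart into the same namespace.
helm install link ./hcl-link --namespace unica \
  --set unicaIntegration.enabled=true \
  --set unicaIntegration.unicapvc=hcl-unica \
  --set unicaIntegration.releaseName=hcl \
  --set unicaIntegration.initialUnicaLogin.secret=hcl-unicaui-login-secret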
The examples in this guide use the Separate Charts model, as it requires additional configuration.
Official Documentation
For complete feature and capability details, refer to the official HCL Unica Link documentation:
HCL Unica Link 12.1.9 Documentation: https://help.hcl-software.com/unica/Link/en/12.1.9/index.html
This guide supplements the official documentation by focusing on Helm chart configurations required for successful integration.
kafka-link Service
The kafka-link service acts as a bridge between Unica Journey and Apache Kafka.
- Purpose: Consumes and produces messages between Unica Journey and Kafka. Required only when using Unica Journey.
- Configuration: Enable it using the kafkaLink section in the Link values.yaml.
kafkaLink:
  deploy: true
  image:
    tag: "1.3.0.1-unica-fix-20250310112959"
  kafkaBroker:
    externalName: "10.134.69.243:9092"
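After deploying, you can check that the bridge is running and watch it consume messages; the pod name below is a placeholder, since actual names depend on the release name:

# Find the kafka-link pod and tail its logs.
kubectl get pods -n unica | grep kafka-link
kubectl logs -n unica <kafka-link-pod-name> --tail=50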
unicaIntegration Section
This section of the values.yaml is used only when deploying Link as a separate chart. It tells the Link deployment how to find and connect to the existing Unica deployment.
- Function: Shares storage (for connectors) and configuration (for authentication) between the two separate Helm releases.
- Configuration: All fields in this section are critical for the integration to work.
# In your values.yaml or passed via --set
unicaIntegration:
  # Must be set to true to activate this integration logic
  enabled: true
  # The name of the existing Persistent Volume Claim (PVC)
  # used by your Unica deployment (e.g., hcl-unica).
  unicapvc: "hcl-unica"
  # The Helm release name of your Unica deployment (e.g., hcl).
  releaseName: "hcl"
  initialUnicaLogin:
    # The name of the Kubernetes secret that holds the
    # Unica admin credentials (e.g., asm_admin).
    secret: "hcl-unicaui-login-secret"
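Before deploying, it is worth verifying that the shared PVC and the login secret exist in the target namespace (the namespace unica is a placeholder):

# Both commands should return a resource rather than "NotFound".
kubectl get pvc hcl-unica -n unica
kubectl get secret hcl-unicaui-login-secret -n unica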
Managing Unica Connectors
This integration enables Unica to use HCL Link’s connectors.
Connector Types
- HCL Unica Connectors: Pre-built connectors developed by HCL for Unica.
- Custom Connectors: Developed using the Link Connector Development Workshop (CDW). These can be shared with Unica when integrated.
Managing Connector Files
Currently, there is no UI or API for managing connector files. You must manually upload, update, or remove connector .zip archives from the shared persistent volume using commands such as kubectl cp, or by mounting the volume externally (for example via NFS).
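A minimal sketch of a manual upload with kubectl cp; the pod name and the mount path /data/connectors are hypothetical placeholders, so substitute the pod and mount path from your own deployment:

# Copy a connector archive into the shared volume via a pod that mounts it.
kubectl cp ./my-connector.zip unica/<link-rest-pod-name>:/data/connectors/my-connector.zip

# Verify that the archive landed on the volume.
kubectl exec -n unica <link-rest-pod-name> -- ls -l /data/connectors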
Enabling Custom Connectors
Enable custom connector support using the customConnectors section in the Link values.yaml. The access mode of the persistent volume should be ReadWriteMany (RWX) if it is shared with Unica.
# In your values.yaml
customConnectors:
  enabled: true
  persistence:
    data:
      existingClaim: ""
      storageClass: ""
      size: "10Gi"
      # RWX is recommended if sharing this PV with Unica
      accessMode: "ReadWriteMany"
Restart Requirement
Changes to connector files on the persistent volume are not applied automatically. You must restart the server, rest, and executor pods for the updates to take effect, because connectors are loaded only when the containers start.
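A sketch using kubectl rollout restart; the deployment names are placeholders, since they depend on your chart and release name:

# Restart the pods that load connectors at startup.
kubectl rollout restart -n unica \
  deployment/<link-server> deployment/<link-rest> deployment/<link-executor>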