FAQ

This section answers frequently asked questions about deploying and managing HCL Link in a cloud-native environment.

Unica Custom Connectors

Q: Are “Custom Unica Connectors” supported only in Unica, or can they also be used in standalone Link?

A: They can technically be used in standalone Link, but they are designed for the Unica suite. Some features in Unica connectors are specific to Unica integration and may not function outside the Unica environment.

Q: What is the difference between Custom Unica Connectors and connectors created in standalone Link using CDW?

A: There is no technical difference. Both are built using the same Connector Development Workshop (CDW). However, Unica connectors are tailored for use with Unica and may include Unica-specific features.

Q: Is there a REST API, CLI, or Web UI tool to manage custom connectors (create, update, delete)?

A: No. Connector .zip archives must be managed manually at the file system level. You can use direct file access (for example, an NFS client) or Kubernetes commands such as kubectl cp to copy archives and kubectl exec to delete them inside a container.
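
For example, to upload a new connector archive and remove an old one (the pod name and volume mount path below are placeholders; check your deployment for the actual values):

kubectl cp my-connector.zip <pod-name>:<connector-volume-path>/my-connector.zip
kubectl exec <pod-name> -- rm <connector-volume-path>/old-connector.zip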

Q: Can the same Persistent Volume (PV) be shared between the server, rest, and executor pods so all see connector changes?

A: Yes, if the storage class for the PV supports ReadWriteMany (RWX) access.
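
For example, a minimal RWX claim might look like this (the claim name and storage class are placeholders for your environment):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: link-connectors-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: <rwx-storage-class>
  resources:
    requests:
      storage: 10Gi

Such a claim can then be referenced through customConnectors.persistence.data.existingClaim, described below.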

Q: Do Link or Unica automatically detect connector changes, or is a restart required?

A: A restart is required. Connector files are copied from the persistent volume into each container’s file system (under /opt/runtime/modules) when containers start.
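
For example, after updating the connector archives you can restart the affected workloads so that the new files are copied in at startup (the deployment names are placeholders; list yours with kubectl get deployments):

kubectl rollout restart deployment <server-deployment> <rest-deployment> <executor-deployment>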

Q: Where are Unica custom connector settings defined in the Link chart?

A: In the values.yaml file under the customConnectors section.
# Custom connectors parameters
customConnectors:
  enabled: false
  persistence:
    data:
      existingClaim: ""
      useDefaultStorageClass: true
      storageClass: ""
      size: "10Gi"
      accessMode: "ReadWriteOnce"

The corresponding template file is templates/custom-connectors-pvc.yaml (do not edit manually).
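
For example, the feature can be enabled at install or upgrade time with value overrides such as the following (the release name, chart reference, and claim name are placeholders):

helm upgrade --install my-link <link-chart> \
  --set customConnectors.enabled=true \
  --set customConnectors.persistence.data.existingClaim="link-connectors-pvc"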

Unica Partitioning

Q: Does Unica partitioning provide load balancing or fault tolerance?

A: No. The main purpose of Unica partitioning is to separate users and their data for regulatory compliance, typically by geographic region.

Q: Where are Unica partitioning settings defined in the Link chart?

A: In the values.yaml file under the unicaIntegration section.
unicaIntegration:
  ...
  initialPartitionName: "partition1"

You can also find references to initialPartitionName in server-configmap.yaml (do not edit manually). These values are used when generating campaign.properties and journey.properties for Unica.
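
For example, the initial partition name can be overridden at install time (the release name and chart reference are placeholders):

helm upgrade --install my-link <link-chart> \
  --set unicaIntegration.initialPartitionName="partition2"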

Multiple or External Kafka Brokers (Unica Journey)

Q: Is kafka-link used with Journey or Campaign?

A: Only with Journey.

Q: Does kafka-link both produce and consume messages from Kafka?

A: Yes, it does both.

Q: If multiple Kafka brokers are listed, do all need to be available when kafka-link starts?

A: No. kafka-link tries each broker in the list until a connection is established. Once connected, the client automatically receives metadata about the remaining brokers in the cluster.

Q: Where are external Kafka brokers defined in the Link chart?

A: In the values.yaml file under the kafkaLink.kafkaBroker section.
# Kafka-Link service parameters
kafkaLink:
  ...
  kafkaBroker:
    ...
    name: kafka-0.kafka
    port: 9092
    externalName: ""

You can specify multiple brokers using the externalName field as a semicolon-separated list:

broker1:9092;broker2:9093;broker3:9094
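
For example, a values fragment pointing kafka-link at three external brokers would look like this:

kafkaLink:
  kafkaBroker:
    externalName: "broker1:9092;broker2:9093;broker3:9094"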

Kubernetes Ingress

Q: When defining Ingress with path-based routing, why doesn’t a path like /client work?

A: Update the Ingress definition to use a regular expression so that all sub-paths under the base route are matched:
path: /client/(.*)
Also, in the client deployment, set the base route to match the Ingress path:
--set client.inbound.baseRoute="/client"
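
Putting both pieces together, a minimal sketch for the NGINX Ingress Controller might look like this (the host, service name, and port are placeholders; the use-regex annotation is NGINX-specific, and other controllers enable regular-expression paths differently):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: link-client
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: link.example.com
      http:
        paths:
          - path: /client/(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: <client-service>
                port:
                  number: 80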

Q: Is only the NGINX Ingress Controller supported for Link?

A: No. Any Kubernetes-compliant Ingress controller can be used. Link itself is unaware of the Ingress configuration.

For example, the SoFy platform uses Emissary-Ingress as its API gateway.

Red Hat OpenShift (RHOS)

Q: Can Link be accessed through RHOS routes?

A: Yes. You can define Route objects for the client, server, and rest services.

Q: Can routes be created automatically during chart installation?

A: Yes. Configuration sections are available in values.yaml under route.client, route.server, and route.rest.

The corresponding templates are client-route.yaml, server-route.yaml, and rest-route.yaml.

Alternatively, you can create and manage routes separately from the chart using the Red Hat OpenShift web console.
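
For example, a route for the client service can be created from the command line (the service name and hostname are placeholders):

oc expose service <client-service> --hostname=link.apps.example.com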

AWS EKS Fargate

Q: Can Link be deployed on AWS ECS Fargate?

A: No. Link supports only AWS EKS (Kubernetes-compliant) clusters, not ECS.

Q: What are the high-level steps for installing Link on EKS Fargate?

A:

  1. Create an EKS cluster with a Fargate profile (default name: fp-default).

  2. Create an EFS (Elastic File System) instance in the same VPC as the EKS cluster.

  3. Create a StorageClass using the EFS provisioner (see the sketch after this list).

  4. Create Persistent Volumes (PV) and Persistent Volume Claims (PVC) using the EFS storage.

    • One PV/PVC for MongoDB (if deployed via subchart).

    • One PV/PVC for Redis (if deployed via subchart).

    • One PV/PVC for “files” storage (Link server).

    • One PV/PVC for “data” storage shared by server, rest, and executor pods.

  5. Install the Link Helm chart and override the values to use the pre-provisioned PVCs.
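
As an illustration of steps 3 through 5 using statically provisioned EFS storage (the file system ID, resource names, and chart reference are placeholders for your environment):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: link-data-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0123456789abcdef0   # your EFS file system ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: link-data-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 10Gi

Repeat the PV/PVC pair for each store listed in step 4, then install the chart with overrides that reference the pre-provisioned claims, for example:

helm install my-link <link-chart> -f my-values.yaml

where my-values.yaml sets the relevant existingClaim fields (such as customConnectors.persistence.data.existingClaim, shown earlier).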