OpenTelemetry Setup

The OpenTelemetry Collector receives telemetry data from Unica services (via JMX exporters or application agents) and forwards it to Prometheus, Tempo, or Loki, depending on how its pipelines are configured.
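
Conceptually, that routing lives in the collector's service.pipelines section, where each signal type is wired from its receivers to one or more exporters. The sketch below is for orientation only; the complete sample manifest appears in the installation procedure later in this section.

    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [otlphttp]               # Tempo
        metrics:
          receivers: [otlp]
          exporters: [prometheusremotewrite]  # Prometheus
        logs:
          receivers: [otlp]
          exporters: [otlphttp/logs]          # Loki

The steps for OpenTelemetry setup are as follows: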

Procedure

  1. Deploy the OpenTelemetry Collector using Helm or manifests (a sample Helm command follows this procedure).
  2. Configure the collector to receive OTLP data and export metrics to Prometheus.
  3. Validate that Unica’s exporters or agents are sending telemetry data.
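
If you choose the Helm route in step 1, the upstream opentelemetry-collector chart can be used. The commands below are a sketch; the release name, namespace, and chart values are assumptions to adapt to your environment, and some chart versions also require the image repository to be set explicitly.

    helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
    helm repo update
    # 'mode' selects how the collector runs: deployment, daemonset, or statefulset
    helm install otel-collector open-telemetry/opentelemetry-collector \
      --namespace <monitoring_namespace> --create-namespace \
      --set mode=deployment \
      --set image.repository=otel/opentelemetry-collector

The manifest-based route is covered step by step in the next procedure.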

Installing OpenTelemetry Collector

Procedure

  1. Create a namespace for monitoring:
    kubectl create namespace <monitoring_namespace>
  2. Apply the OpenTelemetry Collector configuration (verification commands follow this procedure):
    kubectl apply -f otel.yaml -n <monitoring_namespace>
    A sample otel.yaml file is shown below:
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: otel-collector-conf
      labels:
        app: opentelemetry
        component: otel-collector-conf
    data:
      otel-collector-config: |
        receivers:
          otlp:
            protocols:
              grpc:
                endpoint: 0.0.0.0:4317
                #max_recv_msg_size_mib: 887772160
                # Default is 4 MiB; increase as needed.
              http:
                endpoint: 0.0.0.0:4318
          jaeger:
            protocols:
              grpc:
                endpoint: ${env:MY_POD_IP}:14250
              thrift_http:
                endpoint: ${env:MY_POD_IP}:14268
              thrift_compact:
                endpoint: ${env:MY_POD_IP}:6831            
                
          prometheus:
            config:
              scrape_configs:
              - job_name: 'otel-node-exporter'
                scrape_interval: 10s
                static_configs:
                - targets: ['loki-prometheus-node-exporter.<monitoring_namespace>.svc.cluster.local:9100']
              - job_name: 'otel-kube-state-metrics'
                scrape_interval: 10s
                static_configs:
                - targets: ['loki-kube-state-metrics.<monitoring_namespace>.svc.cluster.local:8080']
              - job_name: 'prometheus-pushgateway'
                scrape_interval: 10s
                static_configs:
                - targets:
                  - loki-prometheus-pushgateway.<monitoring_namespace>.svc.cluster.local:9091          
        processors:
          attributes:
            actions:
              - key: user.id
                action: insert
              - key: operation
                action: insert
          batch:
          memory_limiter:
            # 80% of maximum memory up to 2G
            limit_mib: 1500
            # 25% of limit up to 2G
            spike_limit_mib: 512
            check_interval: 10s
        extensions:
          zpages: {}
        exporters:
          otlp:
            endpoint: "http://otel-collector.<monitoring_namespace>.svc.cluster.local:4317"
            tls:
              insecure: true
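          # Sends trace data to the Tempo OTLP/HTTP endpoint.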
          otlphttp:
            endpoint: "http://tempo.<monitoring_namespace>.svc.cluster.local:4318"
            tls:
              insecure: true
            timeout: 10s
            compression: gzip
            sending_queue:
              enabled: true
            retry_on_failure:
              enabled: true
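          # Sends log data to Loki's OTLP ingestion endpoint.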
          otlphttp/logs:
            endpoint: "http://loki.<monitoring_namespace>.svc.cluster.local:3100/otlp" #loki
            tls:
              insecure: true
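          # Writes metrics to Prometheus through the remote_write API.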
          prometheusremotewrite:
            endpoint: http://loki-prometheus-server.<monitoring_namespace>.svc.cluster.local:9090/api/v1/write
        service:
          extensions: [zpages]
          pipelines:
            traces/1:
              receivers: [otlp,jaeger]
              processors: [memory_limiter, batch]
              exporters: [otlphttp, otlp]
            metrics:
              receivers: [otlp]
              processors: [batch]
              exporters: [prometheusremotewrite]
            logs:
              receivers: [otlp]
              processors: [memory_limiter, batch]
              exporters: [otlphttp/logs]
          telemetry:
            metrics:
              address: ${env:MY_POD_IP}:8888          
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: otel-collector
      labels:
        app: opentelemetry
        component: otel-collector
    spec:
      ports:
      - name: otlp-grpc # Default endpoint for OpenTelemetry gRPC receiver.
        port: 4317
        protocol: TCP
        targetPort: 4317
      - name: otlp-http # Default endpoint for OpenTelemetry HTTP receiver.
        port: 4318
        protocol: TCP
        targetPort: 4318
      - name: metrics # Default endpoint for querying metrics.
        port: 8888
      selector:
        component: otel-collector
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: otel-collector
      labels:
        app: opentelemetry
        component: otel-collector
    spec:
      selector:
        matchLabels:
          app: opentelemetry
          component: otel-collector
      minReadySeconds: 5
      progressDeadlineSeconds: 120
      replicas: 1 #TODO - adjust this to your own requirements
      template:
        metadata:
          labels:
            app: opentelemetry
            component: otel-collector
        spec:
          containers:
          - command:
              - "/otelcol"
              - "--config=/conf/otel-collector-config.yaml"
            image: otel/opentelemetry-collector:latest
            name: otel-collector
            resources:
              limits:
                cpu: 1
                memory: 2Gi
              requests:
                cpu: 200m
                memory: 400Mi
            ports:
            - containerPort: 55679 # Default endpoint for ZPages.
            - containerPort: 4317 # Default endpoint for OpenTelemetry receiver.
            - containerPort: 14250 # Default endpoint for Jaeger gRPC receiver.
            - containerPort: 14268 # Default endpoint for Jaeger HTTP receiver.
            - containerPort: 9411 # Default endpoint for Zipkin receiver.
            - containerPort: 8888  # Default endpoint for querying metrics.
            env:
              - name: MY_POD_IP
                valueFrom:
                  fieldRef:
                    apiVersion: v1
                    fieldPath: status.podIP
              - name: GOMEMLIMIT
                value: 1600MiB
            volumeMounts:
            - name: otel-collector-config-vol
              mountPath: /conf
    #        - name: otel-collector-secrets
    #          mountPath: /secrets
          volumes:
            - configMap:
                name: otel-collector-conf
                items:
                  - key: otel-collector-config
                    path: otel-collector-config.yaml
              name: otel-collector-config-vol
  3. Integrate the OpenTelemetry Java Agent (opentelemetry-javaagent-2.11.0.jar, Apache License 2.0) with each Unica product container to export telemetry data, as described under Configuration Approach below.
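
After applying the manifests, you can quickly confirm that the collector is healthy. The commands below are a sketch; the label selector and ports match the sample manifest above, and the zPages path is the collector default.

    kubectl get pods -n <monitoring_namespace> -l component=otel-collector
    kubectl logs deploy/otel-collector -n <monitoring_namespace> | tail -n 50
    # zPages is enabled in the sample config; port-forward the collector pod and open
    # http://localhost:55679/debug/tracez to see recently received spans
    kubectl port-forward deploy/otel-collector 55679:55679 -n <monitoring_namespace>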

Configuration Approach

Procedure

  1. Update the following Helm chart parameters. These parameters let you configure OTel-specific environment variables and endpoints for each product.
    • PRODUCT_OPTS_PLATFORM
    • PRODUCT_OPTS_CAMPAIGN
  2. Alternatively, maintain the setenv.sh file on a Persistent Volume (PV). You can copy it at runtime during pod startup using the custom COMMANDS and SCRIPTS placeholders provided in the Helm charts for each product.
  3. Once the pod starts, validate the OTel configuration using the following commands:

    kubectl exec -it <platform-pod-name> -- bash

    cd /docker/unica/Tomcat_platform/apache/bin

    vi setenv.sh

    Example configuration:
    export JAVA_HOME=/docker/unica/jre
    export PATH=/docker/unica/jre/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/docker/unica
    
    export OTEL_EXPORTER_OTLP_LOGS_ENDPOINT=http://otel-collector.monitoring.svc.cluster.local:4318/v1/logs
    export OTEL_EXPORTER_OTLP_METRICS_ENDPOINT=http://otel-collector.monitoring.svc.cluster.local:4318
    export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://otel-collector.monitoring.svc.cluster.local:4318/v1/traces
    
    export JAVA_OPTS="-javaagent:/docker/unica/opentelemetry-javaagent-2.11.0.jar \
    -Dotel.metrics.exporter.prometheus.port=80 \
    -Dotel.exporter.otlp.protocol=grpc \
    -Dotel.javaagent.debug=true \
    -Dotel.exporter.otlp.grpc.max_reconnect_delay=10s \
    -Dotel.exporter.otlp.grpc.max_receive_message_size=16777216 \
    -Dotel.exporter.otlp.grpc.max_send_message_size=16777216 \
    -Dotel.exporter.otlp.endpoint=http://otel-collector.monitoring.svc.cluster.local:4317 \
    -Dotel.metrics.exporter.prometheus.host=hcl-prometheus-server.monitoring.svc.cluster.local \
    -Xms1024m -Xmx2048m \
    -DUNICA_PLATFORM_CACHE_ENABLED=false \
    -Dfile.encoding=UTF-8 \
    -DENABLE_NON_PROD_MODE=true \
    -Dclient.encoding.override=UTF-8 \
    -Dplatform.home=/docker/unica/Platform"
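
After updating setenv.sh, restart the application server (or the pod) so the agent options take effect. The commands below are a sketch for confirming that the agent attached and is exporting; the grep pattern reflects the agent's default console banner.

    kubectl logs <platform-pod-name> | grep -i "otel.javaagent"
    # A successful startup prints a version banner similar to:
    # [otel.javaagent ...] ... opentelemetry-javaagent - version: 2.11.0
    # Failures to reach the collector endpoints are also reported in this log.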
    
OTel Dashboard