Application Configuration

This section outlines key configuration parameters within the HCL Detect application that can be tuned to optimize performance.

By adjusting batch sizes and enabling efficient data streaming, you can enhance throughput, reduce latency, and ensure smoother operation under heavy loads. The recommended values serve as a baseline; fine-tune them based on the specific demands of your environment and the available system resources.

Campaign Actuator Settings

In the Campaign Actuator settings, adjust the batch sizes to balance throughput and resource consumption.

"campaignActuator": {
    "contactPolicyCheckerSettings": {
        "batchSize": 5,
        "perUserContactPolicy": {
            "countBasedPolicies": [
                {
                    "countThreshold": 50,
                    "timeUnit": "Days",
                    "windowSize": 7
                }
            ]
        }
    },
    "deduplicatorFsyncBatchSize": 1000,
    "feedDataKafkaSourceSettings": {
        "batchSize": 1000
    },
    "logLevel": "INFO",
    "mergerSettings": {
        "inputBufferSize": 1000,
        "maxBlockingTimeInMillis": 500
    },
    "name": "Campaign Actuator",
    "numParallelChannels": 2,
    "responseMessageKafkaSourceSettings": {
        "batchSize": 1000
    },
    "triggerEvaluatorSettings": {
        "batchSize": 500,
        "maxAllowedTimeLagInSeconds": 86400,
        "stateExpirationCheckIntervalInMillis": 5000
    },
    "useKafkaToProcessActuation": true
}
  • contactPolicyCheckerSettings.batchSize: Set the batch size to 5 for optimal performance when executing contact policy checks on batched events.
  • triggerEvaluatorSettings.batchSize: Set the batch size to 500 so that up to 500 triggers are evaluated per batch for optimal performance.
  • useKafkaToProcessActuation: Set to true to leverage Kafka for scalable and decoupled processing.
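
To help reason about the perUserContactPolicy shown above, the following sketch simulates a count-based policy check. It is an illustration only, not HCL Detect code, and it assumes that countThreshold = 50, timeUnit = "Days", and windowSize = 7 together mean "allow at most 50 contacts per user within a rolling 7-day window".

# Illustrative sketch only -- not HCL Detect source code. Assumes a
# count-based policy limits each user to countThreshold contacts within a
# rolling window of windowSize days.
from datetime import datetime, timedelta
from typing import List

def contact_allowed(contact_times: List[datetime],
                    now: datetime,
                    count_threshold: int = 50,
                    window_days: int = 7) -> bool:
    """Return True if contacting the user now stays within the policy."""
    window_start = now - timedelta(days=window_days)
    contacts_in_window = [t for t in contact_times if t >= window_start]
    return len(contacts_in_window) < count_threshold

# Example: a user already contacted 50 times this week is skipped.
now = datetime.utcnow()
history = [now - timedelta(hours=h) for h in range(50)]
print(contact_allowed(history, now))  # False -- the threshold is already reached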

Kafka Settings

Kafka is used extensively within HCL Detect to handle real-time data streaming between components. This section provides recommendations for configuring Kafka source and sink batch sizes to optimize data ingestion and processing throughput. These values should be adjusted based on message volume, system capacity, and latency requirements.

Proper Kafka tuning helps ensure reliable ingestion and emission of high-volume message streams.

 "kafkaSinkBatchSize": 500,
"kafkaSourceSettings": {
    "batchSize": 500,
    "epochSize": 500,
    "topicName": "RECHARGE"
}
  • kafkaSinkBatchSize: Set the batch size to 500 to publish 500 records to Kafka in a single batch for more efficient memory use.
  • kafkaSourceSettings.batchSize: Set the batch size to 500 to fetch 500 records from Kafka in one pull for optimal performance.
  • kafkaSourceSettings.epochSize: Set the epoch size to 500 to commit 500 records to the Kafka topic in a single batch after they are read.
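
As a point of reference for what these settings describe, the sketch below shows the same pattern (fetching records in batches of 500 and committing each batch in a single call) using the open-source kafka-python client. It is an illustration only, not HCL Detect code; the broker address and consumer group are placeholder values.

# Illustrative sketch only -- not HCL Detect source code.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "RECHARGE",                          # topicName from kafkaSourceSettings
    bootstrap_servers="localhost:9092",  # placeholder broker address
    group_id="detect-example",           # placeholder consumer group
    enable_auto_commit=False,            # commit manually, once per batch
    max_poll_records=500,                # analogous to batchSize = 500
)

while True:
    # One pull returns at most 500 records across the assigned partitions.
    batch = consumer.poll(timeout_ms=1000, max_records=500)
    records = [rec for partition_records in batch.values() for rec in partition_records]
    if not records:
        continue
    for record in records:
        ...  # process the record here
    # Commit the whole batch in one call, analogous to epochSize = 500.
    consumer.commit()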