Deploying ZIETrans application with scaling

Introduction

Scaling applications involves enhancing an application's capacity and capabilities to handle increased workloads and serve more requests without sacrificing quality or performance. This can be achieved through vertical scaling (upgrading existing resources) or horizontal scaling (adding more resources in parallel). Scaling is crucial for maintaining high availability, even during periods of heightened application traffic. However, this process also introduces new challenges, particularly in two critical areas: data synchronization and session state management.

In this section, we cover the challenges of scaling the ZIETrans application, strategies to mitigate these challenges, and guidelines for planning and configuring a scaled ZIETrans application deployment.

Challenges

Statefulness of the mainframe connection:

The statefulness of the mainframe connection poses a challenge because session stickiness must be maintained. Without proper session stickiness, requests from the same session may be distributed across multiple replicas, potentially causing session disconnection errors.

Solution: To address this challenge, the recommended approach is to enable a load balancer with session affinity. By enabling session affinity (also known as session stickiness or sticky sessions) on the load balancer, the system ensures that requests with the same session ID are directed to the same backend instance. This is achieved by including a cookie header carrying the session ID in the HTTP requests.

To achieve session stickiness, enable Ingress in the Helm chart. Follow the steps below to enable Ingress.

  1. Set the ingress enabled value to true in the values.yaml file, and add the required annotations and host details as shown below.

ingress: 
 enabled: true 
 className: nginx
 annotations:
   nginx.ingress.kubernetes.io/affinity: cookie
   nginx.ingress.kubernetes.io/session-cookie-name: http-cookie
   nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
   nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
 hosts:
   - host: kubernetes.docker.internal
     paths:
       - path: /
         pathType: Prefix
  2. Set the service type to ClusterIP and the port to 9080 in the values.yaml file.

service:
 type: ClusterIP
 port: 9080
  3. Add the dependency in the Chart.yaml file as shown below.

dependencies: 
- name: nginx-ingress-controller
  repository: https://charts.bitnami.com/bitnami
  version:  9.3.23
  4. Update the Ingress dependency using the command below, and then install the Helm chart.

    helm dependency update   
  5. Access the ZIETrans application using the following link: http://kubernetes.docker.internal/application-name
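The dependency update and install steps above can be sketched as a short command sequence. This is only a sketch: the release name `zietrans-app` and the chart directory `./zietrans-chart` are placeholder assumptions, not names from the product.

```shell
# Fetch the nginx-ingress-controller chart declared under dependencies
# in Chart.yaml (run from the directory containing Chart.yaml)
helm dependency update ./zietrans-chart

# Install (or upgrade) the chart; the release name is a placeholder
helm upgrade --install zietrans-app ./zietrans-chart

# Verify that the ingress resource was created with the affinity annotations
kubectl get ingress
```

These commands require access to a running Kubernetes cluster with Helm and kubectl configured.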

Maintaining Session Stickiness for ZIETrans Chained IOs:

Ensuring session stickiness is crucial for ZIETrans chained Input/Output (IO) operations. It guarantees that all API requests within the chain are directed to the same instance where the first API in the chain executed. Failing to maintain session stickiness can result in API failures with "connection not found" errors.

API Calls from Browser Clients:

In web applications, web browsers typically manage session cookies and headers automatically for consecutive calls. However, if this isn't the case, it's essential to ensure that the affinity header used in the first request is consistently set in consecutive APIs within the chain.

API Calls from Non-Browser Clients (e.g., Java or Backend Clients):

When invoking APIs from non-browser-based clients, such as Java applications or other backend clients, the approach is similar. Just like in the browser scenario, it's crucial to set the affinity header in the first request and continue to include it in all subsequent APIs within the chain.
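For a non-browser client, the cookie handling described above can be sketched with curl. This is an illustration only: the host matches the ingress example earlier, but the API paths are placeholder assumptions, and the affinity cookie name is whatever `session-cookie-name` is configured on the ingress (`http-cookie` in the earlier example).

```shell
# First request in the chain: -c saves the affinity cookie (e.g. http-cookie)
# that the nginx ingress sets in its response
curl -c cookies.txt http://kubernetes.docker.internal/application-name/api/firstInChain

# Subsequent chained requests: -b replays the saved cookie so the ingress
# routes them to the same pod that served the first request
curl -b cookies.txt http://kubernetes.docker.internal/application-name/api/nextInChain
```

A Java or other backend client would do the equivalent: read the Set-Cookie header from the first response and send it back as a Cookie header on every later call in the chain.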

Data Synchronization and Admin console access:

To overcome ZIETrans license, user list, and Admin console data synchronization problems, refer to this section and follow the steps below.

Introduction

The ZIETrans administrative interface can be used to manage connections and perform problem determination for ZIETrans Web applications. For more details about configuring Remote Admin support for the Liberty server, refer to the Remote Admin Liberty documentation.

Steps to be followed

When scaling is enabled in Kubernetes, multiple instances of the same application are deployed in different pods. To fetch the details of the ZIETrans application from all the pods, configure the ZIETrans application as follows.

  • Create a single ZIETrans application or multiple ZIETrans applications.

  • Add all the JAR files present in the plug-in location zietrans-3.0.0-oxygen-dist-signed-win32-x86_64\ZIETrans\plugins\com.ibm.hats.core_3.0.0.XXXXXXXXXXXXXX\lib\adminCloud to the Java build path of the ZIETrans application, and also copy them into the ZIETrans EAR folder.

  • Navigate to the manifest file and enable the JAR files.

  • Export the ZIETrans projects as an EAR file.

  • Complete details about the Dockerfile and Helm chart are available here.

  • To support remote admin for the Liberty server, JVM options and server.xml are added in the Dockerfile.

  • The updatepod.sh file, added in the Dockerfile, updates the pod IP in the following JVM property in the jvm.options file:

    -Djava.rmi.server.hostname=$POD_IP
  • Make sure to specify an appropriate port for JMX. The same port will be used for remote admin in the Management scope of the Admin Console.

    Example:

    -Dcom.sun.management.jmxremote.port=8888
  • The copyruntime.sh file copies the runtime.properties file to the location specified by the environment variable ZIETRANS_RUNTIME_CFG_ENV.

  • The wrapper.sh file in the Dockerfile runs copyruntime.sh and updatepod.sh before starting the Liberty server.

  • In the chart above, make sure to set the environment variable ZIETRANS_SCALED_ENV to true in env-configmap.yaml.

    ZIETRANS_SCALED_ENV: "true"
  • If ZIETRANS_SCALED_ENV is set to false, the application may not fetch the connection details from all the pods, and the license, user list, and Admin console data will be inaccurate; only the details of the pod that served the request will be provided.

  • For a ZIETrans application deployed across different pods, make sure that all pods access the same runtime.properties file. To achieve this, set the environment variable as shown below in the env-configmap.yaml file inside the Helm chart.

    Example:

    ZIETRANS_RUNTIME_CFG_ENV: /home/runtime
  • If you want to save the runtime.properties files or log files outside the EAR, configure a PV and PVC in the Helm chart. In the chart provided above, specify the preferred location in pv-claim.yaml.

  • By following the above steps, you should be able to manage and retrieve the details of all the application instances running in different pods.
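The pod-IP substitution that updatepod.sh performs can be illustrated with a minimal sketch. This is an assumption of how such a script might work, not the shipped script: the sample jvm.options contents and the default IP value are illustrative only. In a real deployment, POD_IP would be injected into the container via the Kubernetes downward API (fieldRef to status.podIP).

```shell
# POD_IP is normally injected by the Kubernetes downward API; default it
# here only so the sketch runs standalone.
POD_IP="${POD_IP:-10.244.1.17}"

# Illustrative jvm.options with the properties mentioned above
cat > jvm.options <<'EOF'
-Djava.rmi.server.hostname=PLACEHOLDER
-Dcom.sun.management.jmxremote.port=8888
EOF

# Rewrite the RMI hostname to this pod's IP so remote admin/JMX clients
# can reach the instance running in this pod
sed -i "s|^-Djava.rmi.server.hostname=.*|-Djava.rmi.server.hostname=${POD_IP}|" jvm.options

cat jvm.options
```

The JMX port line is left untouched; only the RMI hostname is rewritten per pod, which is what allows the Admin Console to address each replica individually.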