Manual Backup and Restore Procedures for Link
This section explains how to manually back up and restore Link application data in a Kubernetes environment.
Backup Procedures
Follow these steps to create a complete backup of your Link configuration and persistent data.
Prerequisites
- You must have kubectl access to the Kubernetes cluster.
- All Link pods (Server, REST, Executor, Kafka, etc.) must be in the Running state. If any pod is not running, data copy operations may fail; a quick status check is sketched after this list.
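The following is a minimal sketch for spotting pods that are not yet Running. It assumes all Link pods run in a single namespace; pods left behind by completed jobs may also appear and can be ignored.
# List any pods in the namespace that are not in the Running phase
kubectl get pods -n <namespace> --field-selector=status.phase!=Running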
Link components store persistent data in the following PersistentVolumeClaims (PVCs). You will need to back up the data from each of these.
- Core components: Rest, Server, Executor, Kafka, Custom-connector, Unica (for HCH data)
- Dependent components (if enabled): Redis, MongoDB
Steps to create a complete backup of your Link configuration:
Step 1: Back Up Volume Configurations (YAML)
This step backs up the definitions of your PersistentVolumes (PVs) and PersistentVolumeClaims (PVCs). This is useful for reference or for recreating the storage structure in a disaster recovery scenario.
kubectl get pvc -n <namespace>
kubectl get pv
# Example for one PVC
kubectl get pvc <pvc-name> -n <namespace> -o yaml > <pvc-name>-backup.yaml
# Example for one PV
kubectl get pv <pv-name> -o yaml > <pv-name>-backup.yaml
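If your namespace contains many PVCs, a short loop can export each manifest to its own file. This is a minimal sketch, assuming a bash shell and that all Link PVCs reside in the same namespace.
# Export every PVC manifest in the namespace to its own YAML file
for pvc in $(kubectl get pvc -n <namespace> -o jsonpath='{.items[*].metadata.name}'); do
  kubectl get pvc "$pvc" -n <namespace> -o yaml > "${pvc}-backup.yaml"
done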
Step 2: Back Up Application Data (File Content)
This step copies the actual data from the running pods to your local filesystem.
Prepare Redis for Backup
- Get the name of your Redis pod:
kubectl get pods -n <namespace> | grep redis
- Execute the SAVE command inside the Redis pod:
kubectl exec -it <redis-pod-name> -n <namespace> -- redis-cli SAVE
- You should see an OK response. (An optional check of the save timestamp is sketched below.)
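Optionally, you can confirm that the snapshot was written by checking the timestamp of the last successful save. This extra check is not part of the documented procedure.
# Returns the UNIX timestamp of the last successful Redis save
kubectl exec -it <redis-pod-name> -n <namespace> -- redis-cli LASTSAVE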
Copy Data from Pods to Local Machine
Run the following commands to copy the data directories from each pod to your local machine. Replace placeholders like <pod-name> and <namespace> as needed.
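If you need to look up the pod names to substitute, one minimal approach (mirroring the Redis lookup above) is to filter the pod list by component name. The grep patterns here are assumptions and may need adjusting to match the pod names in your deployment.
kubectl get pods -n <namespace> | grep server
kubectl get pods -n <namespace> | grep rest
kubectl get pods -n <namespace> | grep kafka
kubectl get pods -n <namespace> | grep mongodb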
kubectl cp <mongodb-pod-name>:/bitnami/mongodb/ ./mongodb-backup -n <namespace>
kubectl cp <redis-pod-name>:/data ./redis-backup -n <namespace>
kubectl cp <server-pod-name>:/opt/data/files ./server-backup-files -n <namespace>
kubectl cp <server-pod-name>:/data ./server-backup-data -n <namespace>
kubectl cp <rest-pod-name>:/data ./rest-backup -n <namespace>
kubectl cp <rest-pod-name>:/opt/custom/connectors ./connectors-backup -n <namespace>
kubectl cp <rest-pod-name>:/mnt/HIPData ./hch-backup -n <namespace>
kubectl cp <kafka-pod-name>:/data ./kafka-backup -n <namespace>
At this stage, a complete backup of the Link data should be available on the local machine.
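As a quick sanity check (not part of the documented procedure), you can verify that each backup directory was created locally and is not empty:
# Show the size of each local backup directory
du -sh ./mongodb-backup ./redis-backup ./server-backup-files ./server-backup-data ./rest-backup ./connectors-backup ./hch-backup ./kafka-backup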
Restore Procedures
These steps describe restoring your backed-up data into a fresh installation of Link. This is the recommended approach to ensure a clean and stable environment.
Step 1: Deploy a Fresh Link Instance
Before restoring data, you must deploy a new, running instance of Link.
- Ensure your old Link deployment is scaled down or removed.
- (If necessary) Manually create new, empty PVs and PVCs for Link to use.
- Modify your values.yaml to use these new (or dynamically provisioned) empty PVCs.
- Deploy Link using Helm:
helm upgrade --install <release-name> <chart-path> -f values.yaml -n <namespace>
- Verify that all pods start successfully with fresh, empty data. This confirms the new environment is healthy before you introduce old data (a readiness check is sketched after this list).
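One way to confirm that the fresh deployment is healthy is to wait for every pod to report Ready. This is a general sketch: the 300-second timeout is an arbitrary example, and pods left behind by one-off jobs may need to be excluded.
# Wait for all pods in the namespace to become Ready (example timeout of 5 minutes)
kubectl wait --for=condition=Ready pod --all -n <namespace> --timeout=300s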
Step 2: Copy Backed-Up Data to Pods
kubectl cp ./mongodb-backup/ <mongodb-pod-name>:/bitnami/mongodb/ -n <namespace>
kubectl cp ./redis-backup/ <redis-pod-name>:/data -n <namespace>
kubectl cp ./server-backup-files/ <server-pod-name>:/opt/data/files -n <namespace>
kubectl cp ./server-backup-data/ <server-pod-name>:/data -n <namespace>
kubectl cp ./rest-backup/ <rest-pod-name>:/data -n <namespace>
kubectl cp ./connectors-backup/ <rest-pod-name>:/opt/custom/connectors -n <namespace>
kubectl cp ./hch-backup/ <rest-pod-name>:/mnt/HIPData -n <namespace>
kubectl cp ./kafka-backup/ <kafka-pod-name>:/data -n <namespace>
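Before restarting anything, you can spot-check that the copied files are visible inside a pod. This is an optional sanity check; the Server pod's /data directory is used here only as an example path from the copy commands above.
# List the restored content inside the Server pod (repeat for other pods and paths as needed)
kubectl exec -it <server-pod-name> -n <namespace> -- ls -la /data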
Step 3: Restart Pods to Load Data
# Example for Deployments (like Server, Rest)
kubectl rollout restart deployment <deployment-name> -n <namespace>
# Example for StatefulSets (like MongoDB, Kafka, Redis)
kubectl rollout restart statefulset <statefulset-name> -n <namespace>
After the pods restart, your Link application should be running with the restored data.
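To confirm that the restarts have completed, you can watch the rollout status of each workload and then review the pod list. These commands are a general verification sketch rather than a Link-specific check.
# Wait until the restart finishes for each workload
kubectl rollout status deployment <deployment-name> -n <namespace>
kubectl rollout status statefulset <statefulset-name> -n <namespace>
# Confirm all pods are back in the Running state
kubectl get pods -n <namespace>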