Preparing Storage for Link
About this task
When you install the Link chart, it creates four Kubernetes deployments: lnk-client, lnk-server, lnk-rest, and lnk-executor.
- lnk-client does not need any persistent storage.
- lnk-server requires two persistent volumes (PVs), mounted in the container file system at the following points:
  - /opt/data/hipfiles for files
  - /data for data
- lnk-rest and lnk-executor require only the data volume, which they also mount at /data. All three deployments (lnk-server, lnk-rest, and lnk-executor) must share the same storage for the /data mount point.
When MongoDB and Redis charts are enabled as subcharts, two additional deployments are created:
- link-mongodb (requires a persistent volume at /bitnami/mongodb)
- link-redis (requires a persistent volume for key/value data)
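For orientation only, the following minimal sketch shows how a lnk-server pod is expected to mount the two volumes described above. This is not the chart's actual template; the image reference and the volume and claim names are assumptions.

  # Illustrative only: names and image are hypothetical, not the chart's real identifiers.
  apiVersion: v1
  kind: Pod
  metadata:
    name: lnk-server-example
  spec:
    containers:
      - name: lnk-server
        image: lnk-server:example      # placeholder image reference
        volumeMounts:
          - name: files                # file storage
            mountPath: /opt/data/hipfiles
          - name: data                 # data storage, shared with lnk-rest and lnk-executor
            mountPath: /data
    volumes:
      - name: files
        persistentVolumeClaim:
          claimName: lnk-files-pvc     # hypothetical claim name
      - name: data
        persistentVolumeClaim:
          claimName: lnk-data-pvc      # hypothetical claim name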
Procedure
- By default, Redis uses /data as its mount point. To avoid conflicts with the /data path used by the other deployments, update the Redis configuration by setting:
  redis.master.persistence.path=/bitnami/redis
  This change also makes the Redis mount point consistent with MongoDB's /bitnami/mongodb. An example override is shown after this procedure.
- Because AWS Fargate does not support dynamic provisioning, you must manually create all four PVs and their corresponding Persistent Volume Claims (PVCs). Each PV should have its own EFS access point, which isolates the deployments and allows lnk-server to mount both the data and file volumes. A sketch of one PV/PVC pair is shown after this procedure.
  - You can specify a 5Gi size for all PVs and PVCs (the Kubernetes definitions require a size value), but EFS scales automatically based on demand, so the size is not enforced.
- When creating the EFS access points, set:
  - UID: 1000
  - GID: 1000
  - Permissions: 777
  These settings will require specific overrides during the Link chart installation. An example access point creation command is shown after this procedure.
- Before creating PVs, PVCs, and EFS access points, register a StorageClass in the cluster to use the EFS CSI driver. Run the following command:
  echo "
  kind: StorageClass
  apiVersion: storage.k8s.io/v1
  metadata:
    name: efs-sc
  provisioner: efs.csi.aws.com
  " | kubectl apply -f -