Configuring Control to use NFSv3 storage for performance testing

This topic describes how to configure the Control component to use NFSv3 storage for performance testing, including setup, deployment, and verification steps.

Before you begin

Ensure that the following prerequisites are in place before you begin:
  • A working NFS provisioner (for example, nfs-client StorageClass).
  • A Kubernetes cluster with ReadWriteMany (RWX) support.
  • Helm installed.

Procedure

  1. Create a dedicated StorageClass for Control (NFSv3)

    Run the following commands to create a new StorageClass that uses NFSv3:

    bash
    # Get working provisioner from existing nfs-client storage class
    PROV=$(kubectl get sc nfs-client -o jsonpath='{.provisioner}')
    cat <<EOF | kubectl apply -f -
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: control-nfs-v3
      annotations:
        storageclass.kubernetes.io/is-default-class: "false"
    provisioner: ${PROV}
    reclaimPolicy: Retain
    volumeBindingMode: Immediate
    allowVolumeExpansion: true
    mountOptions:
      - nfsvers=3
    parameters:
      archiveOnDelete: "false"
    EOF
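    Before moving on, you can optionally confirm that the class registered with the NFSv3 mount option. The relevant fields in the output of kubectl get sc control-nfs-v3 -o yaml should look like the following (a sketch; other fields are omitted):

```yaml
# Expected fields of the rendered StorageClass (partial)
mountOptions:
  - nfsvers=3
reclaimPolicy: Retain
volumeBindingMode: Immediate
allowVolumeExpansion: true
```

    If mountOptions is missing or empty, the heredoc was not applied correctly and PVs created from this class will mount with the provisioner's default NFS version.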
  2. Pass StorageClass to Control from parent chart
    Choose one of the following options (either a or b):
    a. Configure in the values.yaml file
      yaml
      control:
        enabled: true
        persistence:
          storageClass: control-nfs-v3
          accessModes:
            - ReadWriteMany
    b. Deploy from the parent chart
      bash
      helm upgrade --install devops-loop . \
       -n devops-loop --create-namespace \
       -f values.yaml \
       --set control.enabled=true \
       --set control.gitea.config.indexer.REPO_INDEXER_SKIP_TLS_VERIFY=true \
       --set control.persistence.storageClass=control-nfs-v3
      
  3. Update existing Control PVC (if applicable)
    The storageClassName field is immutable for existing PVCs. Use one of the following approaches:
    • If data retention is not required
      Delete the existing Control PVC and then rerun the deploy command:
      bash
      kubectl delete pvc devops-loop-control-shared-storage -n devops-loop
    Note: This action permanently deletes existing data. Ensure that backups are taken before proceeding.
    • If data retention is required
      1. Create a new PVC using the control-nfs-v3 StorageClass (for example, devops-loop-control-shared-storage-v3).

      2. Scale the Control deployment to 0 to stop write operations.
      3. Copy data from the old PVC to the new PVC by using a temporary migration pod.
      4. Update the parent values for Control:
        • control.persistence.claimName: devops-loop-control-shared-storage-v3
        • control.persistence.create: false
        • control.persistence.mount: true
      5. Run helm upgrade --install … again.
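      The copy in step 3 can be performed with a temporary pod that mounts both claims. The following is a minimal sketch; the pod name and image are illustrative, and the claim names assume the examples used in this topic:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: control-pvc-migrate   # illustrative name
  namespace: devops-loop
spec:
  restartPolicy: Never
  containers:
    - name: migrate
      image: busybox:1.36     # any image with a POSIX shell works
      # Copy everything from the old volume, preserving
      # permissions, ownership, and timestamps.
      command: ["sh", "-c", "cp -a /old/. /new/ && echo migration-complete"]
      volumeMounts:
        - name: old
          mountPath: /old
        - name: new
          mountPath: /new
  volumes:
    - name: old
      persistentVolumeClaim:
        claimName: devops-loop-control-shared-storage
    - name: new
      persistentVolumeClaim:
        claimName: devops-loop-control-shared-storage-v3
```

      Apply the manifest, wait until kubectl logs -n devops-loop control-pvc-migrate prints migration-complete, and delete the pod before scaling Control back up.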
  4. Verify the configuration
    Run the following commands to verify the configuration:
    bash
    kubectl get pvc -n devops-loop | grep control
    kubectl exec -n devops-loop <control-pod-name> -- sh -c "cat /proc/mounts | grep nfs"
    

Results

  • The PVC shows control-nfs-v3 as the StorageClass.
  • The mount output includes vers=3 and mountvers=3.