Setting up persistent volumes on a high availability deployment (NFS)
Use these guidelines to help you set up persistent volumes for middleware infrastructure such as Solr/Zookeeper, MongoDB, Customizer, and Elasticsearch for a high availability deployment.
Requirements for persistent volumes
These guidelines and sample yml files describe how to set up all of the persistent volumes required for a full install of Component Pack. If you are using Starter Stack to install only some of the components, then you may not need all of the persistent volumes. See the Starter Stack page for more information on what persistent storage is required by individual components.
In a high availability configuration, the best practice is to keep persistent storage off the ICp Masters themselves, on a separate machine that all ICp Masters can access. This procedure refers to the following nodes:
- Boot Node – to execute Kubectl commands
- Storage Node – to store the persistent data (NFS)
Configuring the persistent volumes
On the storage node, run the following commands to create all necessary folders required for persistent volumes:
sudo mkdir -p /pv-connections/mongo-node-{0,1,2}/data/db
sudo mkdir -p /pv-connections/solr-data-solr-{0,1,2}
sudo mkdir -p /pv-connections/zookeeper-data-zookeeper-{0,1,2}
sudo mkdir -p /pv-connections/esdata-{0,1,2}
sudo mkdir -p /pv-connections/esbackup
sudo mkdir -p /pv-connections/customizations
sudo chmod -R 777 /pv-connections
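If you want to preview the layout before touching the real share, the same commands can be sketched against a scratch root (the $PVROOT variable and /tmp location here are illustrative stand-ins for /pv-connections):

```shell
# Sketch: recreate the folder layout under a scratch root to preview it.
# $PVROOT stands in for /pv-connections; requires bash for brace expansion.
PVROOT=/tmp/pv-connections-demo
mkdir -p "$PVROOT"/mongo-node-{0,1,2}/data/db
mkdir -p "$PVROOT"/solr-data-solr-{0,1,2}
mkdir -p "$PVROOT"/zookeeper-data-zookeeper-{0,1,2}
mkdir -p "$PVROOT"/esdata-{0,1,2}
mkdir -p "$PVROOT"/esbackup "$PVROOT"/customizations
chmod -R 777 "$PVROOT"
# 14 top-level folders: 3 mongo + 3 solr + 3 zookeeper + 3 esdata + 2 single
find "$PVROOT" -maxdepth 1 -mindepth 1 -type d | wc -l
```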
- To create a Kubernetes persistent volume on NFS, first discover the IP address of the storage node:
hostname -i
- Perform the following steps on the storage node:
- Copy the NFS setup script and fullPVs_NFS.yml file to the storage node from the extracted zip location. On the storage node, run the following commands:
sudo mkdir -p $HOME/nfsSetup
cd $HOME/nfsSetup
sudo scp root@<IP Address of Boot Node>:/<extractedFolder>/microservices/hybridcloud/doc/samples/nfsSetup.sh .
sudo scp root@<IP Address of Boot Node>:/<extractedFolder>/microservices/hybridcloud/doc/samples/fullPVs_NFS.yml .
- Replace the string ___NFS_SERVER_IP___ in the fullPVs_NFS.yml file with the IP address of the NFS server by running the following command, replacing <shareServerIpAddress> with the IP address of your storage node:
sudo sed -i "s/___NFS_SERVER_IP___/<shareServerIpAddress>/g" $HOME/nfsSetup/fullPVs_NFS.yml
For example:
sudo sed -i "s/___NFS_SERVER_IP___/1.2.3.4/g" $HOME/nfsSetup/fullPVs_NFS.yml
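A quick way to confirm the substitution worked is to check that no placeholder remains in the file. The following is a minimal sketch on a throwaway file (the real target is $HOME/nfsSetup/fullPVs_NFS.yml):

```shell
# Demo on a scratch file: substitute the placeholder, then verify it is gone.
echo "server: ___NFS_SERVER_IP___" > /tmp/pv_ip_demo.yml
sed -i "s/___NFS_SERVER_IP___/1.2.3.4/g" /tmp/pv_ip_demo.yml
grep "server:" /tmp/pv_ip_demo.yml          # server: 1.2.3.4
! grep -q "___NFS_SERVER_IP___" /tmp/pv_ip_demo.yml && echo "no placeholder left"
```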
- The yml file sets up the NFS shares for the folders created in step 1, using the paths /pv-connections/mongo-node-0, /pv-connections/mongo-node-1, and so on. If you used a location other than /pv-connections/ for your share, for example /nfs/IBM/iccontainers/mongo-node-0, you must update the paths from /pv-connections/ to /nfs/IBM/iccontainers/. To do that, run the following command:
sudo sed -i "s/\/pv-connections\//\/nfs\/IBM\/iccontainers\//g" $HOME/nfsSetup/fullPVs_NFS.yml
Note: In the sed expression, write \/ everywhere you want to match or write a / (forward slash). For example, /nfs/IBM/iccontainers/ becomes \/nfs\/IBM\/iccontainers\/.
- Provide execution permission to nfsSetup.sh and run it in order to get NFS installed and configured:
sudo chmod +x nfsSetup.sh
sudo bash nfsSetup.sh
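As an aside, sed accepts any character as the delimiter after s, so choosing one that does not appear in the paths (for example |) avoids the slash escaping entirely. A sketch on a throwaway file, equivalent in effect to the path-replacement command above:

```shell
# Using | as the sed delimiter so the slashes need no escaping.
echo "path: /pv-connections/mongo-node-0/data/db" > /tmp/pv_path_demo.yml
sed -i "s|/pv-connections/|/nfs/IBM/iccontainers/|g" /tmp/pv_path_demo.yml
cat /tmp/pv_path_demo.yml   # path: /nfs/IBM/iccontainers/mongo-node-0/data/db
```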
Perform the following steps on the boot node.
- Change directory:
cd <extractedFolder>/microservices/hybridcloud/doc/samples
- Replace the string ___NFS_SERVER_IP___ in the fullPVs_NFS.yml file with the IP address of the NFS server by running the following command (replacing <shareServerIpAddress> with the IP address of your storage node):
sudo sed -i "s/___NFS_SERVER_IP___/<shareServerIpAddress>/g" fullPVs_NFS.yml
- If you performed step 3.c in the previous section on the storage node, then you must complete that step on the boot node as well. The yml file sets up the NFS shares for the folders created in step 1 of the previous section, using the paths /pv-connections/mongo-node-0, /pv-connections/mongo-node-1, and so on. If you used a location other than /pv-connections/ for your share, for example /nfs/IBM/iccontainers/mongo-node-0, you must update the paths from /pv-connections/ to /nfs/IBM/iccontainers/. To do that, run the following command:
sudo sed -i "s/\/pv-connections\//\/nfs\/IBM\/iccontainers\//g" fullPVs_NFS.yml
Note: In the sed expression, write \/ everywhere you want to match or write a / (forward slash). For example, /nfs/IBM/iccontainers/ becomes \/nfs\/IBM\/iccontainers\/.
- Validate the NFS mount and write permissions as follows:
- Test the NFS mount and write permissions by running the following script:
sudo bash validatePV_NFS_YAML.sh fullPVs_NFS.yml
- Copy the validation script and the yml file to an existing directory on all of the nodes in your deployment (master/boot, proxy, and all workers):
sudo scp validatePV_NFS_YAML.sh root@IP_Address_of_Node:/some/remote/directory
sudo scp fullPVs_NFS.yml root@IP_Address_of_Node:/some/remote/directory
- Log in to each of the nodes in your deployment, and run the validation script:
cd /some/remote/directory
sudo bash validatePV_NFS_YAML.sh fullPVs_NFS.yml
Only continue to the next step when you see the message "NFS mount and write permissions tests passed".
- Create the persistent volumes on Kubernetes with the following command:
sudo /usr/local/bin/kubectl create -f fullPVs_NFS.yml
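For orientation, a single entry in fullPVs_NFS.yml typically has the following shape. This is an illustrative sketch only: the name, capacity, access mode, and reclaim policy shown here are assumptions, and the file shipped in the install ZIP is authoritative.

```
# Illustrative PersistentVolume sketch; values are assumptions
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-node-0
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 1.2.3.4          # the value substituted for ___NFS_SERVER_IP___
    path: /pv-connections/mongo-node-0
```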
- Create Kubernetes persistent volume claims:
sudo /usr/local/bin/kubectl create -f fullPVCs.yml
Note: You can find a sample fullPVCs.yml in the install ZIP file at the following extracted location: <extractedFolder>/microservices/hybridcloud/doc/samples/
- Verify that both the persistent volumes and persistent volume claims are created successfully.
- Ensure that a status of Bound is listed after running the following command on the boot node:
sudo /usr/local/bin/kubectl get pv,pvc -n connections
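The output should resemble the following (illustrative shape only; the names and capacities depend on your fullPVs_NFS.yml, and the column layout varies by kubectl version):

```
NAME            CAPACITY   ACCESS MODES   STATUS   CLAIM
mongo-node-0    10Gi       RWO            Bound    connections/mongo-node-0
...
```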
- Open a browser to the IBM Cloud Private dashboard; for example:
https://master_HA_vip:8443/#/dashboard.
The Shared Storage shows 102 GiB.
- Click Shared Storage. You should see the list of NFS shares you created, with a status of Bound.