Labeling and tainting worker nodes for Elasticsearch
Use Kubernetes to label and taint worker nodes so that they are reserved for use by the Elasticsearch offering of Component Pack for IBM Connections™. Skip this topic if you are using an "all-in-one" deployment for a proof of concept.
Before you begin
If pods are already running on a node that you want to dedicate to Elasticsearch, drain them off the node by running the following command, replacing node with the name of the node:
kubectl drain node --force --delete-local-data --ignore-daemonsets
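If you are not sure whether a node is currently hosting pods, you can list the pods scheduled on it first (an optional check; node again stands in for the node's name):
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=node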
About this task
For best results, deploy dedicated worker nodes that will host only the Elasticsearch pods. In production, best practice is to deploy three dedicated worker nodes for Elasticsearch to make use of the pod anti-affinity rules and create a highly available worker solution. If you plan to also install Orient Me or Customizer, then you should deploy separate worker nodes to host the pods belonging to those services. Labeling and tainting the dedicated worker nodes ensures that they can only be used by Elasticsearch.
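A taint repels every pod that does not explicitly tolerate it, while the label lets a pod target the dedicated nodes through a node selector; the Component Pack Elasticsearch charts are expected to carry an equivalent toleration and selector. The following throwaway test pod is a minimal sketch of how the mechanism fits together; the pod name and busybox image are illustrative only and are not part of Component Pack:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: toleration-test        # hypothetical name, for illustration only
spec:
  nodeSelector:
    type: infrastructure       # matches the label applied in step 2 below
  tolerations:
  - key: dedicated             # matches the taint applied in step 2 below
    operator: Equal
    value: infrastructure
    effect: NoSchedule
  containers:
  - name: test
    image: busybox
    command: ["sleep", "60"]
EOF
Without the toleration, the scheduler refuses to place the pod on a tainted node. Remove the test pod afterward with kubectl delete pod toleration-test.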
Procedure
1. Determine which nodes will be dedicated for use by Elasticsearch.
You can view a list of all nodes in your cluster by running the following command:
kubectl get nodes
2. Label and taint one node by running the following commands on the master node, replacing node with the node that you want to use as a dedicated Elasticsearch worker:
kubectl label nodes node type=infrastructure --overwrite
kubectl taint nodes node dedicated=infrastructure:NoSchedule --overwrite
3. Repeat step 2 for every node that you want to use as a dedicated Elasticsearch worker. (You can verify the labels and taints with the commands shown after this procedure.)
4. If you ran the kubectl drain command to drain pods off a node, run the following command to allow pods to run on that node again:
Important: Do not run this command until you have completed the labeling and tainting of all dedicated Elasticsearch nodes.
kubectl uncordon node
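To verify that each dedicated node carries the expected label and taint, run the following commands, replacing node with the node's name:
kubectl get nodes -l type=infrastructure
kubectl describe node node | grep -i taints
After kubectl uncordon, the node's STATUS in the kubectl get nodes output should read Ready rather than Ready,SchedulingDisabled.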
What to do next
If you later decide to remove the taint and label from a node, you can run the following commands:
kubectl taint nodes node dedicated:NoSchedule-
kubectl label node node type-
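To confirm that the node has returned to general scheduling, check it again; the Taints line should no longer list dedicated=infrastructure, and the labels should no longer include type=infrastructure:
kubectl describe node node | grep -i taints
kubectl get node node --show-labels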