Deploying a non-HA Kubernetes platform
This guide describes how to use the kubeadm tool for setting up a pilot, non-HA Kubernetes infrastructure to host all of IBM Connections™ Component Pack services (Orient Me, Customizer, and Elasticsearch).
Before you begin
- Three servers running RHEL 7.6 or CentOS 7.6 (or later). See Component Pack installation roadmap and requirements for specifications.
Server 1 will act as master, server 2 will be a generic worker, and server 3 will be an infrastructure worker (hosting only Elasticsearch pods).
Note: A reverse proxy server is also required for Customizer. This server will be outside of the Kubernetes cluster. See Configuring the NGINX proxy server for Customizer.
- Yum must be working on all servers to install the required packages.
- You must have sudo privileges on all servers.
- Full network connectivity must be configured between all servers in the cluster (can use either a public network or a private network).
- Each server must have a unique host name, MAC address, and product_uuid; see the "Verify the MAC address and product_uuid are unique for every node" section of the Kubernetes installation documentation.
- Required ports must be open on the servers, as described in "Checking the ports", in the Kubernetes installation documentation.
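For example, you can check that the MAC address and product_uuid are unique on each server, and, if firewalld is in use, open the standard kubeadm ports (the port list below reflects the usual Kubernetes 1.11 requirements; adjust it for your environment):
ip link show
sudo cat /sys/class/dmi/id/product_uuid
# On the master (adjust as needed)
sudo firewall-cmd --permanent --add-port=6443/tcp --add-port=2379-2380/tcp --add-port=10250-10252/tcp
# On the workers (adjust as needed)
sudo firewall-cmd --permanent --add-port=10250/tcp --add-port=30000-32767/tcp
sudo firewall-cmd --reload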
About this task
This guide is for a pilot deployment only and is not recommended for production as there is no support for high availability.
Procedure
-
Install Docker on each server.
- If you are a Docker CE customer, it is recommended that you install or upgrade to 18.06.2 or later because of the runc vulnerability CVE-2019-5736.
- If you are a Docker EE customer, it is recommended that you install/remain on 17.03.x.
Installing 18.06 CE (recommended):
The following commands will install Docker 18.06 CE and will disable the yum docker repository so that Docker will not be updated.
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum-config-manager --disable docker*
yum-config-manager --enable docker-ce-stable
yum install -y --setopt=obsoletes=0 docker-ce-18.06*
yum makecache fast
sudo systemctl start docker
sudo systemctl enable docker.service
yum-config-manager --disable docker*
Note: If Docker does not start, run 'rm -rf /var/run/docker.sock' and then rerun 'sudo systemctl start docker'.
Installing 17.03 (CE or EE):
The following commands will install Docker 17.03 CE and will disable the yum docker repository so that Docker will not be updated. If you have a Docker EE license, follow the instructions that come with that product instead.
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum-config-manager --disable docker*
yum-config-manager --enable docker-ce-stable
yum install -y --setopt=obsoletes=0 docker-ce-17.03*
yum makecache fast
sudo systemctl start docker
sudo systemctl enable docker.service
yum-config-manager --disable docker*
Note: If Docker does not start, run 'rm -rf /var/run/docker.sock' and then rerun 'sudo systemctl start docker'.
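To confirm that Docker installed and started correctly on each server, you can check the reported version and the service status, for example:
sudo docker version
sudo systemctl status docker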
-
Run the following commands on each of the Docker servers to ensure that the required sysctl setting (fs.may_detach_mounts, which controls kernel behavior) is set. This applies to 17.03 only; the setting is enabled by default with 18.06:
echo 1 > /proc/sys/fs/may_detach_mounts
echo fs.may_detach_mounts=1 > /usr/lib/sysctl.d/99-docker.conf
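You can confirm that the setting is in effect by checking that the following command prints 1:
cat /proc/sys/fs/may_detach_mounts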
-
Configure Docker with the devicemapper storage driver.
- Method: direct-lvm
Attention: The direct-lvm method is required for any non-HA deployment that is not an all-on-one-server proof of concept. The following instructions describe how to configure direct-lvm.
Device Mapper is a kernel-based framework that underpins many advanced volume management technologies on Linux®. Docker’s devicemapper storage driver leverages the thin provisioning and snapshot capabilities of this framework for image and container management.
Production hosts using the devicemapper storage driver must use direct-lvm mode. This mode uses block devices to create the thin pool. This is faster than using loopback devices, uses system resources more efficiently, and block devices can grow as needed.
The following steps create a logical volume configured as a thin pool to use as backing for the storage pool. They assume that you have a spare block device at /dev/xvdf.
- Identify the block device you want to use.
The device is located under /dev/ (for example, /dev/xvdf) and needs enough free space to store the images and container layers for the workloads that the host runs. A solid state drive is ideal.
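For example, you can list the block devices attached to the host and their sizes to identify a suitable spare device:
lsblk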
- Stop Docker by running the following command:
sudo systemctl stop docker
- Install the following packages:
RHEL / CentOS: device-mapper-persistent-data, lvm2, and all dependencies
- Create a physical volume on the block device from step 1 by using the pvcreate command, substituting your device name for /dev/xvdf.
Important: The next few steps are destructive, so be sure that you have specified the correct device.
sudo pvcreate /dev/xvdf
- Create a docker volume group on the same device, using the vgcreate command:
sudo vgcreate docker /dev/xvdf
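You can confirm that the physical volume and volume group were created, for example:
sudo pvs
sudo vgs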
- Create two logical volumes named thinpool and thinpoolmeta using the lvcreate command. The last parameter specifies the amount of free space to allow for automatic expansion of the data or metadata if space runs low, as a temporary stop-gap. These are the recommended values.
sudo lvcreate --wipesignatures y -n thinpool docker -l 95%VG
sudo lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG
- Convert the volumes to a thin pool and a storage location for metadata for the thin pool, using the lvconvert command:
sudo lvconvert -y --zero n -c 512K --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta
- Configure autoextension of thin pools using an lvm profile:
sudo vi /etc/lvm/profile/docker-thinpool.profile
- Specify thin_pool_autoextend_threshold and thin_pool_autoextend_percent values:
thin_pool_autoextend_threshold is the percentage of space used before lvm attempts to autoextend the available space (100 = disabled, not recommended).
thin_pool_autoextend_percent is the amount of space to add to the device when automatically extending (0 = disabled).
The following example adds 20% more capacity when the disk usage reaches 80%:
activation {
  thin_pool_autoextend_threshold=80
  thin_pool_autoextend_percent=20
}
Save the file.
- Apply the lvm profile using the lvchange command:
sudo lvchange --metadataprofile docker-thinpool docker/thinpool
- Enable monitoring for logical volumes on your host:
sudo lvs -o+seg_monitor
Without this step, automatic extension does not occur even in the presence of the lvm profile.
- If you have ever run Docker on this host before, or if the /var/lib/docker/ directory exists, move the directory so that Docker can use the new lvm pool to store the contents of images and containers:
mkdir /var/lib/docker.bk
mv /var/lib/docker/* /var/lib/docker.bk
Note: If any of the following steps fail and you need to restore, you can remove /var/lib/docker and replace it with /var/lib/docker.bk.
- Edit /etc/docker/daemon.json and configure the options needed for the devicemapper storage driver. If the file was previously empty, it should now contain the following contents:
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.thinpooldev=/dev/mapper/docker-thinpool",
    "dm.use_deferred_removal=true",
    "dm.use_deferred_deletion=true"
  ]
}
- Start Docker:
- systemd:
sudo systemctl start docker
- service:
sudo service docker start
- Verify that Docker is using the new configuration by running the docker info command. If Docker is configured correctly, the Data file and Metadata file fields are blank, and the pool name is docker-thinpool.
- After you have verified that the configuration is correct, you can remove the /var/lib/docker.bk directory, which contains the previous configuration:
rm -rf /var/lib/docker.bk
- Method: loop-lvm (proof of concept only)
Attention: For a proof-of-concept deployment, loop-lvm mode can be used. However, loopback devices are slow and resource-intensive, and they require you to create files on disk at specific sizes. They can also introduce race conditions. This is why production hosts using the devicemapper storage driver must use direct-lvm mode, which uses block devices to create the thin pool. The direct-lvm mode is faster than using loopback devices, uses system resources more efficiently, and block devices can grow as needed.
To configure loop-lvm mode for a proof-of-concept deployment, complete the following steps:
- Stop Docker by running the following command:
sudo systemctl stop docker
- Edit the /etc/docker/daemon.json file (if it does not exist, create it now) and add the following lines to the file:
{
  "storage-driver": "devicemapper"
}
- Start Docker by running the following command:
sudo systemctl start docker
- Verify that the daemon is using the devicemapper storage driver by running the docker info command and looking for Storage Driver.
-
Disable swap on each server.
On each of your servers, you must disable swap to ensure that the kubelet component functions correctly.
- Disable swap by running the following command:
swapoff -a
- Edit the /etc/fstab file and comment out the following statement to ensure that swap is not enabled after an operating system restart:
# /dev/mapper/rhel-swap swap swap defaults 0 0
If the statement does not appear in the file, skip to step 5.
- If you made any changes to the /etc/fstab file, run the following command to apply the change:
mount -a
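You can confirm that swap is disabled by checking that the following command produces no output:
swapon --show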
-
Install kubeadm, kubelet, and kubectl on each server.
On each server, you will install the following packages:
- kubeadm: the command to bootstrap the cluster
- kubelet: the component that runs on all of the machines in your cluster and manages tasks such as starting pods and containers
- kubectl: the command line utility used for communicating with the cluster
Install the packages by running the following commands:
sudo bash -c 'cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF'
setenforce 0
yum install -y kubelet-1.11.9* kubeadm-1.11.9* kubectl-1.11.9* --disableexcludes=kubernetes
systemctl enable kubelet && systemctl start kubelet
Note: The setenforce 0 command disables SELinux to allow containers to access the host file system (required by pod networks, for example). You must include this command until SELinux support is improved in the kubelet component.
Ensure that the packages do not upgrade to a later version by running the following command to disable the kubernetes yum repo:
yum-config-manager --disable kubernetes*
Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly because iptables is bypassed. To avoid this problem, run the following commands to ensure that net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config:
sudo bash -c 'cat << EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF'
sysctl --system
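Before continuing, you can confirm the installed tool versions and the sysctl setting, for example:
kubeadm version
kubectl version --client
sysctl net.bridge.bridge-nf-call-iptables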
-
On the master, create a kubeadm-config.yaml template file. The contents of the file will vary depending on whether or not you want to enable the PodSecurityPolicy admission plugin (see "Pod Security Policies" for more information).
To enable the PodSecurityPolicy admission plugin, create the file with the following contents:
sudo bash -c 'cat << EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
apiServerExtraArgs:
  enable-admission-plugins: PodSecurityPolicy
kubernetesVersion: v1.11.9
networking:
  # This CIDR is a Calico default. Substitute or remove for your CNI provider.
  podSubnet: "192.168.0.0/16"
EOF'
If you do not want to enable the PodSecurityPolicy admission plugin, create the file with the following contents:
sudo bash -c 'cat << EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.9
networking:
  # This CIDR is a Calico default. Substitute or remove for your CNI provider.
  podSubnet: "192.168.0.0/16"
EOF'
-
Initialize the master.
kubeadm init --config=kubeadm-config.yaml
The output will look like the following example:
[init] Using Kubernetes version: vX.Y.Z
[preflight] Running pre-flight checks
... (log output of initialization workflow) ...
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the addon options listed at:
  http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node as root:
  kubeadm join --token token master-ip:master-port --discovery-token-ca-cert-hash sha256:hash
Note: Make a record of the kubeadm join command that is displayed in the kubeadm init command output; you will need this command to join nodes to your cluster.
-
To make kubectl work, run the following commands on the master (as shown in the sample output):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
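At this point you can confirm that kubectl can reach the API server, for example:
kubectl get nodes
The master will typically show a status of NotReady until the pod network add-on is installed in a later step.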
-
(All-in-one only) If you are installing an all-on-one-VM deployment, run the following command to allow all of the Component Pack pods to run on the master:
kubectl taint nodes --all node-role.kubernetes.io/master-
-
If you enabled the PodSecurityPolicy admission plugin in step 6, then you need to download the Component Pack installation zip to the master node, extract the file privileged-psp-with-rbac.yaml, and apply it so that system pods are able to start in the kube-system namespace:
unzip -p IC-ComponentPack-6.0.0.8.zip microservices_connections/hybridcloud/support/psp/privileged-psp-with-rbac.yaml > privileged-psp-with-rbac.yaml
To allow system pods to start in the kube-system namespace, apply the yaml file:
kubectl apply -f privileged-psp-with-rbac.yaml
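You can confirm that the pod security policy was created, for example:
kubectl get psp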
-
Install a pod network add-on so that your pods can communicate with each other.
The network must be deployed before any applications. An internal helper service, kube-dns, will not start up until a network is installed. As mentioned earlier, Calico is the network add-on chosen in this guide; the version used is Calico v3.3. To install Calico, run the following commands on the master:
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
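You can watch the network come up by listing the kube-system pods and waiting until the calico-node and DNS pods report a Running status, for example:
kubectl get pods -n kube-system -w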
-
Join your worker nodes to the cluster.
The worker nodes are where your workloads (containers, pods, and so on) run. To add new nodes to your cluster, run the command that was output by the kubeadm init command. For example:
kubeadm join --token Token Master_IP_address:Master_Port --discovery-token-ca-cert-hash sha256:Hash
The output looks like the following snippet:
[preflight] Running pre-flight checks
... (log output of join workflow) ...
Node join complete:
* Certificate signing request sent to master and response received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
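If you no longer have the original join command, or the token has expired, you can generate a new one on the master, for example:
kubeadm token create --print-join-command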
A few seconds later, you will see this node in the output when you run the kubectl get nodes command on the master.
-
Control your cluster from the worker nodes.
To get kubectl on the worker nodes to talk to your cluster, copy the administrator kubeconfig file from your master to your workers by running the following commands on every worker:
mkdir -p $HOME/.kube
scp root@Master_IP_address:$HOME/.kube/config $HOME/.kube
sudo chown $(id -u):$(id -g) $HOME/.kube/config
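To confirm that kubectl on a worker can reach the cluster, you can run, for example:
kubectl get nodes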
-
Install Helm.
Helm is the Kubernetes package manager that is required for installing IBM Connections™ Component Pack services. Helm version 2.11.0 is the recommended version for Kubernetes v1.11. To install Helm, run the following commands on the master:
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz
tar -zxvf helm-v2.11.0-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
helm init
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
sudo rm -f helm-v2.11.0-linux-amd64.tar.gz
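You can confirm that both the Helm client and the Tiller server deployed by helm init are available (the tiller-deploy pod in kube-system may take a minute or two to start), for example:
helm version
kubectl get pods -n kube-system | grep tiller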
-
Verify the deployment.
To verify that the Kubernetes deployment is fully functional and ready to host services, run the following command and make sure that all pods are listed as "Running":
kubectl get pods -n kube-system
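In addition to the pod list, you can confirm that all three nodes have joined the cluster and are Ready, for example:
kubectl get nodes -o wide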
If you encounter problems with kubeadm, consult the "Troubleshooting kubeadm" section of the Kubernetes Setup documentation.