Kubernetes QuickStart Deployment
This section describes how to quickly set up a basic, non-production Kubernetes cluster for pilot use, to host a non-production Sametime Meetings server.
Before you begin
- Download Docker CE.
- Download Kubernetes v1.16.0 or later with an ingress controller. The system should be resolvable in DNS and the host name should be set correctly:
hostnamectl status
hostnamectl set-hostname meetings.company.com
Note: The following steps support ingress install on Kubernetes 1.16 through 1.19 and are not compatible with later versions. If you are using a later version of Kubernetes, you must manually deploy an ingress to match your version; follow the instructions in the Installation Guide if, for example, you want an nginx ingress controller.
- Swap should be disabled:
sed -i '/swap/ s/^/# /' /etc/fstab
About this task
To set up a non-production, single-node Kubernetes cluster running on a CentOS 7 or RHEL 7 system, complete the steps in this procedure. When you're done, follow the procedure Installing Sametime Meetings with Kubernetes to install a non-production Sametime Meetings server.
- Prepare the Linux system.
- Verify that the system can be resolved by DNS:
hostnamectl status
- If the host name is not set correctly, set it:
hostnamectl set-hostname meetings.company.com
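As a quick sanity check, DNS resolution of the host name can be tested with getent. The sketch below uses localhost only so it is self-contained; substitute your real FQDN:

```shell
# Hypothetical DNS sanity check; replace HOST with your real FQDN
# (for example meetings.company.com). localhost is used here only
# so the example works on any machine.
HOST=localhost
if getent hosts "$HOST" >/dev/null; then
  echo "$HOST resolves"
else
  echo "$HOST does not resolve"
fi
```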
- Disable swap:
sed -i '/swap/ s/^/# /' /etc/fstab
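Because the sed command edits /etc/fstab in place, it can be previewed first by running it without -i against a copy. The sample file below is hypothetical; the real command targets /etc/fstab:

```shell
# Preview the swap-disable edit on a hypothetical fstab copy.
# Without -i, sed only prints the result and modifies nothing.
cat > /tmp/fstab.sample <<'EOF'
/dev/mapper/cl-root /    xfs  defaults 0 0
/dev/mapper/cl-swap swap swap defaults 0 0
EOF
sed '/swap/ s/^/# /' /tmp/fstab.sample
# The swap line is printed with a leading "# "; the root line is unchanged.
```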
- Enter the following commands to install Docker CE:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload
systemctl enable docker
systemctl restart docker
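Before restarting Docker, it can be worth confirming that the heredoc produced valid JSON. This sketch writes the same content to a temporary path so it does not touch /etc/docker; python3 is assumed to be available:

```shell
# Write the same daemon.json content to a temporary path and validate it
# as JSON before trusting it in /etc/docker (python3 assumed present).
cat > /tmp/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2",
  "storage-opts": [ "overlay2.override_kernel_check=true" ]
}
EOF
python3 -m json.tool /tmp/daemon.json >/dev/null && echo "daemon.json is valid JSON"
```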
- Enter the following commands to install Kubernetes:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
- Enter the following commands to create the internal Kubernetes cluster network and untaint the master node. Note: The following commands create the network 192.168.0.0/16. If you already use this network in your DMZ, specify a different network.
export POD_CIDR=192.168.0.0/16
kubeadm init --pod-network-cidr=$POD_CIDR
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
kubectl taint nodes --all node-role.kubernetes.io/master-
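Before running kubeadm init, the POD_CIDR value can be given a rough syntax check. This sketch only validates the a.b.c.d/nn shape; it does not detect overlap with your DMZ network:

```shell
# Rough syntax check of POD_CIDR before handing it to kubeadm init.
# Validates only the shape (a.b.c.d/nn), not overlap with existing networks.
POD_CIDR=192.168.0.0/16
if echo "$POD_CIDR" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]{1,2}$'; then
  echo "POD_CIDR looks valid: $POD_CIDR"
else
  echo "POD_CIDR malformed: $POD_CIDR"
fi
```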
- Enter the following commands to download the Calico 3.9 manifest, set its pod network to match your POD_CIDR, and install it:
curl -O https://docs.projectcalico.org/v3.9/manifests/calico.yaml
sed -i -e "s?192.168.0.0/16?$POD_CIDR?g" calico.yaml
kubectl apply -f calico.yaml
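The sed substitution can be tried on a stub manifest first. The file below is a hypothetical fragment standing in for the real calico.yaml, and 10.244.0.0/16 is an example of a non-default pod network:

```shell
# Demonstrate the CIDR substitution on a stub standing in for calico.yaml.
export POD_CIDR=10.244.0.0/16   # example non-default pod network
cat > /tmp/calico-stub.yaml <<'EOF'
- name: CALICO_IPV4POOL_CIDR
  value: "192.168.0.0/16"
EOF
sed -i -e "s?192.168.0.0/16?$POD_CIDR?g" /tmp/calico-stub.yaml
grep -A 1 CIDR /tmp/calico-stub.yaml
```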
- Enter the following commands to install
Helm:
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
- Enter the following command to verify the configuration:
# kubectl get pods -n kube-system
The output should look similar to the following:
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6b9d4c8765-8tlsb   1/1     Running   0          5m55s
calico-node-4wwff                          1/1     Running   0          5m55s
coredns-6955765f44-79g6n                   1/1     Running   0          7m5s
coredns-6955765f44-lns5m                   1/1     Running   0          7m5s
etcd-xxx.xxx.xxx                           1/1     Running   0          6m52s
kube-apiserver-xxx.xxx.xxx                 1/1     Running   0          6m52s
kube-controller-manager-xxx.xxx.xxx        1/1     Running   0          6m52s
kube-proxy-2mtg5                           1/1     Running   0          7m5s
kube-scheduler-xxx.xxx.xxx                 1/1     Running   0          6m52s
- Enable Ingress, which Sametime Meetings requires to allow inbound web traffic. Note: The following steps support ingress install on Kubernetes 1.16 through 1.19 and are not compatible with later versions. If you are using a later version of Kubernetes, you must manually deploy an ingress to match your version; follow the instructions in the Installation Guide if, for example, you want an nginx ingress controller. For more details, refer to the article Installing HCL Sametime Meetings 11.6 IF1 on newer versions of Kubernetes.
- If not done already, download and extract sametime_meetings.zip from Flexnet.
- Run the following command from the directory that contains the extracted files:
kubectl apply -f kubernetes/ingress/mandatory.yaml
- To apply custom certificates to your ingress controller, obtain the certificate(s) and private key. Then run the following commands to configure the ingress to use them. For KEY_FILE, specify the private key file; for CERT_FILE, specify the certificate(s) file.
export CERT_NAME=ingress-tls-cert
export KEY_FILE=privkey.pem
export CERT_FILE=fullchain.pem
kubectl -n ingress-nginx create secret tls ${CERT_NAME} --key ${KEY_FILE} --cert ${CERT_FILE}
kubectl patch deployment nginx-ingress-controller -n ingress-nginx --patch "$(cat kubernetes/ingress/nginx-tls-patch.yaml)"
To apply the change, restart the ingress controller:
kubectl scale deployment nginx-ingress-controller -n ingress-nginx --replicas=0
kubectl scale deployment nginx-ingress-controller -n ingress-nginx --replicas=1
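For a non-production pilot without CA-issued files, a self-signed key/certificate pair matching the KEY_FILE and CERT_FILE names above can be generated with openssl. The CN below is a placeholder; use your real host name:

```shell
# Generate a throwaway self-signed key/cert pair for a pilot system.
# meetings.company.com is a placeholder CN; substitute your real FQDN.
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout /tmp/privkey.pem -out /tmp/fullchain.pem \
  -subj "/CN=meetings.company.com"
# Confirm the certificate subject.
openssl x509 -in /tmp/fullchain.pem -noout -subject
```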
- To enable the EFK (Elasticsearch, Fluentd, Kibana) stack for global logging:
- Run the following commands from the directory where you extracted
sametime_meetings.zip to enable global logging on the system.
kubectl create namespace logging
kubectl create -f kubernetes/logging/elastic.yaml -n logging
kubectl create -f kubernetes/logging/kibana.yaml -n logging
kubectl create configmap fluentd-conf --from-file=kubernetes/logging/kubernetes.conf --namespace=kube-system
kubectl create -f kubernetes/logging/fluentd-daemonset-elasticsearch-rbac.yaml
- To access logs, run the following commands:
# kubectl get service -n logging
The output should look similar to the following:
NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
elasticsearch   NodePort   10.102.149.212   <none>        9200:30531/TCP   17m
kibana          NodePort   10.106.226.34    <none>        5601:32683/TCP   74s
In the example above, 10.106.226.34 is where Kibana can be accessed. To reach it from a remote machine, tunnel to that port via SSH (ssh -L 5601:10.106.226.34:32683), or use kube-proxy or some other ingress mechanism.
- To enable monitoring with Prometheus:
- Run the following commands from the directory where you extracted
sametime_meetings.zip:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
kubectl create namespace monitoring
helm install -n monitoring prometheus --set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false prometheus-community/kube-prometheus-stack
Note: If auto scaling is not needed and the necessary Kubernetes APIs for monitoring with Prometheus are not configured, you must delete the following files:
- helm/charts/jibri/templates/recorder-servicemonitor.yaml
- helm/charts/video/templates/video-servicemonitor.yaml
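Assuming the chart layout shipped in sametime_meetings.zip, deleting those two files can be scripted as follows, with paths relative to the extraction directory:

```shell
# Remove the ServiceMonitor templates when the Prometheus monitoring APIs
# are not configured. Paths are relative to the directory where
# sametime_meetings.zip was extracted; -f makes the command safe to re-run.
rm -f helm/charts/jibri/templates/recorder-servicemonitor.yaml \
      helm/charts/video/templates/video-servicemonitor.yaml
echo "ServiceMonitor templates removed"
```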
- To access the dashboards, run the following command:
# kubectl get service -n monitoring
The output should look similar to the following:
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
alertmanager-main       ClusterIP   10.102.208.148   <none>        9093/TCP                     3m28s
alertmanager-operated   ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   3m28s
grafana                 ClusterIP   10.99.202.138    <none>        3000/TCP                     3m27s
kube-state-metrics      ClusterIP   None             <none>        8443/TCP,9443/TCP            3m27s
node-exporter           ClusterIP   None             <none>        9100/TCP                     3m27s
prometheus-adapter      ClusterIP   10.96.21.117     <none>        443/TCP                      3m26s
prometheus-k8s          ClusterIP   10.108.84.189    <none>        9090/TCP                     3m26s
prometheus-operated     ClusterIP   None             <none>        9090/TCP                     3m26s
prometheus-operator     ClusterIP   None             <none>        8443/TCP
The IP:port pairs 10.108.84.189:9090, 10.102.208.148:9093, and 10.99.202.138:3000 are the dashboards for Prometheus, Alertmanager, and Grafana, respectively. To reach them from a remote machine, tunnel to those ports via SSH (ssh -L 9090:10.108.84.189:9090 -L 9093:10.102.208.148:9093 -L 3000:10.99.202.138:3000), or use kube-proxy or some other ingress mechanism.