Kubernetes on Raspberry Pi, part 2

Welcome back!

And here we go again with the next part of the Kubernetes on Raspberry Pi tutorial. Previously, we put together all the cluster components and configured all the elements required to install Kubernetes. In this part, we will perform the actual installation and configure the Kubernetes Dashboard to give us a visual overview of what’s going on in our cluster.

Kubernetes installation

For this part of the tutorial it will again be useful to work in a terminal with multi-session support (as described in the Set up networking section of part 1).
  • ssh into all the nodes (master + workers) – the following steps are performed on all of them:

ssh pi@<node ip>

  • add a new trusted key required for Kubernetes download/installation:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

  • add a new apt repository by executing the following:

cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
				
			
  • install Kubernetes version 1.19.6:

sudo apt-get update
sudo apt-get install -y kubelet=1.19.6-00 kubeadm=1.19.6-00 kubectl=1.19.6-00
				
			
Important: the kubelet and kubectl versions must match the version of kubeadm. More details can be found here.
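If you want to double-check which versions the repository offers and that all three tools ended up on the same version, a quick sanity check could look like this:

# list the versions available in the apt repository
apt-cache madison kubeadm

# confirm that all three tools report the same version
kubeadm version -o short
kubectl version --client --short
kubelet --version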
  • hold those packages at their current versions so they are not upgraded automatically:

sudo apt-mark hold kubelet kubeadm kubectl
				
			
  • enable control groups (cgroups) – see some more info here:

sudo nano /boot/cmdline.txt
				
			

append the following to the end of the existing line (everything in cmdline.txt has to stay on one single line):

				
cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
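The exact contents of cmdline.txt depend on your Raspberry Pi OS image, but after the edit the file should still consist of a single line that looks roughly like this (the PARTUUID value below is just a placeholder):

console=serial0,115200 console=tty1 root=PARTUUID=xxxxxxxx-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory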
				
			
  • save the file and reboot all the nodes:

sudo reboot
				
			

The following steps are performed only on the master node:

  • pull all the images required by kubeadm:

sudo kubeadm config images pull -v3
				
			
  • initialise a Kubernetes control-plane node:

sudo kubeadm init --token-ttl=0
				
			
Important: tokens with a TTL of 0 pose a security risk – never do that in production!
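A safer habit, even on a home cluster, is to initialise without --token-ttl=0 and instead generate a fresh join command on the master whenever you need to add a node – the token it prints expires after 24 hours by default:

# run on the master node whenever you need to join a new worker
sudo kubeadm token create --print-join-command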

This step will print a few commands that you need to run on your master node. Go ahead and run them – they should look similar to this:

				
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
				
			

Additionally, you should see another command that will let you join the worker nodes to the cluster. It should look like:

				
sudo kubeadm join 192.168.1.100:6443 \
    --token v86cmn.sg6fyr.... \
    --discovery-token-ca-cert-hash sha256:aa901....bfba0
				
			
  • before we join the worker nodes though, we need to install a networking plugin that will allow the nodes and the resources located within them to communicate with each other:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
				
			
  • there is one issue with the Weave Net plugin installed in the previous step – the Docker image version that it uses is not compatible with ARM-based systems (see more info in this GitHub issue). To make it work, we need to downgrade the image version from 2.7.0 to 2.6.5 by editing the daemonset configuration:

KUBE_EDITOR="nano" kubectl edit daemonset weave-net -n kube-system
				
			

In the containers section, change the image to something like this:

				
image: docker.io/weaveworks/weave-kube:2.6.5
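As an aside, if you prefer a non-interactive one-liner over editing the manifest by hand, kubectl set image can make the same change – assuming the container inside the weave-net daemonset is named weave, as it is in the upstream manifest:

# patch the weave-net daemonset to use the ARM-compatible 2.6.5 image
kubectl -n kube-system set image daemonset/weave-net weave=docker.io/weaveworks/weave-kube:2.6.5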
				
			
  • save and close the file – in a minute or two the plugin should come up. You can check that by running:

kubectl get pods --all-namespaces
				
			

This will show you the status of all the pods (in all namespaces). You should see a pod with a name like weave-net-xxxx and a status of ContainerCreating or Running. You can also append the -w flag to that command – this will keep refreshing the status of the pods, and after a while you should see that all of them are Running.
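For example, to keep watching until everything settles down (press Ctrl+C to stop):

kubectl get pods --all-namespaces -w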

Finally, we are ready to join the other nodes to the cluster. On your worker nodes, execute the command you were given in one of the steps above:

				
sudo kubeadm join 192.168.1.100:6443 \
    --token v86cmn.sg6fyr.... \
    --discovery-token-ca-cert-hash sha256:aa901....bfba0
				
			

On the master you can run:

				
kubectl get nodes -w
				
			

You should now see your worker nodes joining the cluster and, after a short while, reporting a Ready status!

Cluster access from your local machine

So far, we have executed all the commands directly on one of the cluster nodes. That is, however, not very convenient for our future work – we do not necessarily want to ssh into the cluster every time.

Kubernetes uses the so-called kubeconfig file to configure access to clusters. This file was created when we initialised the cluster and was placed under ~/.kube/config. For us to be able to access the cluster, we need to copy that file from the master node to our local machine first. If you are using a Unix-based system, you can do it using the scp command. On your local machine execute:

				
scp pi@master_ip:~/.kube/config ~/.kubeconfig-local
				
			

This will copy the config file from the master node to your home directory (remember to replace the master_ip with the actual IP address of your master node). 

Now, you will need to install the kubectl tool on your local machine. As pointed out above, remember that the version of kubectl must correspond to the version of kubeadm on your cluster. For OS-dependent installation instructions go to the official documentation.
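For example, on a Linux amd64 machine, downloading a matching kubectl binary could look roughly like this (adjust the version and architecture to your setup, and see the official docs for macOS and Windows):

# download the kubectl binary that matches the cluster version
curl -LO https://dl.k8s.io/release/v1.19.6/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl

# verify the client version
kubectl version --client --short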

Finally, you should be able to access the cluster from your local machine. If your kubeconfig is stored in a location different from the default ~/.kube/config you need to tell kubectl which config file it should use. You can do it by passing the location in a KUBECONFIG environment variable:

				
export KUBECONFIG=~/.kubeconfig-local
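Alternatively, if you don't want to set the environment variable in every shell session, you can point kubectl at the file explicitly for a single command:

kubectl --kubeconfig ~/.kubeconfig-local get nodes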
				
			

That’s it! Now you should be able to use kubectl on your local machine to execute commands directly against your cluster. To check that it all worked, just run:

				
kubectl get nodes
				
			

If everything is fine, you should see a list of your cluster nodes (just as we did before on the master node).

Kubernetes Dashboard

While this step is optional, I do think there is some benefit in installing Kubernetes Dashboard. Particularly for new users, it gives a nice visual overview of resources configured in the cluster, their usage and potential ongoing issues/errors. It also allows deploying new resources to the cluster. If you want to install the dashboard, just follow the steps described below.

  • apply the k8s manifest for the required dashboard resources:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
				
			
  • all the new resources will be deployed in a new kubernetes-dashboard namespace, so if you want to see what got deployed and with what status, you can run:

kubectl get all -n kubernetes-dashboard
				
			
  • to be able to log in, you need to create a new user with the right set of permissions – just follow the steps described here, after which you will be able to log in using a Bearer Token
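For reference, a minimal version of those steps might look like this – it creates an admin-user service account, binds it to the built-in cluster-admin role (far more than you would ever grant in production) and prints its token, which you can paste into the Dashboard login screen:

# create a service account for the dashboard and give it cluster-admin rights
kubectl -n kubernetes-dashboard create serviceaccount admin-user
kubectl create clusterrolebinding admin-user \
  --clusterrole=cluster-admin \
  --serviceaccount=kubernetes-dashboard:admin-user

# print the bearer token of the new service account
kubectl -n kubernetes-dashboard describe secret \
  $(kubectl -n kubernetes-dashboard get serviceaccount admin-user -o jsonpath='{.secrets[0].name}')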

  • on your local machine execute:

				
kubectl proxy
				
			
This will make your dashboard available at http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
Note: The Dashboard UI is only accessible from the machine where the above command is executed.
That’s all! Feel free to browse around and explore the dashboard. Go through different namespaces, check out what pods are already running in the cluster and how much of the cluster’s resources they are using. We will come back here later to follow resource usage during our genome analyses.

This concludes the second part of the Kubernetes on Raspberry Pi tutorial. In this part you installed and configured a functional k8s cluster and deployed a Kubernetes Dashboard for easy cluster monitoring. In part 3 we will add some persistent storage to our cluster where we will store the genomes and all analysis results. See you there!
