Kubernetes setup – take II

Well, this is odd. This blog was supposed to be about so much more than setting up clusters. After a few (by now) years of using the cluster based on a Raspberry Pi architecture (see the other posts: part 1, part 2 and part 3), I decided to re-create that infrastructure on Raspberry Pi-free hardware. The main reason was the difficulty of finding Arm builds of some of the Docker images I wanted to run on that cluster (yes, one can always re-build those, but it is so much easier to just use them out of the box). I did try running a multi-architecture Kubernetes cluster, but it was more trouble than it was worth. Finally, I decided to turn my little desktop server into the worker node of a new cluster. As a few years (i.e., a few Kubernetes versions) have passed between the first attempt and now, this post documents the steps required to create a new cluster from scratch. Here we go – Kubernetes cluster setup, take II!

Hardware and other requirements

As mentioned above, the cluster will consist of a single worker node plus a dedicated master (control-plane) node. These are the two machines I am going to use in my setup:

  • master node: GMKtec NucBox Intel Celeron J4125
  • worker node: custom server running on AMD Ryzen 7 5700G + 32GB DDR4 RAM

Both of those run Ubuntu Server 22.04 as the OS. To install it, you will need a bootable USB stick – check out the official Ubuntu documentation for detailed instructions. The OS should be installed on both systems before you continue. When prompted, add Docker to your installation. You should install the OpenSSH server when given a choice to do so – we will be doing some SSH-ing here and there.

Important: this tutorial describes installation of Kubernetes v1.28 – it may or may not work with other versions.

Prepare the OS for k8s installation

Container runtime setup

Following the last reboot of both machines, let’s log in as the admin user and set up the container runtime we will be using. There are many options available (see here for an overview) – here we are using containerd. Let’s start by installing it:

    sudo apt install containerd.io=1.6.25-1

and generating the required config file:

    sudo bash -c 'containerd config default > /etc/containerd/config.toml'

Next, we need to tell containerd to use the systemd cgroup driver (see the documentation). To do that we open the config file generated above and modify the following lines:

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      ...
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true

Finally, we restart containerd:

    sudo systemctl restart containerd
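
If you want to double-check the result, the service state and the cgroup setting can be verified with:

    sudo systemctl is-active containerd
    sudo grep SystemdCgroup /etc/containerd/config.toml

The first command should print active and the second should show SystemdCgroup = true.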

Fixed IP address

Next, we give both the master and the worker node a fixed IP address by modifying the /etc/network/interfaces file. Open the file (you’ll need sudo to do that) and add the following lines:

    auto wlo2
    iface wlo2 inet static
        address 192.168.2.200
        netmask 255.255.255.0
        gateway 192.168.2.1
        dns-nameservers 8.8.8.8

Keep in mind that wlo2 indicates the name of the network interface we are using and 192.168.2.200 indicates the IP address we want to obtain for that machine. You will need to adjust both of those to your specific network.
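
One caveat: a default Ubuntu Server 22.04 installation manages networking with netplan rather than ifupdown, so /etc/network/interfaces may simply be ignored on your machine. In that case the equivalent static configuration can go into a netplan YAML file instead – a minimal sketch (the file name, interface name and addresses are examples to adapt; a Wi-Fi interface would go under wifis: together with its access-point details):

    # /etc/netplan/01-static.yaml – apply with: sudo netplan apply
    network:
      version: 2
      ethernets:
        eno1:
          dhcp4: false
          addresses: [192.168.2.200/24]
          routes:
            - to: default
              via: 192.168.2.1
          nameservers:
            addresses: [8.8.8.8]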

Additionally, we should add the host name of each of our nodes (master and worker) to /etc/hosts on every node – see below for an example from the master node (note the k8s-master and k8s-worker-01 entries):

    127.0.0.1 localhost
    127.0.1.1 k8s-master
    192.168.2.201 k8s-worker-01

    # The following lines are desirable for IPv6 capable hosts
    ...
    ...
    ...

Swap file

Finally, we need to disable swap, as per the Kubernetes instructions. To do that we need to edit the /etc/fstab file:

    sudo nano /etc/fstab

Now, identify the line referring to swap and comment it out by adding a # symbol at the beginning. The last few lines of the file should look like this:

    # /boot was on /dev/sda2 during curtin installation
    /dev/disk/by-uuid/9e2e0f87-403e-4a46-a095-f31db59dbdb7 /boot ext4 defaults 0 1
    # /boot/efi was on /dev/sda1 during curtin installation
    /dev/disk/by-uuid/E9E3-80F2 /boot/efi vfat defaults 0 1
    #/swap.img      none    swap    sw      0       0

Finally, we turn swap off, remove the existing swap file and reboot both machines:

    sudo swapoff -a
    sudo rm /swap.img
    sudo reboot
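
Once the machines are back up, it is worth confirming that swap is indeed gone:

    swapon --show    # should print nothing
    free -h          # the Swap line should show 0B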

SSH access

Following the reboot, we will generate an SSH key and upload it to both machines in order to be able to access them remotely. Let’s start by generating a new SSH key pair on your “local” computer:

    ssh-keygen -t ed25519 -f $HOME/.ssh/id_k8s

This will generate a key pair named id_k8s in your local .ssh directory. You can now copy the public key to both the master and the worker node:

    ssh-copy-id -i $HOME/.ssh/id_k8s.pub k8s@<IP of your machine>

where k8s is the user name on both machines (you may need to adjust it to your use case) and <IP of your machine> is the fixed IP address we configured earlier.
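
To avoid typing the key path on every connection, you can also add entries to ~/.ssh/config on your local machine – a sketch using the example host names and addresses from this post:

    Host k8s-master
        HostName 192.168.2.200
        User k8s
        IdentityFile ~/.ssh/id_k8s

    Host k8s-worker-01
        HostName 192.168.2.201
        User k8s
        IdentityFile ~/.ssh/id_k8s

After that, ssh k8s-master is all that is needed to log in.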

Enable kernel modules

Additionally, we need to enable two kernel modules: overlay (the overlay filesystem used by the container runtime for image layers) and br_netfilter (which makes bridged pod traffic visible to iptables, so that Kubernetes networking and network policy enforcement work properly). To have them loaded on every boot, open the following file for editing:

    sudo nano /etc/modules-load.d/containerd.conf

and add the following lines:

    overlay
    br_netfilter

To apply the changes immediately run:

    sudo modprobe overlay
    sudo modprobe br_netfilter
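
Both modules should now show up in the list of loaded modules:

    lsmod | grep -E 'overlay|br_netfilter'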

Kubernetes installation

We are ready to install all the required Kubernetes tools by following the instructions from the official documentation. Below are all the steps I followed.

CRI-dockerd adapter

This adapter will allow you to control Docker via the Kubernetes Container Runtime Interface. You can install it from here by following the official instructions:

    curl -sL https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.7/cri-dockerd_0.3.7.3-0.ubuntu-jammy_amd64.deb -o cri-dockerd.deb
    sudo dpkg -i ./cri-dockerd.deb
    sudo apt-get install -f
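
If the package installed correctly, it should have registered a systemd service and socket (named cri-docker.service and cri-docker.socket in the releases I have used), which you can check with:

    systemctl status cri-docker.service --no-pager
    systemctl status cri-docker.socket --no-pager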

Other dependencies

Install other required dependencies:

    sudo apt-get update
    sudo apt-get install -y apt-transport-https ca-certificates curl gpg

Kubernetes packages

In order to install k8s, we need to add the public signing key of the official Kubernetes package repository. This is version-specific – here we are adding the key for v1.28, as this is the version we will be installing:

    curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
    echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

Finally, we can install Kubernetes packages and put their version “on hold” to prevent future upgrades:

    sudo apt-get update
    sudo apt-get install -y kubelet kubeadm kubectl
    sudo apt-mark hold kubelet kubeadm kubectl
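
A quick sanity check that the expected v1.28.x versions were installed and pinned:

    kubeadm version -o short
    kubectl version --client
    apt-mark showhold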

Remaining configuration

We will update the system networking configuration by opening the kubernetes.conf file:

    sudo nano /etc/sysctl.d/kubernetes.conf

and adding the following lines:

    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1

You can now close the file and reload the configuration to apply changes:

    sudo sysctl --system
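
The new values should now be visible (all three should report 1):

    sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward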

Adjust kubelet’s configuration by opening:

    sudo nano /etc/default/kubelet

and adding:

    KUBELET_EXTRA_ARGS="--cgroup-driver=cgroupfs"

Cluster initialization

Important: These steps should be performed only on the master node!

We begin by pulling all the required images:

    sudo kubeadm config images pull

followed by cluster initialization:

    sudo kubeadm init --control-plane-endpoint=k8s-master --pod-network-cidr=10.1.0.0/16

The previous command should have printed a bunch of instructions for you to follow – these are the three commands you should see and should now execute:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

To have the kube config path set correctly at every login, add the following line to your .bashrc file:

    export KUBECONFIG=$HOME/.kube/config
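
At this point kubectl should already be able to talk to the API server – note that the node will report a NotReady status until a pod network add-on is installed in the next step:

    kubectl get nodes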

Networking plugin

Important: These steps should be performed only once, on the master node!

Before we continue with any other steps, we need to install a networking plugin so that our pods will be able to communicate with each other. There are many different options available – here, we are using Calico. We begin by installing the Tigera operator, creating its resources from the official manifest:

    kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/tigera-operator.yaml

Next, we need to fetch the additional resource definitions and edit them to match the pod network CIDR we used during cluster initialization:

    curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/custom-resources.yaml -O

In the custom-resources.yaml file, edit the cidr value so that it matches the pod network CIDR passed to kubeadm init via --pod-network-cidr. The relevant part of the file should look something like:

    spec:
      # Configures Calico networking.
      calicoNetwork:
        ipPools:
        - blockSize: 26
          cidr: 10.1.0.0/16
          encapsulation: VXLANCrossSubnet
          natOutgoing: Enabled
          nodeSelector: all()

Finally, install the definitions:

    kubectl apply -f custom-resources.yaml
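
It can take a minute or two for the Calico pods to start; once they are all Running, the node should switch to Ready:

    kubectl get pods -n calico-system
    kubectl get nodes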

Final remarks

There is one last thing remaining: setting up access from your local machine. These steps do not differ from what I already described before so feel free to check out the original post: Kubernetes on Raspberry Pi, part 2.
