How I built a Raspberry Pi home Kubernetes cluster

Are there any Kubernetes fans here? I have been using it for quite some time, both at work and for private projects, but sometimes I need a place where I can quickly and cheaply develop and test new features, or just, as they say, “play with the software”: copy data to backup storage, exchange files, and so on.

I did the math and realized that the total cost of the cluster is lower than the cost of cloud offerings with similar computing power and the same number of nodes. What else is there to explain?


Equipment

Why Raspberry Pi?

TL;DR: the main reasons are cost and processing power.

A cluster of four nodes combines the specs of each mini-computer (4 ARM CPU cores at 1.5 GHz and 4 GB RAM apiece), so in total we get 16 cores at 1.5 GHz and 16 GB of RAM.

Preparing a memory card

We start by downloading the operating system, and this will be the most time-consuming part of the project. I work with Docker and Kubernetes most of my time, and one of my favorite pastimes is keeping Docker image sizes to an absolute minimum. Most often I use Alpine Linux, so I will build my cluster on this distribution.

Go to the Alpine Linux Downloads page and select the AARCH64 version for the Raspberry Pi 4 Model B.

While the distribution is downloading, let’s prepare the memory card: format it as FAT32. I’m an OSX fanatic, so to get the disk ID of the memory card I usually use this command:

diskutil list

To format the entire memory card (I named it RPI), run this command:

sudo diskutil eraseDisk FAT32 RPI MBRFormat /dev/diskX # Replace diskX with the ID from diskutil list

Unpack the downloaded Alpine Linux archive and drop it onto the card:

sudo tar xf alpine-rpi-3.12.1-aarch64.tar.gz -C /Volumes/RPI

Basic system setup

Congratulations, you are one step closer to the Kubernetes world, and at home, no less! Insert the memory card into your Raspberry Pi, connect it to a monitor or TV and a keyboard, and turn on the power. After the system boots up and prompts you to log in, log in as root. The setup starts with this command:

setup-alpine
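For reference, here is roughly what the interactive wizard asks; the exact prompts vary between Alpine releases, so treat this as a sketch:

# setup-alpine walks through, approximately:
#  - keyboard layout (e.g. us)
#  - hostname (pi0)
#  - network interface: wlan0 for Wi-Fi or eth0 for cable, usually with DHCP
#  - root password, timezone, proxy, NTP client
#  - apk mirror and SSH server (openssh)
# Answer "none" at the disk step: we will set up persistent storage by hand below.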

I’m running the cluster at home, and my router doesn’t have enough free ports, so I decided to use the Wi-Fi network. There aren’t many options here, but each of them is worth considering.
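If the wizard doesn’t handle your Wi-Fi, a minimal manual sketch looks like this; MySSID and MyPassphrase are placeholders, and your interface may be named differently:

apk add wpa_supplicant
wpa_passphrase 'MySSID' 'MyPassphrase' > /etc/wpa_supplicant/wpa_supplicant.conf
wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf # -B runs it in the background
udhcpc -i wlan0 # Request an IP address over DHCP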

After completing the setup, there are a few more things to do. Alpine runs from RAM by default, but it would be better if our changes were saved to disk, so that we don’t have to re-enter them after every reboot.

apk update
apk add cfdisk e2fsprogs # Install disk tools
cfdisk /dev/mmcblk0      # Run cfdisk on your memory card

Here’s what to do:

  • Resize the FAT32 partition to a reasonable minimum – in my case, I set it to 1 GB.

  • Use all the remaining free space to create a new system (root) partition.

  • Remember to write the changes to disk before exiting.

Helpful guide: How to work with cfdisk.
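If everything went well, the partition table should end up looking roughly like this; exact sizes will differ depending on your card:

# /dev/mmcblk0p1   ~1G    W95 FAT32 (LBA)  <- resized boot partition
# /dev/mmcblk0p2   rest   Linux            <- new system (root) partition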

To complete the whole process, you need to run a few more commands:

mkfs.ext4 /dev/mmcblk0p2  # Format newly created partition as EXT4
mount /dev/mmcblk0p2 /mnt # Mount it
setup-disk -m sys /mnt    # Install system files
mount -o remount,rw /media/mmcblk0p1 # Remount old partition in RW
# Let's do some housekeeping
rm -f /media/mmcblk0p1/boot/*     # Clear the old boot directory
cd /mnt
rm boot/boot                      # Remove the dangling boot symlink
mv boot/* /media/mmcblk0p1/boot/  # Move boot files onto the FAT32 partition
rm -Rf boot
mkdir media/mmcblk0p1             # Mount point for the FAT32 partition inside the new root
ln -s media/mmcblk0p1/boot boot   # Make /boot point at the FAT32 partition

Update the /etc/fstab entries (we are still in /mnt, so the paths below are relative):

echo "/dev/mmcblk0p1 /media/mmcblk0p1 vfat defaults 0 0" >> etc/fstab
sed -i '/cdrom/d' etc/fstab
sed -i '/floppy/d' etc/fstab
cd /media/mmcblk0p1

And the finishing touches before a system reboot: keep in mind that if you do not enable the appropriate cgroups, the kubeadm step will fail later.

# Enable edge repository for Alpine 
sed -i '/edge/s/^#//' /mnt/etc/apk/repositories
# Force use of new partition as the root one
sed -i 's|^|root=/dev/mmcblk0p2 |' /media/mmcblk0p1/cmdline.txt
# Make sure that appropriate cgroups are enabled
echo "cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1" >> /media/mmcblk0p1/cmdline.txt
sed -i ':a;N;$!ba;s/\n/ /g' /media/mmcblk0p1/cmdline.txt # cmdline.txt must be a single line
rc-update add wpa_supplicant boot # Make sure your wifi will come back up after restart
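After these edits, cmdline.txt should be a single line along these lines; the exact set of pre-existing parameters depends on the Alpine image, so yours may differ:

root=/dev/mmcblk0p2 modules=loop,squashfs,sd-mod,usb-storage quiet console=tty1 cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1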

And finally, one very important thing after all these steps: commit the changes you’ve made and reboot the system.

lbu_commit -d
reboot

Configuring other system parameters

I already mentioned above that I am going to use the Kubernetes cluster at home. But, anticipating your question: yes, it will also work in an office network, although in that case you will need a few extra things.

Advertising the hostname on the local network with the Avahi daemon

Why is this step needed? Because it’s much easier to run the ssh pi0.local command than to fiddle with the corresponding IP address. Networking and cluster configuration will also become much easier, especially if you can’t use static IP addresses.

apk add dbus avahi
rc-update add dbus boot   # avahi won't start without dbus
rc-update add avahi-daemon boot
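To verify, from another machine on the same network (macOS resolves mDNS names out of the box):

ping pi0.local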

Allow ssh root access

Modify the /etc/ssh/sshd_config file: add the following line to allow root access over ssh.

PermitRootLogin yes
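For the change to take effect, restart the SSH daemon:

rc-service sshd restart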

Install Docker, Kubernetes and remaining packages. We’ll need them later.

apk update
apk add kubernetes docker cni-plugins kubelet kubeadm
rc-update add docker default
rc-update add kubelet default

Save yourself some effort

At this point, everything should be ready. As a final step, I turned off the Raspberry Pi, put the memory card back into the laptop and created an image of it, so that I could duplicate it onto the remaining three cards and save time.
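On macOS a minimal sketch of that imaging step with dd could look like this; diskX/diskY are placeholders for the card IDs from diskutil list, rpi-alpine.img is just a name I made up, and rdisk is the faster raw device:

diskutil unmountDisk /dev/diskX
sudo dd if=/dev/rdiskX of=rpi-alpine.img bs=1m # Read the prepared card into an image
sudo dd if=rpi-alpine.img of=/dev/rdiskY bs=1m # Write the image to each remaining card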

Remember: to avoid conflicts, you need to change the contents of /etc/hostname on each newly created machine. I named the computers pi0, pi1, and pi2 (easier to remember) and entered these names into my local ssh configuration (there are no restrictions on naming).
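On each clone the hostname change is a one-liner (shown for pi1; pi2 is analogous), and an entry in ~/.ssh/config on the laptop keeps the names short:

echo pi1 > /etc/hostname
hostname -F /etc/hostname # Apply the new name without a full reboot

# ~/.ssh/config on the laptop
Host pi1
    HostName pi1.local
    User root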

Creating a Kubernetes master node

service docker start
kubeadm config images pull   # Get the necessary images
kubeadm init --pod-network-cidr=10.244.0.0/16

If you see any cgroup-related errors stopping the initialization, you probably missed one of the steps above. If everything was done correctly, you should see a message that the Kubernetes control plane was initialized successfully: Your Kubernetes control-plane has initialized successfully!

Save the output of the command that starts with kubeadm join in a safe place. It will be needed to add the remaining nodes to the cluster.

To store credentials in your home directory, run these commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

How do I access a node?

To avoid possible conflicts, I copied the contents of the $HOME/.kube/config file from the node to my local machine, changing a few defaults. As a result, I can use tools like kubectl and k9s from my laptop and be sure that I always reach the right server.
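A minimal sketch of what I mean, run from the laptop; careful, this overwrites any existing local config, so merge by hand if you already have one, and the context name pi-cluster is just my choice:

scp root@pi0.local:/etc/kubernetes/admin.conf ~/.kube/config
kubectl config rename-context kubernetes-admin@kubernetes pi-cluster # Friendlier context name
kubectl get nodes # Should answer from the Pi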

The master node is running, what else needs to be done?

We need to provide networking between the pods: without it, the node will carry a taint and remain stuck in the NotReady state.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

By default, nothing can be scheduled on the master node and it carries a taint, but don’t worry: we can change that with the command

kubectl taint nodes --all node-role.kubernetes.io/master-
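To check that both steps worked, the node should now report Ready and carry no taints:

kubectl get nodes # STATUS should be Ready once flannel is up
kubectl describe node pi0 | grep -i taint # Should print: Taints: <none>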

Now for the dashboard. Show me a person who doesn’t like having a dashboard in their software! Kubernetes has its own fairly universal dashboard, which lets you view the entire cluster and everything inside it.
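The commands below assume Helm is already installed on the machine you run kubectl from; if it isn’t, one quick way to get it on macOS is:

brew install helm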

# Add kubernetes-dashboard repository
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
# Deploy a Helm Release named "kubernetes-dashboard" using the kubernetes-dashboard chart
helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --set protocolHttp=true,ingress.enabled=true,rbac.create=true,serviceAccount.create=true,service.externalPort=9090,networkPolicy.enabled=true,podLabels.app=dashboard

As you may have noticed, I did a pretty good job of digging into the Helm chart settings, but there were reasons for that.

Your dashboard will now work, but… it won’t show anything, because it doesn’t have any permissions yet.

kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=default:kubernetes-dashboard

We’re almost there. We have a master node and a dashboard, but no access to the latter yet. Of course, you could use a NodePort to reach the dashboard, but we will go another way and get access through Kubernetes itself, and for that we need a LoadBalancer service.

The node runs on a local network, so we can’t count on any goodies from AWS or Google Cloud, but there is nothing to be afraid of: this problem is entirely solvable.

Home network load balancing

Follow the MetalLB installation instructions up to the end of the “Installation By Manifest” section.

ifconfig wlan0 promisc  # Set PROMISC mode for WiFi - for ARP

This setting only holds while the Pi stays powered on. To avoid the extra work of writing startup scripts, I decided to modify the /etc/network/if-up.d/dad file and enable promiscuous mode there: in this mode the network card accepts all packets, regardless of who they are addressed to.

#...
        ip address show dev $IFACE | grep -q " $1 "
        ip link set $IFACE promisc on # Added line: turn on promiscuous mode when the interface comes up
#...

Create the following manifest: my-dashboard.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.50.200-192.168.50.250
---
apiVersion: v1
kind: Service
metadata:
  name: k8s-dashboard
  annotations:
    metallb.universe.tf/address-pool: default
spec:
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: dashboard
  type: LoadBalancer

Don’t forget to change the address section according to your local network settings.

kubectl apply -f my-dashboard.yaml
kubectl get svc k8s-dashboard 
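The second command should show the external IP that MetalLB leased from the pool, something along these lines (the exact values will differ in your network):

NAME            TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
k8s-dashboard   LoadBalancer   10.104.32.18   192.168.50.200   80:31234/TCP   1m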

Now, in my case, the dashboard can be accessed at http://192.168.50.200/.

Raspberry Pi based k8s cluster.

This article, imbued with a love of experimentation, was written on a day off, brightened by several cans of energy drink.

Cluster pods overview presented by k9s.

Adding additional nodes

I adhere to the principles of DRY (Don’t Repeat Yourself) and KISS (Keep It Simple, Stupid), so I will not repeat anything here, just explain it in simple words. Go back to the beginning of the article, repeat all the steps on the newly created nodes up to the section “Creating a Kubernetes master node”, and then run the following commands (don’t forget to replace the IP address with your master node’s address, or specify the hostname pi0.local instead; special thanks to avahi-daemon for that option):

service docker start
kubeadm config images pull
kubeadm join 192.168.50.132:6443 --token dugwjt.0k3n --discovery-token-ca-cert-hash sha256:55cfadHelloSuperSecretHashbf4970f49dcadf533f86e3dba

Tip: if you forgot to copy the kubeadm join command while creating the master node, don’t be discouraged: just run the command below on the master node and it will print the join command again.
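kubeadm token create --print-join-command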
