Preparing a k8s cluster on Orange Pi 3 LTS
Introduction
Hi! In my work, I regularly implement various solutions on a Kubernetes cluster. For testing projects, it is important to have an environment that is inexpensive, easy to maintain and, if necessary, able to carry lightly loaded production applications.
The easiest way is to use virtual machines or container-based solutions (like kind, Kubernetes in Docker), but I don't like this approach because of the limitations of virtualization and resources. I want a cluster that can be used for real business tasks and that stays reliable in case of failures. We will bootstrap the cluster with the kubeadm utility.
An ideal and budget-friendly option is an ARM-based microcomputer such as the Orange Pi 3 LTS.
I have heard of Russian analogues such as Repka Pi, but have no experience with them yet, and the Raspberry Pi, although it has many add-on modules, is more expensive. The Orange Pi 3 LTS is compact, reasonably powerful and ships with a Debian 11 OS image. The device has 4 cores clocked at 1.8 GHz and 2 GB of RAM. At the time of writing it costs about 4000 ₽, which is quite reasonable.
Test bench
To keep the cluster from turning into a mess, I modeled and 3D-printed a mount for the boards. Since I actively use these machines for other tasks, the mount is disassemblable, so I can easily remove a board, swap its microSD card and run it elsewhere. Pure functionality.
Our plan
Since this is a cluster, we need to prepare at least two nodes (a control node and a worker). What needs to be done:
Prepare the machines, set up networking and authentication
Set up Docker and cri-dockerd on the machines
Set up a load balancer
Add the nodes to the cluster
Install a cluster dashboard
Our cluster will look like this:
opi-node1.internal 192.168.0.90 Control node
opi-node2.internal 192.168.0.91 Worker node
opi-node3.internal 192.168.0.92 Worker node (we will keep it in the configuration for future expansion)
192.168.0.95 – virtual IP address (used by the load balancer)
1. Preparing the nodes
First, let's download the latest Debian image from the Orange Pi 3 LTS page on the manufacturer's official website.
Then write the image to the microSD card with your preferred disk-imaging tool, such as Rufus.
After booting, log in to the system with the default credentials (login: orangepi, password: orangepi).
Now let's deal with the network: disable NetworkManager and configure static addresses. In my case:
for node1
sudo systemctl stop NetworkManager
sudo systemctl disable NetworkManager
sudo tee /etc/network/interfaces <<EOF
source /etc/network/interfaces.d/*
# Interfaces are configured via /etc/network/interfaces.d/
auto lo
iface lo inet loopback
EOF
sudo tee /etc/network/interfaces.d/lan <<EOF
auto eth0
iface eth0 inet static
address 192.168.0.90
netmask 255.255.255.0
gateway 192.168.0.1
dns-nameservers 192.168.0.1
EOF
sudo tee /etc/resolv.conf <<EOF
# Generated by NetworkManager
search opi-node1.internal
nameserver 192.168.0.1
EOF
sudo systemctl restart networking
for node2
sudo systemctl stop NetworkManager
sudo systemctl disable NetworkManager
sudo tee /etc/network/interfaces <<EOF
source /etc/network/interfaces.d/*
# Interfaces are configured via /etc/network/interfaces.d/
auto lo
iface lo inet loopback
EOF
sudo tee /etc/network/interfaces.d/lan <<EOF
auto eth0
iface eth0 inet static
address 192.168.0.91
netmask 255.255.255.0
gateway 192.168.0.1
dns-nameservers 192.168.0.1
EOF
sudo tee /etc/resolv.conf <<EOF
# Generated by NetworkManager
search opi-node2.internal
nameserver 192.168.0.1
EOF
sudo systemctl restart networking
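Before going further, it is worth a quick sanity check on each node that the static configuration actually came up (the addresses below are the ones from the table above):
ip addr show eth0 # the static address should be listed
ip route | grep default # default route via 192.168.0.1
ping -c 3 192.168.0.1 # the gateway is reachable
ping -c 3 deb.debian.org # DNS resolution works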
Let's set up the APT repositories on all nodes
sudo nano /etc/apt/sources.list
deb http://deb.debian.org/debian bullseye main contrib non-free
deb http://deb.debian.org/debian bullseye-updates main contrib non-free
deb http://deb.debian.org/debian bullseye-backports main contrib non-free
deb http://security.debian.org/ bullseye-security main contrib non-free
Let's update the cache and the system
sudo apt update
sudo apt upgrade
For security, let's create a new user and remove the default one
sudo useradd -s /bin/bash ch
groups
sudo usermod -aG tty,disk,dialout,sudo,audio,video,plugdev,games,users,systemd-journal,input,netdev,ssh ch
sudo passwd ch
sudo passwd orangepi
sudo mkhomedir_helper ch
su ch
# Remove the orangepi user
sudo deluser --remove-all-files orangepi
Let's set the time zone
sudo timedatectl set-timezone Europe/Moscow
timedatectl
Let's set up RSA keys for passwordless authentication and add the public key to each machine
ssh-keygen -t rsa
mkdir -p ~/.ssh ; \
chmod 700 ~/.ssh ; \
touch ~/.ssh/authorized_keys ; \
chmod 600 ~/.ssh/authorized_keys ; \
nano ~/.ssh/authorized_keys
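If password SSH access to the nodes already works, ssh-copy-id does the same thing in one step; run it from the machine where the key pair was generated (the user and host names below are the ones used in this article):
ssh-copy-id ch@opi-node1.internal
ssh-copy-id ch@opi-node2.internal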
Let's set up node names
for opi-node1.internal
sudo apt install dnsutils -y
sudo hostnamectl set-hostname opi-node1.internal
sudo tee /etc/hosts <<EOF
127.0.0.1 localhost
127.0.1.1 opi-node1.internal opi-node1
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
# Cluster nodes
192.168.0.90 opi-node1.internal
192.168.0.91 opi-node2.internal
192.168.0.92 opi-node3.internal
EOF
sudo systemctl restart systemd-hostnamed
sudo hostname opi-node1.internal
for opi-node2.internal – the same steps with node2's name; the mirrored commands are shown below
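Only the hostname and the 127.0.1.1 line differ from node1:
sudo apt install dnsutils -y
sudo hostnamectl set-hostname opi-node2.internal
sudo tee /etc/hosts <<EOF
127.0.0.1 localhost
127.0.1.1 opi-node2.internal opi-node2
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
# Cluster nodes
192.168.0.90 opi-node1.internal
192.168.0.91 opi-node2.internal
192.168.0.92 opi-node3.internal
EOF
sudo systemctl restart systemd-hostnamed
sudo hostname opi-node2.internal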
Let's install the necessary utilities on all nodes
sudo apt install -y curl wget gnupg sudo iptables tmux keepalived haproxy
Let's configure autoloading of the br_netfilter and overlay kernel modules, which are required for cluster networking and storage, and allow IP traffic routing between interfaces. Also, for the cluster nodes to work correctly, swap must be disabled.
sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
#<------- Allow IP traffic routing
echo -e "net.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/10-k8s.conf
sudo sysctl -f /etc/sysctl.d/10-k8s.conf
#<------- Disable swap
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
On our device swap is not enabled via /etc/fstab, so editing fstab alone is not enough; as a workaround, add a swapoff -a line to the /etc/rc.local file before the exit 0 line. Swap will then be disabled every time the system boots.
sudo nano /etc/rc.local
....
swapoff -a
exit 0
We do checks:
#<------- Check that the br_netfilter and overlay modules are loaded automatically
sudo lsmod | grep br_netfilter
sudo lsmod | grep overlay
## Expected output (approximately):
# br_netfilter 32768 0
# bridge 258048 1 br_netfilter
# overlay 147456 0
#<------- Check that the network stack parameters were applied
sudo sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
# Expected output (approximately):
# net.bridge.bridge-nf-call-iptables = 1
# net.bridge.bridge-nf-call-ip6tables = 1
# net.ipv4.ip_forward = 1
#<------- Check that swap is disabled:
sudo swapon -s
## The expected output is empty: the command should print nothing
2. Setting up Docker and CRI-Dockerd
Setting up a Kubernetes deb repository
#<------- Download the repository key
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/trusted.gpg.d/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
#<------- Install the kubelet, kubeadm and kubectl packages
sudo apt-get install -y kubelet kubeadm kubectl
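The official Kubernetes installation guide also recommends pinning these packages, so that a routine apt upgrade does not unexpectedly move the cluster to a new version:
sudo apt-mark hold kubelet kubeadm kubectl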
#<------- Install Docker
sudo curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker.gpg
sudo echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/trusted.gpg.d/docker.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
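A quick way to make sure Docker itself works on the arm64 board (hello-world has an arm64 image, so it should run as-is):
sudo systemctl enable --now docker
sudo docker version
sudo docker run --rm hello-world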
cri-dockerd is required so that Kubernetes can use Docker Engine as its container runtime through the CRI.
uname -m
#<------- we can see that the device architecture is ARM64 (aarch64)
sudo wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.14/cri-dockerd-0.3.14.arm64.tgz
sudo tar xvf cri-dockerd-*.tgz
sudo mv cri-dockerd/cri-dockerd /usr/local/bin/
sudo wget https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.service
sudo wget https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.socket
sudo mv cri-docker.socket cri-docker.service /etc/systemd/system/
sudo sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service
sudo systemctl daemon-reload
sudo systemctl enable cri-docker.service
sudo systemctl enable --now cri-docker.socket
sudo usermod -aG docker ch
#<------- Check that the cri-dockerd socket is available
sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
## Expected output (approximately):
# Version: 0.1.0
# RuntimeName: docker
# RuntimeVersion: 23.0.1
# RuntimeApiVersion: v1
3. Configuring the load balancer
The keepalived daemon provides the virtual IP address, which becomes a second address on the node's network interface. If the node fails, keepalived moves the virtual address to another available node. The haproxy daemon distributes requests to the API servers of the control-plane nodes in turn. Don't forget to specify the correct network interface names!
sudo nano /etc/keepalived/keepalived.conf
global_defs {
    enable_script_security
    script_user nobody
}

vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 3
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 5
    priority 100
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass ZqSj#f1G
    }
    virtual_ipaddress {
        192.168.0.95
    }
    track_script {
        check_apiserver
    }
}
sudo nano /etc/keepalived/check_apiserver.sh
#!/bin/sh
#-ch- from ch-script
# File: /etc/keepalived/check_apiserver.sh
#-
APISERVER_VIP=192.168.0.95
APISERVER_DEST_PORT=8888
PROTO=https
#-
errorExit() {
    echo "*** $*" 1>&2
    exit 1
}
#-
curl --silent --max-time 2 --insecure ${PROTO}://localhost:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET ${PROTO}://localhost:${APISERVER_DEST_PORT}/"
if ip addr | grep -q ${APISERVER_VIP}; then
    curl --silent --max-time 2 --insecure ${PROTO}://${APISERVER_VIP}:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET ${PROTO}://${APISERVER_VIP}:${APISERVER_DEST_PORT}/"
fi
Let's make the script executable and start the keepalived daemon
sudo chmod +x /etc/keepalived/check_apiserver.sh
sudo systemctl enable keepalived
sudo systemctl start keepalived
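Keep in mind that the track_script gates the virtual address: until the API server behind haproxy starts answering (i.e. before kubeadm init), the check fails and keepalived keeps the instance in the FAULT state, so the address will not appear on eth0 yet. Once the control plane is up, you can verify it roughly like this:
ip addr show eth0 | grep 192.168.0.95 # should be shown on exactly one node
sudo journalctl -u keepalived --no-pager | tail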
Let's set up the haproxy daemon
sudo nano /etc/haproxy/haproxy.cfg
# File: /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log /dev/log local0
    log /dev/log local1 notice
    daemon

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 1
    timeout http-request 10s
    timeout queue 20s
    timeout connect 5s
    timeout client 20s
    timeout server 20s
    timeout http-keep-alive 10s
    timeout check 10s

#---------------------------------------------------------------------
# apiserver frontend which proxies to the control plane nodes
#---------------------------------------------------------------------
frontend apiserver
    bind *:8888
    mode tcp
    option tcplog
    default_backend apiserver

#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance roundrobin
    server opi-node1 192.168.0.90:6443 check
    server opi-node2 192.168.0.91:6443 check
    server opi-node3 192.168.0.92:6443 check
Let's start the daemon and add it to startup
sudo systemctl enable haproxy
sudo systemctl restart haproxy
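To validate the configuration and make sure haproxy is listening on the frontend port (the backend checks will only turn green after the control plane is initialized):
sudo haproxy -c -f /etc/haproxy/haproxy.cfg # syntax check of the configuration
ss -ltn | grep 8888 # the frontend port is listening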
Let's check the availability of the cri-dockerd socket
sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
## Expected output (approximately):
# Version: 0.1.0
# RuntimeName: docker
# RuntimeVersion: 23.0.1
# RuntimeApiVersion: v1
Let's reboot all nodes
sudo reboot
4. Adding nodes to the cluster
Run the cluster initialization on the control node
sudo kubeadm init \
--cri-socket unix:///var/run/cri-dockerd.sock \
--pod-network-cidr=10.244.0.0/16 \
--control-plane-endpoint "192.168.0.95:8888" \
--upload-certs
Wait for the initialization to finish; on success, kubeadm prints the join tokens and the certificate key. They will look something like this:
for joining additional control-plane nodes
sudo kubeadm join 192.168.0.95:8888 --token zj3j9x.p63c8r2a7vb57cr3 \
--discovery-token-ca-cert-hash sha256:25fc69ce47192e5zcp93746ca20f67ec86dafb39f6161a0e221f53ddebbf8c2 \
--control-plane --certificate-key d30c1cad2fv765zx36d599d198172a11270550e0bc0e6d1e81792ab81b310ec0 \
--cri-socket unix:///var/run/cri-dockerd.sock
for joining worker nodes
sudo kubeadm join 192.168.0.95:8888 --token h04o9e.qnon45rtyy9qhgyo \
--discovery-token-ca-cert-hash sha256:25fc69ce47192e5zcp93746ca20f67ec86dafb39f6161a0e221f53ddebbf8c2 \
--cri-socket unix:///var/run/cri-dockerd.sock
Don't forget to add --cri-socket unix:///var/run/cri-dockerd.sock so the join commands work with cri-dockerd
If you have lost the token, you can generate a new join command:
kubeadm token create --print-join-command
Also, if kubectl complains about permissions, you can make the admin config readable by all users (this reduces security, but if you are the only one working on the server, it can be a justified workaround for startup errors)
sudo chmod +r /etc/kubernetes/admin.conf
Setting up the kubectl environment variables on the control node
sudo sh -c 'echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/environment'
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bashrc
source ~/.bashrc
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Let's install a CNI network plugin (Flannel) on the control node; it is needed to provide network connectivity between pods in the cluster.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
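A quick way to check that Flannel has rolled out; depending on the manifest version the pods land either in the kube-flannel or the kube-system namespace, so both are checked here:
kubectl get pods -n kube-flannel -o wide
kubectl get pods -n kube-system -o wide | grep flannel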
Let's test the cluster operation on the control node
kubectl get nodes
kubectl get pods -A
kubectl describe node opi-node1.internal
kubectl describe node opi-node2.internal
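As a final smoke test you can run a small workload; nginx has arm64 images, so a throwaway deployment is enough to see scheduling in action (the names here are arbitrary):
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80 --type=NodePort
kubectl get pods -o wide
kubectl get svc nginx-test
# clean up afterwards
kubectl delete svc,deployment nginx-test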
5. Installing the cluster dashboard
Let's install Helm on the control node by downloading and running the official installer script:
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
Let's add the Kubernetes Dashboard Helm repository
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
Let's create a namespace kubernetes-dashboard in which Kubernetes Dashboard will be installed.
kubectl create namespace kubernetes-dashboard
Let's install Kubernetes Dashboard from the Helm chart using the following command
helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --namespace kubernetes-dashboard
Let's forward the dashboard service port to localhost:
kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
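By default port-forward binds only to 127.0.0.1 on the control node; to open the dashboard from a workstation on the same network, you can bind to all interfaces (or tunnel the port over SSH instead):
kubectl -n kubernetes-dashboard port-forward --address 0.0.0.0 svc/kubernetes-dashboard-kong-proxy 8443:443
# then open https://opi-node1.internal:8443 in a browser and accept the self-signed certificate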
Let's check that the Kubernetes Dashboard installation succeeded:
kubectl get pods -n kubernetes-dashboard
You may need to wait a few minutes for all the dashboard pods to start.
Find out the name of the service account:
kubectl get sa -n kubernetes-dashboard
Request a token for it:
kubectl -n kubernetes-dashboard create token default
Let's grant the service account the roles it needs to work with the dashboard
sudo tee dashboard-adminuser.yaml <<EOF
#-ch- from ch-script
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kubernetes-dashboard
EOF
Apply the manifest, and after that you will see a fully functional dashboard!
kubectl apply -f dashboard-adminuser.yaml
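After the binding is applied, issue a fresh token for the default service account and paste it into the dashboard login form:
kubectl -n kubernetes-dashboard create token default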
Congratulations! Now that your cluster is set up, you can continue exploring this useful and powerful tool at home.
Thank you for your attention!