Creating a Kubernetes cluster, explained simply, or why it’s not difficult

Hello, my name is Ruslan. I am an enthusiast in an artificial intelligence department, where I automate the development process and manage the infrastructure inside Kubernetes. In this article I would like to walk through deploying a Kubernetes cluster in detail and show solutions to possible errors whose answers took me quite a long time to find. After finishing the article, you will know how to create a cluster that suits almost any task.

Stack used

  • 3x VM Ubuntu 20.04 (cloud).

  • Kube* == 1.23.3.

  • Docker CE + containerd.

  • Flannel is a container network interface that assigns IP addresses to Pods for their interaction with each other.

  • MetalLB – a LoadBalancer implementation that issues external IP addresses to Services from a pool we specify.

  • Ingress NGINX Controller – an Ingress controller that uses NGINX as a reverse proxy and load balancer.

  • Helm is a tool to install/update even the most complex application in Kubernetes in one click.

  • NFS Subdir External Provisioner is a tool installed in Kubernetes like a normal Deployment that uses an existing, already configured NFS server to dynamically create and centrally store PersistentVolumes.

Initial setup

First, let’s prepare the system for installing Kubernetes: disable swap to avoid unpredictable behavior. Most Container Network Interfaces, including Flannel, work directly with iptables, so we will also enable the options responsible for passing bridged traffic to iptables for processing.

sudo su;
ufw disable;
swapoff -a; sed -i '/swap/d' /etc/fstab;
cat >>/etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
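On a stock Ubuntu 20.04 image the net.bridge.* sysctls only exist once the br_netfilter kernel module is loaded. If sysctl --system complains that those keys are missing, loading the module (still as root) should fix it; a minimal sketch, where the k8s.conf file name is arbitrary:

modprobe br_netfilter                               # load the module now
echo "br_netfilter" > /etc/modules-load.d/k8s.conf  # and load it on every boot
sysctl --system                                     # re-apply the settings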

Installing Docker and Kubernetes

{
  apt install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
  curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
  add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  apt update
  apt install -y docker-ce containerd.io
}

{
  curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
  echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
  apt update && apt install -y kubeadm=1.23.3-00 kubelet=1.23.3-00 kubectl=1.23.3-00
}
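Optionally, it is also common to pin these packages so that a routine apt upgrade does not bump the cluster components unexpectedly:

apt-mark hold kubeadm kubelet kubectl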

It is important to know

Before we start creating the cluster, I want to warn you about possible problems. Keep in mind that Flannel uses the 10.244.0.0/16 network to assign Pod addresses, so the --pod-network-cidr=10.244.0.0/16 parameter will be added when the cluster is created.

If for some reason you need to change the Pod network, use your own, but do not forget to change the network in the Flannel configuration itself; the solution is described in “Nuances in Flannel”.

To avoid the error 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused, which is caused by Kubelet and Docker using different cgroup drivers, switch Docker to the systemd cgroup driver:

sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl restart kubelet
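To confirm the change took effect, you can check which cgroup driver Docker now reports (the exact output formatting may vary between Docker versions):

docker info 2>/dev/null | grep -i "cgroup driver"
# Expected: Cgroup Driver: systemd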

Create a cluster

On the machine that will become the master node, run the command to create the cluster.

kubeadm init --pod-network-cidr=10.244.0.0/16

To get access to the kubectl command, copy the admin config into the home directory.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

To add the other VMs to the cluster, create a join token. The command printed in the output is then run on the other machines.

kubeadm token create --print-join-command

# Example output - kubeadm join --discovery-token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234..cdef 1.2.3.4:6443
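Once the join command has been run on the other machines, it is worth checking from the master that they have registered and eventually become Ready:

kubectl get nodes -o wide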

By default, the master node carries the NoSchedule taint, which prevents Pods without a matching toleration from being scheduled on it and would get in the way of the DaemonSets we deploy later, so we will remove the taint from the node.

kubectl get nodes # Find out the name of the master node
kubectl taint nodes <master-node-name> node-role.kubernetes.io/master:NoSchedule-
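To double-check that the taint is really gone, describe the master node (substitute your node name):

kubectl describe node <master-node-name> | grep -i taints
# Should now print: Taints: <none>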

Installing Flannel and MetalLB

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml

Next, you need to specify a pool of IP addresses that MetalLB will use for Services that need an External-IP. Copy the code below, replace the address, and apply it with kubectl apply -f <filename>.yaml.

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.119.0.15/32 # Local address of one of the nodes

PS I specify the local address of one of my worker nodes; the interface this address is assigned to also provides Internet access, so afterwards you can create a DNS record and connect via the domain.
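Note that MetalLB 0.13+ also expects an L2Advertisement resource, otherwise the addresses from the pool are not announced on the network. A minimal sketch, assuming the default L2 mode is enough here (the name first-pool-advert is arbitrary):

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: first-pool-advert
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool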

Nuances in Flannel

Let’s get back to how to change Flannel’s address pool. To do this, download the Flannel config, open it, find net-conf.json, replace the network with your own, and then apply the file.

wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
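For reference, the part of the kube-flannel ConfigMap you are looking for looks roughly like this in the upstream manifest at the time of writing; only the Network value needs to change:

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }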

If you decide to do this after installation, then even after resetting the cluster Flannel will not let you change the interface address; you have probably run into the error NetworkPlugin cni failed to set up pod "xxxxx" network: failed to set bridge addr: "cni0" already has an IP address different from 10.x.x.x. This happens because the old interfaces are still there; to fix it, remove the interfaces on every node.

sudo su
ip link set cni0 down && ip link set flannel.1 down 
ip link delete cni0 && ip link delete flannel.1
systemctl restart docker && systemctl restart kubelet

Helm Installation

The easiest installation in the entire article.
PS Always check scripts before running them.

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

Installing Ingress NGINX Controller

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm show values ingress-nginx/ingress-nginx > values.yaml
kubectl create ns ingress-nginx

In values.yaml we set hostNetwork to true, enable hostPort, change kind to DaemonSet, and then install the chart.

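The edits boil down to roughly this fragment under the controller key (everything else can stay at the chart defaults):

controller:
  kind: DaemonSet
  hostNetwork: true
  hostPort:
    enabled: true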
helm install ingress ingress-nginx/ingress-nginx -n ingress-nginx --values values.yaml

Installing NFS Subdir External Provisioner

To install it, you need a working NFS server; in my case it is located on one of the worker nodes. Data from PersistentVolumes will be saved to this server, so I advise you to think about backups.
Input data: 10.119.0.17 – the NFS server IP address, /opt/kube/data – the network storage directory. On the other machines (everything except the NFS server) you need to install the nfs-common package to be able to mount the share.
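Installing the NFS client on the non-server nodes is a one-liner on Ubuntu:

sudo apt update && sudo apt install -y nfs-common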

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=10.119.0.17 \
    --set nfs.path=/opt/kube/data

We make the NFS provisioner’s StorageClass the default class so that PersistentVolumeClaims can be created without specifying a StorageClassName.

kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
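A quick way to check that the annotation took effect:

kubectl get storageclass
# nfs-client should be listed with the (default) marker next to its name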

We check that the NFS provisioner works by creating a basic PersistentVolumeClaim and applying it.

cat <<EOF | sudo tee testpvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
EOF
kubectl apply -f testpvc.yaml
kubectl get pv

If the Status field shows Bound and a new folder has appeared in the storage directory on the NFS server, then everything went well.

Forwarding TCP / UDP services using Ingress NGINX Controller

A regular Ingress cannot forward TCP or UDP services to the outside. For this reason the Ingress NGINX Controller has the flags --tcp-services-configmap and --udp-services-configmap, which let you forward a whole service using the ConfigMap described below. The example shows how to forward a TCP service, where 1111 is the forwarded port, prod is the namespace, lhello is the service name, and 8080 is the service port.

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  1111: "prod/lhello:8080"

If TCP / UDP forwarding is used, these ports must also be opened in the ingress-ingress-nginx-controller Service, so we edit the Service:

kubectl edit service/ingress-ingress-nginx-controller -n ingress-nginx

We add our new port that we want to open and save.

###...values omitted...
spec:
  type: LoadBalancer
  ports:
    - name: proxied-tcp-1111
      port: 1111
      targetPort: 1111
      protocol: TCP

The last thing needed for forwarding is to point the controller at the ConfigMap it should use; for this we add a flag to the controller’s DaemonSet.

kubectl edit daemonset.apps/ingress-ingress-nginx-controller -n ingress-nginx
--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
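The flag goes into the args of the controller container; after the edit the relevant part of the spec looks roughly like this (other arguments omitted):

spec:
  template:
    spec:
      containers:
        - name: controller
          args:
            - /nginx-ingress-controller
            # ...existing flags...
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services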

Results

At this point, the cluster is ready to go: you can deploy anything, only certificates for the sites are missing, but solutions for that already exist. Don’t forget to add the kubernetes.io/ingress.class: "nginx" annotation to your Ingress resources, as in the minimal example below. I will be glad to any feedback and advice on how to improve the infrastructure. Bye everyone!
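As a reminder, a minimal Ingress with that annotation might look like this; the names my-app and app.example.com are placeholders for your own service and domain:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 8080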
