k0s: Kubernetes in a single binary

In this translated article, we give a brief overview of a new Kubernetes distribution. We hope it will be of interest to Habr readers.

A couple of days ago, a friend told me about the new Kubernetes distribution from Mirantis called k0s. We all know and love K8s, right? We were also won over by K3s, the lightweight Kubernetes developed by Rancher Labs and donated to the CNCF some time ago. It's time to discover the new k0s distribution!

After a brief introduction to k0s, we will create a cluster of three nodes by following these steps:

  • Provisioning three virtual machines (Multipass in action)
  • Installing k0s on each of them
  • Setting up a simple k0s cluster configuration file
  • Cluster initialization
  • Gaining access to the cluster
  • Adding worker nodes
  • Adding a user

What is k0s?

k0s is the newest Kubernetes distribution on the block. The current release, 0.8.0, was published in December 2020, and the very first commit of the project dates back to June 2020.

k0s is shipped as a single binary with no OS dependencies. It is therefore described as a zero-friction / zero-deps / zero-cost Kubernetes distribution (easy to set up / no dependencies / free).

Latest k0s release:

  • Delivers certified Kubernetes 1.19
  • Uses containerd as the default container runtime
  • Supports Intel (x86-64) and ARM (ARM64) architectures
  • Runs etcd inside the cluster
  • Uses Calico as the default network plugin (thereby enabling network policies)
  • Includes the Pod Security Policy admission controller
  • Provides in-cluster DNS with CoreDNS
  • Provides cluster metrics via Metrics Server
  • Supports horizontal pod autoscaling (HPA)

A lot of cool features will come in future releases, including:

  • Compact VM runtime (I look forward to testing this feature)
  • Zero Downtime Cluster Upgrade
  • Cluster backup and recovery

Impressive, isn’t it? Next, we’ll look at how to use k0s to deploy a 3-node cluster.

Preparing virtual machines

To begin with, we will create three virtual machines, each of which will be a node in our cluster. In this article, I will take the quick and easy route and use the excellent Multipass tool (love it) to provision local virtual machines on macOS.

The following commands create three Ubuntu instances on xhyve. Each virtual machine gets 5 GB of disk (the Multipass default), 2 GB of RAM, and 2 virtual CPUs (vCPUs):

# create the 3 VMs (2 vCPUs and 2 GB of RAM each)
for i in 1 2 3; do
  multipass launch -n node$i -c 2 -m 2G
done

We can then display a list of virtual machines to make sure they are all working fine:

$ multipass list
Name    State    IPv4            Image
node1   Running  192.168.64.11   Ubuntu 20.04 LTS
node2   Running  192.168.64.12   Ubuntu 20.04 LTS
node3   Running  192.168.64.13   Ubuntu 20.04 LTS
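
Some of the later steps run commands directly on the nodes; to open a shell on a given node at any time, you can also use Multipass for that:

$ multipass shell node1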

Next, we will install k0s on each of these nodes.

Installing the latest k0s release

The latest release of k0s can be downloaded from the GitHub repository.

It has a convenient installation script:

curl -sSLf https://get.k0s.sh | sudo sh

We use this script to install k0s on all of our nodes:

for i in 1 2 3; do
  multipass exec node$i -- bash -c "curl -sSLf https://get.k0s.sh | sudo sh"
done

The above script installs k0s in /usr/bin/k0s. To see all the available commands, run the binary without any arguments.
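
For example, we can do this from the host through Multipass:

$ multipass exec node1 -- k0s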


Available k0s commands

We can check the current version:

$ k0s version
v0.8.0
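
To confirm the installation on all three nodes at once, the same check can be run from the host:

for i in 1 2 3; do
  multipass exec node$i -- k0s version
done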

We will use some of the commands in the next steps.

Creating a configuration file

First, you need to define a configuration file containing the information k0s needs to create the cluster. On node1, we can run the k0s default-config command to get the complete default configuration:

ubuntu@node1:~$ k0s default-config
apiVersion: k0s.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s
spec:
  api:
    address: 192.168.64.11
    sans:
    - 192.168.64.11
    - 192.168.64.11
    extraArgs: {}
  controllerManager:
    extraArgs: {}
  scheduler:
    extraArgs: {}
  storage:
    type: etcd
    kine: null
    etcd:
      peerAddress: 192.168.64.11
  network:
    podCIDR: 10.244.0.0/16
    serviceCIDR: 10.96.0.0/12
    provider: calico
    calico:
      mode: vxlan
      vxlanPort: 4789
      vxlanVNI: 4096
      mtu: 1450
      wireguard: false
  podSecurityPolicy:
    defaultPolicy: 00-k0s-privileged
  workerProfiles: []
  extensions: null
  images:
    konnectivity:
      image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-agent
      version: v0.0.13
    metricsserver:
      image: gcr.io/k8s-staging-metrics-server/metrics-server
      version: v0.3.7
    kubeproxy:
      image: k8s.gcr.io/kube-proxy
      version: v1.19.4
    coredns:
      image: docker.io/coredns/coredns
      version: 1.7.0
    calico:
      cni:
        image: calico/cni
        version: v3.16.2
      flexvolume:
        image: calico/pod2daemon-flexvol
        version: v3.16.2
      node:
        image: calico/node
        version: v3.16.2
      kubecontrollers:
        image: calico/kube-controllers
        version: v3.16.2
    repository: ""
  telemetry:
    interval: 10m0s
    enabled: true

Among other things, this allows us to define:

  • Launch options for the API server, controller manager, and scheduler
  • The storage used to keep the cluster state (etcd)
  • The network plugin and its configuration (Calico)
  • The versions of the container images used for the control-plane components
  • Additional extensions to deploy when the cluster starts

We could save this configuration to a file and adapt it to our needs, but in this article we will use a very simple configuration and save it to /etc/k0s/k0s.yaml:

apiVersion: k0s.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s
spec:
  api:
    address: 192.168.64.11
    sans:
    - 192.168.64.11
  network:
    podCIDR: 10.244.0.0/16
    serviceCIDR: 10.96.0.0/12
Note: Since we are initializing the cluster from node1, this node will run the API server. Its IP address is used in api.address and api.sans (subject alternative names) in the config file above. If we had additional master nodes and a load balancer in front of them, we would also list the IP address (or corresponding domain name) of each host and of the load balancer in api.sans.
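
If you are following along, one way to create this file on node1 is with a heredoc (a sketch; adjust the IP address to match your own node1):

ubuntu@node1:~$ sudo mkdir -p /etc/k0s
ubuntu@node1:~$ cat <<'EOF' | sudo tee /etc/k0s/k0s.yaml
apiVersion: k0s.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s
spec:
  api:
    address: 192.168.64.11
    sans:
    - 192.168.64.11
  network:
    podCIDR: 10.244.0.0/16
    serviceCIDR: 10.96.0.0/12
EOF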

Cluster initialization

First, we create a systemd unit on node1 to manage k0s.

[Unit]
Description="k0s server"
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
ExecStart=/usr/bin/k0s server -c /etc/k0s/k0s.yaml --enable-worker
Restart=always

The main thing here is the ExecStart command; it starts the k0s server with the configuration we saved to the file in the previous step. We also pass the --enable-worker flag so that this first master node also acts as a worker.

Then we copy this file to /lib/systemd/system/k0s.service, reload systemd, and start the newly created service.
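
For example, assuming the unit above was saved as k0s.service in the current directory on node1:

ubuntu@node1:~$ sudo cp k0s.service /lib/systemd/system/k0s.service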

ubuntu@node1:~$ sudo systemctl daemon-reload
ubuntu@node1:~$ sudo systemctl start k0s.service
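
Optionally, we can also enable the service so that it starts on boot, and check that it is running (standard systemd commands):

ubuntu@node1:~$ sudo systemctl enable k0s.service
ubuntu@node1:~$ sudo systemctl status k0s.service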

For the sake of curiosity, you can check the processes started by the k0s server:

ubuntu@node1:~$ sudo ps aux | awk '{print $11}' | grep k0s
/usr/bin/k0s
/var/lib/k0s/bin/etcd
/var/lib/k0s/bin/konnectivity-server
/var/lib/k0s/bin/kube-controller-manager
/var/lib/k0s/bin/kube-scheduler
/var/lib/k0s/bin/kube-apiserver
/var/lib/k0s/bin/containerd
/var/lib/k0s/bin/kubelet

From the output above, we can see that all the main control-plane components are running (kube-apiserver, kube-controller-manager, kube-scheduler, and so on), as well as the components common to master and worker nodes (containerd, kubelet). k0s is responsible for managing all of these components.

We now have a one-node cluster. In the next step, we will see how to access it.

Gaining access to the cluster

First, we need to get the kubeconfig file generated during cluster creation; it was created on node1 in /var/lib/k0s/pki/admin.conf. We will use this file to configure kubectl on our local machine.

We start by grabbing the cluster kubeconfig from node1:

# Get kubeconfig file
$ multipass exec node1 cat /var/lib/k0s/pki/admin.conf > k0s.cfg

Next, we replace the internal IP address with the external IP address of node1:

# Replace 'localhost' with node1's IP (macOS/BSD sed syntax; on Linux, use sed -i without the '')
$ NODE1_IP=$(multipass info node1 | grep IP | awk '{print $2}')
$ sed -i '' "s/localhost/$NODE1_IP/" k0s.cfg
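
A quick way to check that the substitution worked is to look at the server entry, which should now point to node1's IP instead of localhost:

$ grep server k0s.cfg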

Then we configure our local kubectl client to communicate with the k0s API server:

export KUBECONFIG=$PWD/k0s.cfg

Probably one of the first commands we run on any new cluster is the one that lists the available nodes, so let's try it:

$ kubectl get no
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    <none>   78s   v1.19.4

There is nothing surprising here: node1 is not only a master but also a worker node of our first cluster, thanks to the --enable-worker flag we passed in the launch command. Without this flag, node1 would act only as a master and would not appear in the list of nodes here.

Adding worker nodes

To add node2 and node3 to the cluster, we first need to create a join token on node1 (this is a fairly common step, also used in Docker Swarm and in Kubernetes clusters built with kubeadm):

ubuntu@node1:~$ TOKEN=$(k0s token create --role=worker)

The command above generates a long (very long) token. Using it, we can attach node2 and node3 to the cluster:

ubuntu@node2:~$ k0s worker $TOKEN
ubuntu@node3:~$ k0s worker $TOKEN
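
Note that $TOKEN only exists in the shell where it was created. If you are driving everything from the host, one way to move the token around is multipass transfer (a sketch; k0s token create may require sudo on node1):

# on the host: generate a worker join token on node1 and copy it to the workers
multipass exec node1 -- k0s token create --role=worker > worker-token
multipass transfer worker-token node2:/home/ubuntu/worker-token
multipass transfer worker-token node3:/home/ubuntu/worker-token

Each worker can then join with k0s worker $(cat worker-token).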

Note: In a real cluster, we would use systemd (or another supervisor) to manage the k0s worker processes, just as we did for the master node.
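
A minimal worker unit could look like the sketch below (an assumption on my part: the join token has been saved on each worker in /etc/k0s/worker-token.env as a single line of the form K0S_TOKEN=<token>):

[Unit]
Description="k0s worker"
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
EnvironmentFile=/etc/k0s/worker-token.env
ExecStart=/usr/bin/k0s worker $K0S_TOKEN
Restart=always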

Our three-node cluster is up and running, as we can verify by listing the nodes once more:

$ kubectl get no
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    <none>   30m   v1.19.4
node2   Ready    <none>   35s   v1.19.4
node3   Ready    <none>   32s   v1.19.4

We can also check the pods running across all namespaces:
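
The listing shown below comes from the usual command:

$ kubectl get pods --all-namespaces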


List of pods running in the cluster in all namespaces

There are a few things to note here:

  • As usual, we see the kube-proxy pods, the network plugin pods (Calico), and the CoreDNS pods.
  • The api-server, scheduler, and controller-manager do not appear in this list because k0s runs them as regular processes rather than inside pods.

Adding a user

k0s version 0.8.0 introduces a user subcommand, which allows you to create a kubeconfig for an additional user/group. For example, the following command creates a kubeconfig file for a new user named demo, belonging to an imaginary group named development.

Note: In Kubernetes, users and groups are managed by an administrator outside the cluster, which means there is no User or Group resource in K8s.

$ sudo k0s user create demo --groups development > demo.kubeconfig

For a better understanding, let's extract the client certificate from this kubeconfig file and decode it from its base64 representation:

$ cat demo.kubeconfig | grep client-certificate-data | awk '{print $2}' | base64 --decode > demo.crt

Then we use openssl to display the contents of the certificate:

ubuntu@node1:~$ openssl x509 -in demo.crt -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            71:8b:a4:4d:be:76:70:8a:...:07:60:67:c1:2d:51:94
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = kubernetes-ca
        Validity
            Not Before: Dec 2 13:50:00 2020 GMT
            Not After : Dec 2 13:50:00 2021 GMT
        Subject: O = development, CN = demo
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
                Modulus:
                    00:be:87:dd:15:46:91:98:eb:b8:38:34:77:a4:99:
                    da:4b:d6:ca:09:92:f3:29:28:2d:db:7a:0b:9f:91:
                    65:f3:11:bb:6c:88:b1:8f:46:6e:38:71:97:b7:b5:
                    9b:8d:32:86:1f:0b:f8:4e:57:4f:1c:5f:9f:c5:ee:
                    40:23:80:99:a1:77:30:a3:46:c1:5b:3e:1c:fa:5c:

  • The Issuer is kubernetes-ca, the certificate authority of our k0s cluster.
  • The Subject is O = development, CN = demo; this part is important, as this is where the user's name and group come from. Since the certificate is signed by the cluster CA, the api-server's authentication plugin can authenticate the user/group from the common name (CN) and organization (O) in the certificate's subject.

Next, we instruct kubectl to use the context defined in this new kubeconfig file:

$ export KUBECONFIG=$PWD/demo.kubeconfig

Then we list the cluster nodes once again:

$ kubectl get no
Error from server (Forbidden): nodes is forbidden: User "demo" cannot list resource "nodes" in API group "" at the cluster scope

This error message was expected: although the api-server has authenticated the user (the certificate sent along with the request was signed by the cluster CA), the user is not authorized to perform any action on the cluster.
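
You can also query permissions directly while the demo kubeconfig is active:

$ kubectl auth can-i list nodes
no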

Additional rights can easily be granted by creating a Role/ClusterRole and binding it to the user with a RoleBinding/ClusterRoleBinding, but I leave the full exercise to the reader.
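
For example, granting the development group read-only access to pods in the default namespace could look like this (a sketch, to be run with the admin kubeconfig):

$ kubectl create role pod-reader --verb=get,list,watch --resource=pods
$ kubectl create rolebinding development-pod-reader --role=pod-reader --group=development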

Conclusion

k0s is definitely worth a look. The approach of having a single binary manage all the processes is very interesting.

This article provides only a brief overview of k0s, but I will definitely be following its development and will devote future articles to this new and promising Kubernetes distribution. Some of the upcoming features look really promising, and I look forward to testing them.
