Deploying Kubernetes on Astra Linux. Part 1

In the modern world, Kubernetes has long been the industry standard for container orchestration and is widely used across infrastructures. Our company is no exception: we actively use K8s and love it dearly.

Over the years of working with it, we have handled all kinds of tasks: from the most typical deployments to extremely specific configurations and installations. Recently, however, we faced a challenge within our walls that forced us to get creative, go beyond Kubernetes, and even cheat a little.

Hello everyone, Peter here. Today we will look at the automated deployment of vanilla Kubernetes on Astra Linux via Kubespray + Helm.

Part One: Know Your Enemy

First, let's formulate the tasks:

  • Deploy the latest stable version of Kubernetes and all the basic infrastructure elements on Astra Linux Special Edition;

  • Deployment must not depend on the type of server: the infrastructure should run both on a hardware server isolated from the external network and on a virtual server using cloud resources, for example a cloud load balancer. The only constant in this equation is Astra Linux;

  • It is assumed that the deployment can take place in an isolated environment without direct access to the Internet. The only source of the necessary artifacts is an image registry located in the same network segment in which Kubernetes is deployed; the images themselves are downloaded in advance, at our request, through the security service;

  • Deployment should support portability and automatic scaling: at the push of a button you can add or remove a node from the cluster, or update the cluster to the latest version of Kubernetes on any Astra Linux. Manually deploying Kubernetes is not an option. It is also worth considering that a cluster may consist of only one node; in that case, it will combine the roles of both master and worker.

At first glance, the task seems more than typical: put together the most universal Kubernetes build with the necessary applications and roll it out. However, we have to take into account some peculiarities of Astra Linux.

So what is this OS? If you look at Astra's Wikipedia page, you will see the following:

There are two available editions of the OS: the main one is called “Special Edition” and the other one is called “Common Edition”. The main differences between the two are the fact that the former is paid, while the latter is free; the former is available for x86-64 architecture, ARM architecture and Elbrus architecture, while the latter is only available for x86-64 architecture; the former has a security certification and provides 3 levels of OS security (which are named after Russian cities and which from the lowest to the highest are: Oryol, Voronezh and Smolensk), while the latter doesn't have the security certification and only provides the lowest level of OS security (Oryol).

What can we understand from this?

  1. There are two main editions of Astra Linux: the free Common Edition and the paid Special Edition, which is the more secure, certified version. Since we will be rolling out on production environments, for the purposes of this article we will only consider the Special Edition;

  2. Essentially, Astra is a distribution based on Debian. In particular, we will be working with a release founded on Debian 10 Buster. From this we conclude that we are dealing with .deb packages and the apt package manager. This allows us to use external Debian repositories to install applications, which, by the way, is described in the official Astra documentation and is not really a cheat;

  3. An unpleasant fact that surfaced during the deployment process: extended documentation describing common problems is available only for a fee. When trying to find an answer to even the most basic question, you will most often run into a paywall instead of an answer;

  4. Despite the fact that Astra is based on the familiar Debian, the system is full of changes you will have to deal with. At the same time, the entire infrastructure must be configured so as not to weaken the system's protection, because otherwise this whole undertaking is devoid of any meaning.

With all the requirements in hand, let's start choosing tools.

Part Two: Choosing a Weapon

In our build we decided to use the following utilities:

  • Kubespray – a tool for automated deployment of Kubernetes. Under the hood is a collection of Ansible roles, each of which installs a separate K8s element. Kubespray can also install additional tools such as a CNI, a Storage Provisioner and much more. But we strongly recommend against doing this: most of these tools are installed through vanilla manifests, and the application versions may lag behind (for example, Cilium as a CNI is now recommended to be rolled out via a Helm chart, yet at the time of writing Kubespray installs Cilium version 1.12.0 via manifests, although the latest version is 1.16.0). It's also unfortunate that Kubespray doesn't support Astra Linux out of the box, but we'll make a few changes to the role code to get around this limitation;

  • Cilium — a CNI for Kubernetes that provides tools for secure networking within a cluster, based on eBPF. An important nuance: before using Cilium, check BGP support on your servers. If it is absent (as, for example, on some cloud platforms), Cilium will not work correctly. We will talk about this in more detail in the next part;

  • Rancher Local Path Provisioner – a Storage Provisioner for Kubernetes. It was chosen as a simple provisioner that provides ReadWriteOnce volumes. Since our installation will take place on one node, this option is the best fit. This is the only tool that we will install through Kubespray, since installing Local Path Provisioner through manifests suits us just fine;

  • MetalLB — an implementation of a load balancer for bare-metal Kubernetes clusters. Since we don't know in advance where our cluster will be installed, we need a way to create LoadBalancer services with a dedicated IP address even in clusters that are not run by a cloud provider. Such services cannot obtain their IP from a cloud load balancer, and that is where MetalLB helps us;

  • Ingress Nginx is an Ingress controller for Kubernetes that uses NGINX as a reverse proxy;

  • CertManager is a standard tool for issuing SSL certificates. Since our installation may take place in an isolated environment, CertManager will issue self-signed certificates for services that require a secure connection. For example, kubernetes-dashboard can only run in protected mode;

  • Vector — a tool for collecting logs, a fast and lightweight solution;

  • Loki — a tool for aggregating logs; we will use the Single Binary installation. It was chosen as a universal solution: Elasticsearch beats Loki when it comes to clustering and data retention depth, but we decided that in most cases Loki would be enough for us: it is lighter, simpler and less resource-hungry than Elasticsearch. Plus, we will have Grafana installed as part of KubePrometheusStack, and we will build logging dashboards in it. Thus, we keep logging efficient and do not multiply entities;

  • KubePrometheusStack – a classic system monitoring kit consisting of Prometheus + Grafana + AlertManager.

We will install everything mentioned above through Helm charts, which will be delivered into the isolated environment and stored locally on the system.

The tools have been selected, now we move directly to deployment.

Part Three: Descending into the Underworld

To begin with, we decided to trust the official information and try to install Kubespray on Astra Linux without any changes. But first we looked at the kernel parameters, and the first trouble awaited us: some of the parameters had been changed. This is a problem because, for example, if the sysctl parameter net.ipv4.ip_forward on the Linux host is set to 0 (disabled), IPv4 packet forwarding is off. As a result, pod networking on Kubernetes nodes breaks in various ways: the IPs of other pods or the external network may be unreachable from inside the pods.
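A quick way to see whether you are affected is to query the current value on the host:

# 0 means IPv4 forwarding is disabled and pod traffic will not be routed
sysctl net.ipv4.ip_forward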

We will not go into detail about all the parameters that can disrupt the Kubernetes network or other parts of the system. You can find them in other materials; for example, the Teleport team blog has an article about network troubleshooting through kernel parameters. Here we present only the final list of parameters in the created /etc/sysctl.d/kubernetes.conf file:

net.ipv4.ip_forward=1
net.ipv4.ip_local_reserved_ports=30000-32767
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-arptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
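These settings take effect after reloading all sysctl drop-ins (they are also re-applied on every boot):

# re-read /etc/sysctl.d/*, including our new kubernetes.conf
sudo sysctl --system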

We also deleted the file /etc/sysctl.d/999-cve-2019-14899.conf, because it sets the same rp_filter key that we adjusted above, and due to its 999 prefix it would be applied after our file:

net.ipv4.conf.default.rp_filter = 0

We've sorted out the kernel; let's move on to Kubespray. Download it from the official GitHub repository, choosing the latest release branch rather than master:

git clone -b release-2.26 git@github.com:kubernetes-sigs/kubespray.git

Then we remember that we may not have Internet access inside the environment, so an offline installation will be necessary. The problem with images is solved in one of two ways. First: stand up a local registry to your taste and upload into it all the images needed for the installation (or, more often, upload the images to a registry that already exists in the environment). Second: stand up a proxy server through which these images will be downloaded.

There should be no problems with uploading images to the registry. Moving on: in the file kubespray/roles/kubespray-defaults/defaults/main/download.yml we point the *_image_repo variables at our proxy or registry.
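As a rough sketch, the top-level repo variables can be overridden like this (registry.local:5000 is a hypothetical address for your in-circuit registry; the variable names are taken from download.yml):

# all image references will be rewritten to point at the local registry
kube_image_repo: "registry.local:5000"
docker_image_repo: "registry.local:5000"
quay_image_repo: "registry.local:5000"
github_image_repo: "registry.local:5000"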

Kubernetes binaries are also not a problem. They can be served either locally or from an artifact repository like Nexus. We chose the first option to make life easier for those who do not have such storage in their environment.

Go to the kubespray/contrib/offline directory and run the generate_list.sh script. The output is a list of all the files needed during installation. Download the listed files to any directory, for example /var/www/kubernetes/offline, and transfer them to the target server via scp, flash drive, floppy disk or any other unpleasant way.
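A minimal sketch of this step, assuming the release-2.26 layout in which the script writes its lists into ./temp:

cd kubespray/contrib/offline
./generate_list.sh

# files.list enumerates every binary and archive Kubespray will try to fetch;
# wget -x mirrors them, keeping the original host name as a directory prefix
wget -x -P /var/www/kubernetes/offline -i temp/files.list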

All that remains is to teach our Kubespray to pick up the files locally. We can follow the classic path described in the documentation, or we can use a perverted option: set up a local web server and serve the files straight from localhost. To do this, install Nginx and create the following (admittedly unhealthy) location:

server {
    listen       80;
    server_name  localhost;
    access_log   logs/localhost.access.log  main;

    location / {
        root   /var/www/kubernetes/offline;
        index  index.html index.htm index.php;
    }
}
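A quick sanity check that the mirror actually serves files (the path below is hypothetical; substitute any file you downloaded):

curl -I http://127.0.0.1/dl.k8s.io/release/v1.30.4/bin/linux/amd64/kubectl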

Next we go back to the kubespray/roles/kubespray-defaults/defaults/main/download.yml file and change the *_url variables so they point at http://127.0.0.1.
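A sketch of what such overrides can look like; the variable names come from download.yml, while the paths are illustrative and must mirror the layout that wget -x created, template variables included:

kubeadm_download_url: "http://127.0.0.1/dl.k8s.io/release/{{ kube_version }}/bin/linux/{{ image_arch }}/kubeadm"
kubelet_download_url: "http://127.0.0.1/dl.k8s.io/release/{{ kube_version }}/bin/linux/{{ image_arch }}/kubelet"
kubectl_download_url: "http://127.0.0.1/dl.k8s.io/release/{{ kube_version }}/bin/linux/{{ image_arch }}/kubectl"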

Okay, now we're moving along the standard path: all hosts must have Python version 3.10 or higher. Next, go to the kubespray directory and install the necessary Python dependencies:

python3 -m pip install -r requirements.txt

Don't forget to configure the installation of additional elements in kubespray/inventory/sample/group_vars/k8s_cluster/addons.yml; we need to enable Helm and Rancher Local Path Provisioner:

helm_enabled: true
local_path_provisioner_enabled: true
local_path_provisioner_namespace: "local-path-storage"
local_path_provisioner_storage_class: "local-path"
local_path_provisioner_reclaim_policy: Delete
local_path_provisioner_claim_root: /opt/local-path-provisioner/
local_path_provisioner_debug: false
local_path_provisioner_image_repo: "{{ docker_image_repo }}/rancher/local-path-provisioner"
local_path_provisioner_image_tag: "v0.0.24"
local_path_provisioner_helper_image_repo: "busybox"
local_path_provisioner_helper_image_tag: "latest"

Since we will be using MetalLB, we enable strict ARP in the file kubespray/inventory/sample/group_vars/k8s_cluster/k8s-cluster.yml, so that the kube-ipvs0 interface stops answering ARP requests for addresses it does not own and MetalLB can work:

kube_proxy_strict_arp: true
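Once the cluster is up, you can confirm that the flag made it into the kube-proxy configuration (a post-deployment sanity check, not part of the install):

kubectl -n kube-system get configmap kube-proxy -o yaml | grep strictARP

# expected output: strictARP: true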

In the same file, we change the CNI to the desired one if we want to get a working network plugin out of the box:

kube_network_plugin: calico

If you want to install the plugin separately after installation, then set this value to “cni”:

kube_network_plugin: cni

Create an inventory file for further deployment:

cp -rfp inventory/sample inventory/mycluster    # create a directory for the inventory file from the default one
declare -a IPS=(10.10.1.3)    # declare the server IP addresses
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}    # generate the inventory file from the declared addresses
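For a single node, the generated hosts.yaml looks roughly like this (the node name and grouping are produced by inventory.py; shown here for orientation):

all:
  hosts:
    node1:
      ansible_host: 10.10.1.3
      ip: 10.10.1.3
      access_ip: 10.10.1.3
  children:
    kube_control_plane:
      hosts:
        node1:
    kube_node:
      hosts:
        node1:
    etcd:
      hosts:
        node1:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}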

An important nuance of working with Kubespray: when specifying IP addresses, you cannot use 127.0.0.1, even for a single-node installation. Kubespray uses the declared addresses not only as destinations for the roles, but also inside the Kubernetes settings to define hosts. If you use localhost as the installation address, other nodes simply won't be able to see this node if they appear later.

And finally, we deploy the cluster with the playbook:

ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml

And… we get the error "OS types are not supported":

Unpleasant, but expected. You can set the parameter allow_unsupported_distribution_setup: true in kubespray/inventory/sample/group_vars/all/all.yml and allow Kubespray to install on unfamiliar systems. But this unpredictable switch scared us, and we decided to explicitly tell Kubespray what to do.

And we'll do it this way: knowing that Astra is in fact based on Debian and runs on the same package manager, we can make Kubespray think that Astra Linux is Debian. Let's do just that.

Perhaps unexpectedly, we'll start with Ansible itself:

  1. Get the path to the installed Python packages from the Location field:

python3.10 -m pip show pip

Name: pip
Version: 24.0
Summary: The PyPA recommended tool for installing Python packages.
Home-page:
Author:
Author-email: The pip developers <distutils-sig@python.org>
License: MIT
Location: /usr/local/lib/python3.10/site-packages
Requires:
Required-by:

  2. Go to the distribution.py file, which describes the distributions:

/usr/local/lib/python3.10/site-packages/ansible/module_utils/facts/system/distribution.py

  3. Find the OS_FAMILY_MAP parameter and brazenly add Astra Linux to it:

# keep keys in sync with Conditionals page of docs
    OS_FAMILY_MAP = {'RedHat': ['RedHat', 'RHEL', 'Fedora', 'CentOS', 'Scientific', 'SLC', 'Ascendos', 'CloudLinux', 'PSBM', 'OracleLinux', 'OVS', 'OEL', 'Amazon', 'Virtuozzo', 'XenServer', 'Alibaba', 'EulerOS', 'openEuler', 'AlmaLinux', 'Rocky', 'TencentOS', 'EuroLinux', 'Kylin Linux Advanced Server'],
                     'Debian': ['Debian', 'Ubuntu', 'Raspbian', 'Neon', 'KDE neon', 'Linux Mint', 'SteamOS', 'Devuan', 'Kali', 'Cumulus Linux', 'Pop!_OS', 'Parrot', 'Pardus GNU/Linux', 'Uos', 'Deepin', 'OSMC', 'Astra Linux'],
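After this change, it is worth verifying that Ansible now classifies the host as part of the Debian family (a sanity check using the inventory created earlier):

ansible -i inventory/mycluster/hosts.yaml all -m setup -a "filter=ansible_os_family"

# expected: "ansible_os_family": "Debian"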

Next we move on to Kubespray itself:

  1. Find the file kubespray/roles/container-engine/containerd/defaults/main.yml and add the Astra Linux distribution to the containerd_supported_distributions list:

containerd_supported_distributions:
  - 'RedHat'
  - 'CentOS'
  - 'Fedora'
  - 'Ubuntu'
  - 'Debian'
  - 'Flatcar'
  - 'Flatcar Container Linux by Kinvolk'
  - 'Suse'
  - 'openSUSE Leap'
  - 'openSUSE Tumbleweed'
  - 'ClearLinux'
  - 'OracleLinux'
  - 'AlmaLinux'
  - 'Rocky'
  - 'Amazon'
  - 'Kylin Linux Advanced Server'
  - 'UnionTech'
  - 'UniontechOS'
  - 'openEuler'
  - 'Astra Linux'

  2. Find the file kubespray/roles/kubernetes/preinstall/defaults/main.yml and add the Astra Linux distribution to the supported_os_distributions list:

supported_os_distributions:
  - 'RedHat'
  - 'CentOS'
  - 'Fedora'
  - 'Ubuntu'
  - 'Debian'
  - 'Flatcar'
  - 'Flatcar Container Linux by Kinvolk'
  - 'Suse'
  - 'openSUSE Leap'
  - 'openSUSE Tumbleweed'
  - 'ClearLinux'
  - 'OracleLinux'
  - 'AlmaLinux'
  - 'Rocky'
  - 'Amazon'
  - 'Kylin Linux Advanced Server'
  - 'UnionTech'
  - 'UniontechOS'
  - 'openEuler'
  - 'Astra Linux'

  3. In the same kubespray/roles/kubernetes/preinstall/defaults/main.yml file, add the exact version of our Astra Linux distribution to the debian_os_family_extensions list:

debian_os_family_extensions:
  - "Astra Linux 1.7.5"

Congratulations, we have shamelessly introduced a new operating system.

However, after launch we encountered a new problem:

TASK [container-engine/containerd : Containerd | Ensure containerd is started and enabled] ***
fatal: [k8s]: FAILED! => {"changed": false, "msg": "failure 1 during daemon-reload: Failed to reload daemon: Access denied\n"}
NO MORE HOSTS LEFT *************************************************************
RUNNING HANDLER [kubernetes/preinstall : Preinstall | reload kubelet] **********
fatal: [slt-test-pac]: FAILED! => {"changed": false, "msg": "Unable to start service kubelet: Failed to start kubelet.service: Access denied\nSee system logs and 'systemctl status kubelet.service' for details.\n"}

Kubespray, launched from a remote server, did not have enough rights to restart systemd units, and all attempts to escalate privileges failed. Then we remembered the ansible-playbook flag --connection=local, which runs an Ansible role over a local connection, formally as a local user. This, too, is a kind of deception of the system; but since our specific task was to deploy a single-node solution, it suited us. Don't forget that even in this case we must use the private IP address from the internal network, not localhost!
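In our single-node case, the final invocation looked roughly like this (the same playbook as before, now run on the node itself):

ansible-playbook -i inventory/mycluster/hosts.yaml --connection=local --become --become-user=root cluster.yml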

After that, everything ran correctly; all that remained was to check a few things. First of all, run kubectl get nodes and make sure that all nodes are part of the cluster. Then, if the installation is security-focused, check that the images were pulled not from the network but from your registry:

kubectl describe pod -A | grep "Image:" | grep -v <registry_address> | sort | uniq | wc -l

You should get 0 in response.

Finally, let's talk about a funny nuance of installing Kubespray through the official Docker image. The home page of the Kubespray repository recommends installing Kubernetes not from the repository, but via a pre-built Docker image containing Kubespray and all the necessary libraries. Essentially, all we need is to clone the repository to obtain the configs, pull the required image, and run it with the configs and the server's SSH keys:

git checkout v2.26.0

docker pull quay.io/kubespray/kubespray:v2.26.0

docker run --rm -it \
  --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
  --mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
  quay.io/kubespray/kubespray:v2.26.0 bash

# Inside the container, run the playbook:
ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml

Looks convenient. But there is a nuance that is often forgotten: one of Kubespray's first steps is removing all container runtimes from the servers. Therefore, if you choose this installation path, you need to run it from a server that is not part of the future K8s cluster. Otherwise you will find yourself in a situation where (for no apparent reason) the installation dies at the very beginning, and the container with Kubespray mysteriously disappears from the server.


This completes the first part. We walked through the automated deployment of Kubernetes via a patched Kubespray. In the next article we will cover installing the main elements of the system through local Helm charts and discuss network setup. See you!

By the way, subscribe to our social networks: Telegram, vc.ru and YouTube. Interesting content is published on all of them.
