Kubernetes the hard way

Hi all. My name is Dobry Kot (you can find me on Telegram).

Together with the FR-Solutions team and with the support of @irbgeo (Telegram), we continue our series of articles about K8S.

Purposes of this article:

  1. Update the Kubernetes deployment procedure described by the well-known Kelsey Hightower.

  2. Prove that the claims “kubernetes is only 5 binaries” and “kubernetes is simple” are incorrect.

  3. Add key-keeper to the Kubernetes configuration to manage certificates.

What is Kubernetes made of?

We all remember the joke “kubernetes is only 5 binaries”:

  1. etcd

  2. kube-apiserver

  3. kube-controller-manager

  4. kube-scheduler

  5. kubelet

But with these alone you will not assemble a cluster. Why?

kubelet requires additional components to work:

  1. A Container Runtime Interface implementation – CRI (containerd, cri-o, docker, etc.).

The CRI in turn requires:

  1. runc for running containers.

Certificates:

  1. A tool for issuing certificates (cfssl, kubeadm, key-keeper).

Other:

  1. kubectl (for working with kubernetes) – optional

  2. crictl (for convenient work with CRI) – optional

  3. etcdctl (for working with etcd on masters) – optional

  4. kubeadm (for cluster setup) – optional

Thus, to deploy kubernetes, a minimum of 8 binaries is required.

Steps to create a K8S cluster

  1. Creation of linux machines on which the control-plane of the cluster will be deployed.

  2. Setting up the operating system on the created linux machines:

    1. installation of base packages (for linux maintenance).

    2. working with modprobe.

    3. working with sysctls.

    4. installation of the binaries required for the functioning of the cluster.

    5. preparation of configuration files for installed components.

  3. Preparing Vault storage.

  4. Generating static-pod manifests.

  5. Checking the availability of the cluster.

As you can see, only 5 stages – nothing complicated)

Well, let’s get started!

1) Create 3 nodes for the masters and bind DNS names to them using the mask:

master-${INDEX}.${CLUSTER_NAME}.${BASE_DOMAIN}

** IMPORTANT: ${INDEX} must start at 0 because of how indexes are formed in the Terraform module for Vault – more on that later.

environments
## RUN ON EACH MASTER.
## REQUIRED VARS: 
export BASE_DOMAIN=dobry-kot.ru
export CLUSTER_NAME=example
export BASE_CLUSTER_DOMAIN=${CLUSTER_NAME}.${BASE_DOMAIN}

# Ports for etcd
export ETCD_SERVER_PORT="2379"
export ETCD_PEER_PORT="2380"
export ETCD_METRICS_PORT="2381"

# Ports for Kubernetes
export KUBE_APISERVER_PORT="6443"
export KUBE_CONTROLLER_MANAGER_PORT="10257"
export KUBE_SCHEDULER_PORT="10259"

# Set this to 1, 3, or 5
export MASTER_COUNT=1

# For kube-apiserver
export ETCD_SERVERS=$(echo \
$(for INDEX in `seq 0 $(($MASTER_COUNT-1))`; \
do \
echo https://master-${INDEX}.${BASE_CLUSTER_DOMAIN}:${ETCD_SERVER_PORT} ; \
done) | 
sed "s/,//" | 
sed "s/ /,/g")

# For forming the etcd cluster
export ETCD_INITIAL_CLUSTER=$(echo \
$(for INDEX in `seq 0 $(($MASTER_COUNT-1))`; \
do \
echo master-${INDEX}.${BASE_CLUSTER_DOMAIN}=https://master-${INDEX}.${BASE_CLUSTER_DOMAIN}:${ETCD_PEER_PORT} ; \
done) | 
sed "s/,//" | 
sed "s/ /,/g")


export KUBERNETES_VERSION="v1.23.12"
export ETCD_VERSION="3.5.3-0"
export ETCD_TOOL_VERSION="v3.5.5"
export RUNC_VERSION="v1.1.3"
export CONTAINERD_VERSION="1.6.8"
export CRICTL_VERSION=$(echo $KUBERNETES_VERSION | 
sed -r 's/^v([0-9]*).([0-9]*).([0-9]*)/v\1.\2.0/')

export BASE_K8S_PATH="/etc/kubernetes"

export SERVICE_CIDR="29.64.0.0/16"
# No offense - write the regexp yourself)
export SERVICE_DNS="29.64.0.10"

# If you already have an external Vault, point these at it:
export VAULT_MASTER_TOKEN="hvs.vy0dqWuHkJpiwtYhw4yPT6cC"
export VAULT_SERVER="http://193.32.219.99:9200/"

# For the dev-mode Vault deployed on master-0 in this article (overrides the values above):
export VAULT_MASTER_TOKEN="root"
export VAULT_SERVER="http://master-0.${CLUSTER_NAME}.${BASE_DOMAIN}:9200/"
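
Before moving on, it is worth a quick sanity check that the derived variables expanded the way you expect; with MASTER_COUNT=3 and the example domain from this article, the output should look like the comments below.

## RUN ON EACH MASTER (optional sanity check).
echo ${ETCD_SERVERS}
# expected with MASTER_COUNT=3:
# https://master-0.example.dobry-kot.ru:2379,https://master-1.example.dobry-kot.ru:2379,https://master-2.example.dobry-kot.ru:2379

echo ${ETCD_INITIAL_CLUSTER}
# expected with MASTER_COUNT=3:
# master-0.example.dobry-kot.ru=https://master-0.example.dobry-kot.ru:2380,master-1.example.dobry-kot.ru=https://master-1.example.dobry-kot.ru:2380,master-2.example.dobry-kot.ru=https://master-2.example.dobry-kot.ru:2380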

If you have read Kelsey Hightower's guide, you noticed that his configuration files are built around the nodes' IP addresses. That approach works, but it is less flexible: for ease of maintenance and further templating it is better to use FQDN masks that are known in advance, as shown above for the masters.

2) Download all the binaries required by the K8S cluster.

  • In this setup I deliberately avoid RPM and DEB packages, to show in detail what the installation actually consists of.

download components
## RUN ON EACH MASTER.
wget -O /usr/bin/key-keeper   "https://storage.yandexcloud.net/m.images/key-keeper-T2?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=YCAJEhOlYpv1GRY7hghCojNX5%2F20221020%2Fru-central1%2Fs3%2Faws4_request&X-Amz-Date=20221020T123413Z&X-Amz-Expires=2592000&X-Amz-Signature=138701723B70343E38D82791A28AD1DB87040677F7C94D83610FF26ED9AF1954&X-Amz-SignedHeaders=host"
wget -O /usr/bin/kubectl       https://storage.googleapis.com/kubernetes-release/release/${KUBERNETES_VERSION}/bin/linux/amd64/kubectl
wget -O /usr/bin/kubelet       https://storage.googleapis.com/kubernetes-release/release/${KUBERNETES_VERSION}/bin/linux/amd64/kubelet
wget -O /usr/bin/kubeadm       https://storage.googleapis.com/kubernetes-release/release/${KUBERNETES_VERSION}/bin/linux/amd64/kubeadm
wget -O /usr/bin/runc          https://github.com/opencontainers/runc/releases/download/${RUNC_VERSION}/runc.amd64
wget -O /tmp/etcd.tar.gz       https://github.com/etcd-io/etcd/releases/download/${ETCD_TOOL_VERSION}/etcd-${ETCD_TOOL_VERSION}-linux-amd64.tar.gz
wget -O /tmp/containerd.tar.gz https://github.com/containerd/containerd/releases/download/v${CONTAINERD_VERSION}/containerd-${CONTAINERD_VERSION}-linux-amd64.tar.gz
wget -O /tmp/crictl.tar.gz     https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-amd64.tar.gz

chmod +x /usr/bin/key-keeper 
chmod +x /usr/bin/kubelet 
chmod +x /usr/bin/kubectl 
chmod +x /usr/bin/kubeadm
chmod +x /usr/bin/runc

mkdir -p /tmp/containerd
mkdir -p /tmp/etcd

tar -C "/tmp/etcd"        -xvf /tmp/etcd.tar.gz
tar -C "/tmp/containerd"  -xvf /tmp/containerd.tar.gz
tar -C "/usr/bin"         -xvf /tmp/crictl.tar.gz

cp /tmp/etcd/etcd*/etcdctl /usr/bin/
cp /tmp/containerd/bin/*   /usr/bin/
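
A quick way to confirm that the binaries are in place and executable is to ask each one for its version (output formats differ between tools; key-keeper is omitted because its version flag depends on the build):

## RUN ON EACH MASTER (optional check).
kubectl version --client
kubelet --version
kubeadm version
runc --version
containerd --version
crictl --version
etcdctl version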

3) Create the services:

There are only 3 systemd services in our installation (key-keeper, kubelet, containerd).

containerd.service
## RUN ON EACH MASTER.
## SETUP SERVICE FOR CONTAINERD

cat <<EOF > /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
EOF
key-keeper.service
## RUN ON EACH MASTER.
## SETUP SERVICE FOR KEY-KEEPER
cat <<EOF > /etc/systemd/system/key-keeper.service
[Unit]
Description=key-keeper-agent

Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/bin/key-keeper -config-dir ${BASE_K8S_PATH}/pki -config-regexp .*vault-config 

Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
kubelet.service
## RUN ON EACH MASTER.
## SETUP SERVICE FOR KUBELET
cat <<EOF > /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/
Wants=network-online.target
After=network-online.target


[Service]
ExecStart=/usr/bin/kubelet

Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
kubelet.d/conf
## RUN ON EACH MASTER.
## SETUP SERVICE-CONFIG FOR KUBELET

mkdir -p /etc/systemd/system/kubelet.service.d

cat <<EOF > /etc/systemd/system/kubelet.service.d/10-fraima.conf
[Service]
EnvironmentFile=-${BASE_K8S_PATH}/kubelet/service/kubelet-args.env

ExecStart=
ExecStart=/usr/bin/kubelet \
\$KUBELET_HOSTNAME \
\$KUBELET_CNI_ARGS \
\$KUBELET_RUNTIME_ARGS \
\$KUBELET_AUTH_ARGS \
\$KUBELET_CONFIGS_ARGS \
\$KUBELET_BASIC_ARGS \
\$KUBELET_KUBECONFIG_ARGS
EOF
kubelet-args.env
## RUN ON EACH MASTER.
## SETUP SERVICE-CONFIG FOR KUBELET

mkdir -p  ${BASE_K8S_PATH}/kubelet/service/

cat <<EOF > ${BASE_K8S_PATH}/kubelet/service/kubelet-args.env
KUBELET_HOSTNAME=""
KUBELET_BASIC_ARGS="
    --register-node=true
    --cloud-provider=external
    --image-pull-progress-deadline=2m
    --feature-gates=RotateKubeletServerCertificate=true
    --cert-dir=/etc/kubernetes/pki/certs/kubelet
    --authorization-mode=Webhook
    --v=2
"
KUBELET_AUTH_ARGS="
    --anonymous-auth="false"
"
KUBELET_CNI_ARGS="
    --cni-bin-dir=/opt/cni/bin
    --cni-conf-dir=/etc/cni/net.d
    --network-plugin=cni
"
KUBELET_CONFIGS_ARGS="
    --config=${BASE_K8S_PATH}/kubelet/config.yaml
    --root-dir=/var/lib/kubelet
    --register-node=true
    --image-pull-progress-deadline=2m
    --v=2
"
KUBELET_KUBECONFIG_ARGS="
    --kubeconfig=${BASE_K8S_PATH}/kubelet/kubeconfig
"
KUBELET_RUNTIME_ARGS="
    --container-runtime=remote
    --container-runtime-endpoint=/run/containerd/containerd.sock
    --pod-infra-container-image=k8s.gcr.io/pause:3.6
"
EOF

** Please note: if you plan to deploy K8S in a cloud and integrate with it, set --cloud-provider=external.

*** A useful feature is the automatic labeling of the node when it registers in the cluster:
--node-labels=node.kubernetes.io/master,foo=bar

Below is the list of system label prefixes and keys that the kubelet is allowed to set on itself:
kubelet.kubernetes.io
node.kubernetes.io
beta.kubernetes.io/arch,
beta.kubernetes.io/instance-type,
beta.kubernetes.io/os,
failure-domain.beta.kubernetes.io/region,
failure-domain.beta.kubernetes.io/zone,
kubernetes.io/arch,
kubernetes.io/hostname,
kubernetes.io/os,
node.kubernetes.io/instance-type,
topology.kubernetes.io/region,
topology.kubernetes.io/zone

For example, you cannot set a system label that is not from the list:
--node-labels=node-role.kubernetes.io/master
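
Restricted system labels such as node-role.kubernetes.io/master can still be applied after the node registers, using the admin kubeconfig described later in this article; a minimal sketch (the node name is whatever kubectl get nodes reports):

kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes
kubectl --kubeconfig=/etc/kubernetes/admin.conf label node <node-name> node-role.kubernetes.io/master=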

4) Preparing Vault.

As we wrote earlier, we will create certificates through the centralized Vault storage.

For this example we will run the Vault server on master-0 in dev mode, with the storage already unsealed and a default root token, for convenience.

vault
## RUN ON MASTER-0.
export VAULT_VERSION="1.12.1"
export VAULT_ADDR=${VAULT_SERVER}
export VAULT_TOKEN=${VAULT_MASTER_TOKEN}

wget -O /tmp/vault_${VAULT_VERSION}_linux_amd64.zip https://releases.hashicorp.com/vault/${VAULT_VERSION}/vault_${VAULT_VERSION}_linux_amd64.zip
unzip /tmp/vault_${VAULT_VERSION}_linux_amd64.zip -d /usr/bin
## RUN ON MASTER-0.
cat <<EOF > /etc/systemd/system/vault.service
[Unit]
Description=Vault secret management tool
After=consul.service


[Service]
PermissionsStartOnly=true
ExecStart=/usr/bin/vault server -log-level=debug -dev -dev-root-token-id="${VAULT_MASTER_TOKEN}" -dev-listen-address=0.0.0.0:9200
Restart=on-failure
LimitMEMLOCK=infinity

[Install]
WantedBy=multi-user.target
EOF
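
The unit file by itself does not start anything, so before enabling the PKI engine reload systemd, start the service, and check that the dev server answers:

## RUN ON MASTER-0.
systemctl daemon-reload
systemctl enable vault.service
systemctl start vault.service

vault status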
## RUN ON MASTER-0.
#enable Vault PKI secret engine 
vault secrets enable -path=pki-root pki

#set default ttl
vault secrets tune -max-lease-ttl=87600h pki-root

#generate root CA
vault write -format=json pki-root/root/generate/internal \
common_name="ROOT PKI" ttl=8760h

*Please note that if you are located in Russia, you will have problems accessing Vault and Terraform downloads.

** pki-root/root/generate/internal – indicates that a CA will be generated and only the certificate (the public part) is returned; the private key never leaves Vault.

*** pki-root – the base name of the secrets engine (mount) for the Root CA; it can be changed by customizing the Terraform module, which we discuss below.

**** This Vault installation is for demonstration purposes only and must not be used for production workloads.

Great, we have deployed Vault, now we need to prepare the roles, policies and accesses in it for key-keeper.

To do this, we use our module for Terraform.

Terraform
## RUN ON MASTER-0.
export TERRAFORM_VERSION="1.3.4"

wget -O /tmp/terraform_${TERRAFORM_VERSION}_linux_amd64.zip https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip

unzip /tmp/terraform_${TERRAFORM_VERSION}_linux_amd64.zip -d /usr/bin
## RUN ON MASTER-0.
mkdir terraform

cat <<EOF > terraform/main.tf
terraform {
  required_version = ">= 0.13"

}

provider "vault" {
    
    address = "http://127.0.0.1:9200/"
    token = "${VAULT_MASTER_TOKEN}"
}


variable "master-instance-count" {
  type = number
  default = 1
}

variable "base_domain" {
  type = string
  default = "${BASE_DOMAIN}"
}

variable "cluster_name" {
  type = string
  default = "${CLUSTER_NAME}"
}

variable "vault_server" {
  type = string
  default = "http://master-0.${BASE_CLUSTER_DOMAIN}:9200/"
}

# This module generates the full set of variables
# that will be needed in the following articles and modules.
module "k8s-global-vars" {
    source = "git::https://github.com/fraima/kubernetes.git//modules/k8s-config-vars"
    cluster_name          = var.cluster_name
    base_domain           = var.base_domain
    master_instance_count = var.master-instance-count
    vault_server          = var.vault_server
}

# All the Vault magic happens here.
module "k8s-vault" {
    source = "git::https://github.com/fraima/kubernetes.git//modules/k8s-vault"
    k8s_global_vars   = module.k8s-global-vars
}
EOF
cd terraform 
terraform init --upgrade
terraform plan
terraform apply
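
Once terraform apply finishes, you can get a rough picture of what the module created straight from the Vault CLI; the mount names below assume the masks described in the list that follows.

## RUN ON MASTER-0 (optional check).
vault secrets list   # expect PKI and KV mounts such as clusters/example/pki/etcd and clusters/example/kv
vault auth list      # expect the AppRole mount, e.g. clusters/example/approle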

The basic Vault content for a production cluster includes:

  1. PKI secrets engines (mounts) for etcd, kubernetes, and front-proxy (* the PKI mounts are created by masks):

    1. clusters/${CLUSTER_NAME}/pki/etcd

    2. clusters/${CLUSTER_NAME}/pki/kubernetes-ca

    3. clusters/${CLUSTER_NAME}/pki/front-proxy

  2. A Key-Value (KV) secrets engine for secrets

    1. clusters/${CLUSTER_NAME}/kv/

  3. Roles for issuing certificates (the links lead to the description of each certificate)

    1. ETCD:

      1. etcd-client

      2. etcd-server (not used in this installation)

      3. etcd-peer

    2. Kubernetes-ca:

      1. bootstrappers-client (not used in this installation)

      2. kube-controller-manager-client

      3. kube-controller-manager-server

      4. kube-apiserver-kubelet-client **

      5. kubeadm-client (used as cluster-admin in this installation)

      6. kube-apiserver-cluster-admin-client *** (not used in this installation)

      7. kube-apiserver

      8. kube-scheduler-server

      9. kube-scheduler-client

      10. kubelet-peer-k8s-certmanager (Not used in this installation)

      11. kubelet-server

      12. kubelet-client

    3. Front proxy:

      1. front-proxy-client

  4. Access policies for the roles listed above

  5. AppRoles for client access.

    1. The AppRole mount path is formed by the mask – clusters/${CLUSTER_NAME}/approle

    2. The AppRole name is formed by the mask – ${CERT_ROLE}-${MASTER_NAME}

  6. Temporary tokens.

  7. Keys for signing service-account JWT tokens.

** The kube-apiserver-kubelet-client certificate usually carries cluster-admin privileges in most installations. Here, by default, it has no rights, so a ClusterRoleBinding will be needed for correct interaction with the node kubelets – more on that later (see the Verification block at the end of the article).

*** kubeadm-client has cluster-admin rights by default. In this installation it is used as the administrator client for the initial cluster setup.

5) Let’s start generating configuration files for our services.

** I remind you that there are only 3 of them (key-keeper, kubelet, containerd).
*** We will not cover containerd: it generates a basic config itself, and in most cases that is enough.

Let’s start with key-keeper.

The details of how the config is structured can be found in the README.

The config is very long, so don’t be surprised.

key-keeper.issuers
## RUN ON EACH MASTER.
# Each node gets its own name!!!!
export MASTER_NAME="master-0"

In the first part of the config we specify the node name; all other variables were set above.

## RUN ON EACH MASTER.
mkdir -p ${BASE_K8S_PATH}/pki/

cat <<EOF > ${BASE_K8S_PATH}/pki/vault-config
---
issuers:

  - name: kube-apiserver-sa
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: kube-apiserver-sa-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/kube-apiserver-sa/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/kube-apiserver-sa/role-id
      resource:
        kv:
          path: clusters/${CLUSTER_NAME}/kv
      timeout: 15s

  - name: etcd-ca
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: etcd-ca-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/etcd-ca/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/etcd-ca/role-id
      resource:
        CAPath: "clusters/${CLUSTER_NAME}/pki/etcd"
        rootCAPath: "clusters/${CLUSTER_NAME}/pki/root"
      timeout: 15s

  - name: etcd-client
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: etcd-client-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/etcd-client/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/etcd-client/role-id
      resource:
        role: etcd-client
        CAPath: "clusters/${CLUSTER_NAME}/pki/etcd"
      timeout: 15s

  - name: etcd-peer
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: etcd-peer-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/etcd-peer/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/etcd-peer/role-id
      resource:
        role: etcd-peer
        CAPath: "clusters/${CLUSTER_NAME}/pki/etcd"
      timeout: 15s

  - name: front-proxy-ca
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: front-proxy-ca-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/front-proxy-ca/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/front-proxy-ca/role-id
      resource:
        CAPath: "clusters/${CLUSTER_NAME}/pki/front-proxy"
        rootCAPath: "clusters/${CLUSTER_NAME}/pki/root"
      timeout: 15s

  - name: front-proxy-client
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: front-proxy-client-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/front-proxy-client/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/front-proxy-client/role-id
      resource:
        role: front-proxy-client
        CAPath: "clusters/${CLUSTER_NAME}/pki/front-proxy"
      timeout: 15s

  - name: kubernetes-ca
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: kubernetes-ca-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/kubernetes-ca/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/kubernetes-ca/role-id
      resource:
        CAPath: "clusters/${CLUSTER_NAME}/pki/kubernetes"
        rootCAPath: "clusters/${CLUSTER_NAME}/pki/root"
      timeout: 15s

  - name: kube-apiserver
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: kube-apiserver-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/kube-apiserver/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/kube-apiserver/role-id
      resource:
        role: kube-apiserver
        CAPath: "clusters/${CLUSTER_NAME}/pki/kubernetes"
      timeout: 15s

  - name: kube-apiserver-cluster-admin-client
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: kube-apiserver-cluster-admin-client-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/kube-apiserver-cluster-admin-client/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/kube-apiserver-cluster-admin-client/role-id
      resource:
        role: kube-apiserver-cluster-admin-client
        CAPath: "clusters/${CLUSTER_NAME}/pki/kubernetes"
      timeout: 15s

  - name: kube-apiserver-kubelet-client
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: kube-apiserver-kubelet-client-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/kube-apiserver-kubelet-client/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/kube-apiserver-kubelet-client/role-id
      resource:
        role: kube-apiserver-kubelet-client
        CAPath: "clusters/${CLUSTER_NAME}/pki/kubernetes"
      timeout: 15s

  - name: kube-controller-manager-client
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: kube-controller-manager-client-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/kube-controller-manager-client/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/kube-controller-manager-client/role-id
      resource:
        role: kube-controller-manager-client
        CAPath: "clusters/${CLUSTER_NAME}/pki/kubernetes"
      timeout: 15s

  - name: kube-controller-manager-server
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: kube-controller-manager-server-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/kube-controller-manager-server/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/kube-controller-manager-server/role-id
      resource:
        role: kube-controller-manager-server
        CAPath: "clusters/${CLUSTER_NAME}/pki/kubernetes"
      timeout: 15s

  - name: kube-scheduler-client
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: kube-scheduler-client-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/kube-scheduler-client/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/kube-scheduler-client/role-id
      resource:
        role: kube-scheduler-client
        CAPath: "clusters/${CLUSTER_NAME}/pki/kubernetes"
      timeout: 15s

  - name: kube-scheduler-server
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: kube-scheduler-server-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/kube-scheduler-server/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/kube-scheduler-server/role-id
      resource:
        role: kube-scheduler-server
        CAPath: "clusters/${CLUSTER_NAME}/pki/kubernetes"
      timeout: 15s

  - name: kubeadm-client
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: kubeadm-client-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/kubeadm-client/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/kubeadm-client/role-id
      resource:
        role: kubeadm-client
        CAPath: "clusters/${CLUSTER_NAME}/pki/kubernetes"
      timeout: 15s

  - name: kubelet-client
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: kubelet-client-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/kubelet-client/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/kubelet-client/role-id
      resource:
        role: kubelet-client
        CAPath: "clusters/${CLUSTER_NAME}/pki/kubernetes"
      timeout: 15s

  - name: kubelet-server
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: kubelet-server-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/kubelet-server/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/kubelet-server/role-id
      resource:
        role: kubelet-server
        CAPath: "clusters/${CLUSTER_NAME}/pki/kubernetes"
      timeout: 15s
EOF
key-keeper.certs
## RUN ON EACH MASTER.
cat <<EOF >> ${BASE_K8S_PATH}/pki/vault-config
certificates:

  - name: etcd-ca
    issuerRef:
      name: etcd-ca
    isCa: true
    ca:
      exportedKey: false
      generate: false
    hostPath: "${BASE_K8S_PATH}/pki/ca"

  - name: kube-apiserver-etcd-client
    issuerRef:
      name: etcd-client
    spec:
      subject:
        commonName: "system:kube-apiserver-etcd-client"
      usage:
        - client auth
      privateKey:
        algorithm: "RSA"
        encoding: "PKCS1"
        size: 4096
      ttl: 10m
    renewBefore: 7m
    hostPath: "${BASE_K8S_PATH}/pki/certs/kube-apiserver"
    withUpdate: true

  - name: etcd-peer
    issuerRef:
      name: etcd-peer
    spec:
      subject:
        commonName: "system:etcd-peer"
      usage:
        - server auth
        - client auth
      privateKey:
        algorithm: "RSA"
        encoding: "PKCS1"
        size: 4096
      ipAddresses:
        interfaces:
          - lo
          - eth*
      ttl: 10m
      hostnames:
        - localhost
        - $HOSTNAME
        - "${MASTER_NAME}.${BASE_CLUSTER_DOMAIN}"
    renewBefore: 7m
    hostPath: "${BASE_K8S_PATH}/pki/certs/etcd"
    withUpdate: true

  - name: etcd-server
    issuerRef:
      name: etcd-peer
    spec:
      subject:
        commonName: "system:etcd-server"
      usage:
        - server auth
        - client auth
      privateKey:
        algorithm: "RSA"
        encoding: "PKCS1"
        size: 4096
      ipAddresses:
        static:
          - 127.0.1.1
        interfaces:
          - lo
          - eth*
      ttl: 10m
      hostnames:
        - localhost
        - $HOSTNAME
        - "${MASTER_NAME}.${BASE_CLUSTER_DOMAIN}"
    renewBefore: 7m
    hostPath: "${BASE_K8S_PATH}/pki/certs/etcd"
    withUpdate: true

  - name: front-proxy-ca
    issuerRef:
      name: front-proxy-ca
    isCa: true
    ca:
      exportedKey: false
      generate: false
    hostPath: "${BASE_K8S_PATH}/pki/ca"

  - name: front-proxy-client
    issuerRef:
      name: front-proxy-client
    spec:
      subject:
        commonName: "custom:kube-apiserver-front-proxy-client"
      usage:
        - client auth
      privateKey:
        algorithm: "RSA"
        encoding: "PKCS1"
        size: 4096
      ttl: 10m
    renewBefore: 7m
    hostPath: "${BASE_K8S_PATH}/pki/certs/kube-apiserver"
    withUpdate: true

  - name: kubernetes-ca
    issuerRef:
      name: kubernetes-ca
    isCa: true
    ca:
      exportedKey: false
      generate: false
    hostPath: "${BASE_K8S_PATH}/pki/ca"

  - name: kube-apiserver
    issuerRef:
      name: kube-apiserver
    spec:
      subject:
        commonName: "custom:kube-apiserver"
      usage:
        - server auth
      privateKey:
        algorithm: "RSA"
        encoding: "PKCS1"
        size: 4096
      ipAddresses:
        static:
          - 29.64.0.1
        interfaces:
          - lo
          - eth*
        dnsLookup:
          - api.${BASE_CLUSTER_DOMAIN}
      ttl: 10m
      hostnames:
        - localhost
        - kubernetes
        - kubernetes.default
        - kubernetes.default.svc
        - kubernetes.default.svc.cluster
        - kubernetes.default.svc.cluster.local
    renewBefore: 7m
    hostPath: "${BASE_K8S_PATH}/pki/certs/kube-apiserver"
    withUpdate: true

  - name: kube-apiserver-kubelet-client
    issuerRef:
      name: kube-apiserver-kubelet-client
    spec:
      subject:
        commonName: "custom:kube-apiserver-kubelet-client"
      usage:
        - client auth
      privateKey:
        algorithm: "RSA"
        encoding: "PKCS1"
        size: 4096
      ttl: 10m
    renewBefore: 7m
    hostPath: "${BASE_K8S_PATH}/pki/certs/kube-apiserver"
    withUpdate: true

  - name: kube-controller-manager-client
    issuerRef:
      name: kube-controller-manager-client
    spec:
      subject:
        commonName: "system:kube-controller-manager"
      usage:
        - client auth
      privateKey:
        algorithm: "RSA"
        encoding: "PKCS1"
        size: 4096
      ttl: 10m
    renewBefore: 7m
    hostPath: "${BASE_K8S_PATH}/pki/certs/kube-controller-manager"
    withUpdate: true

  - name: kube-controller-manager-server
    issuerRef:
      name: kube-controller-manager-server
    spec:
      subject:
        commonName: "custom:kube-controller-manager"
      usage:
        - server auth
      privateKey:
        algorithm: "RSA"
        encoding: "PKCS1"
        size: 4096
      ipAddresses:
        interfaces:
          - lo
          - eth*
      ttl: 10m
      hostnames:
        - localhost
        - kube-controller-manager.default
        - kube-controller-manager.default.svc
        - kube-controller-manager.default.svc.cluster
        - kube-controller-manager.default.svc.cluster.local
    renewBefore: 7m
    hostPath: "${BASE_K8S_PATH}/pki/certs/kube-controller-manager"
    withUpdate: true

  - name: kube-scheduler-client
    issuerRef:
      name: kube-scheduler-client
    spec:
      subject:
        commonName: "system:kube-scheduler"
      usage:
        - client auth
      privateKey:
        algorithm: "RSA"
        encoding: "PKCS1"
        size: 4096
      ttl: 10m
    renewBefore: 7m
    hostPath: "${BASE_K8S_PATH}/pki/certs/kube-scheduler"
    withUpdate: true

  - name: kube-scheduler-server
    issuerRef:
      name: kube-scheduler-server
    spec:
      subject:
        commonName: "custom:kube-scheduler"
      usage:
        - server auth
      privateKey:
        algorithm: "RSA"
        encoding: "PKCS1"
        size: 4096
      ipAddresses:
        interfaces:
          - lo
          - eth*
      ttl: 10m
      hostnames:
        - localhost
        - kube-scheduler.default
        - kube-scheduler.default.svc
        - kube-scheduler.default.svc.cluster
        - kube-scheduler.default.svc.cluster.local
    renewBefore: 7m
    hostPath: "${BASE_K8S_PATH}/pki/certs/kube-scheduler"
    withUpdate: true

  - name: kubeadm-client
    issuerRef:
      name: kubeadm-client
    spec:
      subject:
        commonName: "custom:kubeadm-client"
        organizationalUnit:
          - system:masters
      usage:
        - client auth
      privateKey:
        algorithm: "RSA"
        encoding: "PKCS1"
        size: 4096
      ttl: 10m
    renewBefore: 7m
    hostPath: "${BASE_K8S_PATH}/pki/certs/kube-apiserver"
    withUpdate: true

  - name: kubelet-client
    issuerRef:
      name: kubelet-client
    spec:
      subject:
        commonName: "system:node:${MASTER_NAME}-${CLUSTER_NAME}"
        organization:
          - system:nodes
      usage:
        - client auth
      privateKey:
        algorithm: "RSA"
        encoding: "PKCS1"
        size: 4096
      ttl: 10m
    renewBefore: 7m
    hostPath: "${BASE_K8S_PATH}/pki/certs/kubelet"
    withUpdate: true

  - name: kubelet-server
    issuerRef:
      name: kubelet-server
    spec:
      subject:
        commonName: "system:node:${MASTER_NAME}-${CLUSTER_NAME}"
      usage:
        - server auth
      privateKey:
        algorithm: "RSA"
        encoding: "PKCS1"
        size: 4096
      ipAddresses:
        interfaces:
          - lo
          - eth*
      ttl: 10m
      hostnames:
        - localhost
        - $HOSTNAME
        - "${MASTER_NAME}.${BASE_CLUSTER_DOMAIN}"
    renewBefore: 7m
    hostPath: "${BASE_K8S_PATH}/pki/certs/kubelet"
    withUpdate: true

secrets:
  - name: kube-apiserver-sa
    issuerRef:
      name: kube-apiserver-sa
    key: private  
    hostPath: ${BASE_K8S_PATH}/pki/certs/kube-apiserver/kube-apiserver-sa.pem

  - name: kube-apiserver-sa
    issuerRef:
      name: kube-apiserver-sa
    key: public  
    hostPath: ${BASE_K8S_PATH}/pki/certs/kube-apiserver/kube-apiserver-sa.pub
EOF

** Please note that certificates are issued with ttl=10 minutes and renewBefore=7 minutes, which means each certificate is reissued every 3 minutes. Such short intervals are set only to demonstrate that certificate reissue works correctly. (Change them to values that make sense for you.)

*** Starting with Kubernetes 1.22 (I did not check earlier versions), all components can detect that files on disk have changed and re-read them without a restart.
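
Once the key-keeper service is running (see the systemd section below), it is easy to watch this rotation happen: print the validity dates of any issued certificate, wait a few minutes, and print them again (assumes openssl is installed).

openssl x509 -noout -dates -in ${BASE_K8S_PATH}/pki/certs/kubelet/kubelet-server.pem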

key-keeper.token
## RUN ON EACH MASTER.
mkdir -p /var/lib/key-keeper/

cat <<EOF > /var/lib/key-keeper/bootstrap.token
${VAULT_MASTER_TOKEN}
EOF

** Do not be surprised that this file contains the root token of the Vault server – as I said earlier, this is a simplified configuration.

*** If you look a little deeper into our Vault module for Terraform, you will see that it creates temporary tokens, which should be placed in the bootstrap file of the key-keeper config. Each issuer has its own token. Example -> https://github.com/fraima/kubernetes/blob/f0e4c7bc8f8d2695c419b17fec4bacc2dd7c5f18/modules/k8s-templates/cloud-init/templates/cloud-init-kubeadm-master.tftpl#L115

Most of the reasoning about why things are done this way, and not otherwise, is given in the articles:

K8S certificates or how to unravel vermicelli Part 1

K8S certificates or how to unravel vermicelli Part 2

An important benefit is that we no longer have to think about expiring certificates: key-keeper takes over that task, and we only need to set up monitoring and alerts to make sure the system works correctly.

Kubelet config

config.yaml
## RUN ON EACH MASTER.
mkdir -p ${BASE_K8S_PATH}/kubelet

cat <<EOF >> ${BASE_K8S_PATH}/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: "${BASE_K8S_PATH}/pki/ca/kubernetes-ca.pem"

tlsCertFile: ${BASE_K8S_PATH}/pki/certs/kubelet/kubelet-server.pem
tlsPrivateKeyFile: ${BASE_K8S_PATH}/pki/certs/kubelet/kubelet-server-key.pem

authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
  - "${SERVICE_DNS}"
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 1s
nodeStatusUpdateFrequency: 1s
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: false
runtimeRequestTimeout: 0s
serverTLSBootstrap: true
shutdownGracePeriod: 15s
shutdownGracePeriodCriticalPods: 5s
staticPodPath: "${BASE_K8S_PATH}/manifests"
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
containerLogMaxSize: 50Mi
maxPods: 250
kubeAPIQPS: 50
kubeAPIBurst: 100
podPidsLimit: 4096
serializeImagePulls: false
systemReserved:
  ephemeral-storage: 1Gi
featureGates:
  APIPriorityAndFairness: true
  DownwardAPIHugePages: true
  PodSecurity: true
  CSIMigrationAWS: false
  CSIMigrationAzureFile: false
  CSIMigrationGCE: false
  CSIMigrationvSphere: false
tlsMinVersion: VersionTLS12
tlsCipherSuites:
  - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
  - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
  - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
  - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
  - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
allowedUnsafeSysctls:
  - "net.core.somaxconn"
evictionSoft: 
  memory.available: 3Gi 
  nodefs.available: 25%
  nodefs.inodesFree: 15%
  imagefs.available: 30%
  imagefs.inodesFree: 25%
evictionSoftGracePeriod:  
  memory.available: 2m30s
  nodefs.available: 2m30s
  nodefs.inodesFree: 2m30s
  imagefs.available: 2m30s
  imagefs.inodesFree: 2m30s
evictionHard:
  memory.available: 2Gi
  nodefs.available: 20%
  nodefs.inodesFree: 10%
  imagefs.available: 25%
  imagefs.inodesFree: 15%
evictionPressureTransitionPeriod: 5s 
imageMinimumGCAge: 12h 
imageGCHighThresholdPercent: 55
imageGCLowThresholdPercent: 50
EOF

** clusterDNS – it’s easy to get burned if you specify an incorrect value.

*** resolvConf – on CentOS, RHEL, and AlmaLinux the kubelet may complain about this path; it is fixed with the commands:

systemctl daemon-reload
systemctl enable systemd-resolved.service
systemctl start systemd-resolved.service

Documentation describing the problem:
https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/#known-issues

System configs

The basic configuration of the operating system includes:

  1. Preparing disk space for /var/lib/etcd (not included in this installation)

  2. sysctl setup

  3. modprobe setup

  4. Installing base packages (wget, tar)
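
For item 4, a minimal sketch for apt-based distributions (use yum/dnf on CentOS/RHEL/AlmaLinux):

## RUN ON EACH MASTER.
apt-get update && apt-get install -y wget tar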

modprobe
## RUN ON EACH MASTER.
cat <<EOF >> /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter
sysctls
## RUN ON EACH MASTER.
cat <<EOF >> /etc/sysctl.d/99-network.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
EOF

sysctl --system
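
To confirm that the modules are loaded and the sysctls took effect:

## RUN ON EACH MASTER (optional check).
lsmod | grep -E 'overlay|br_netfilter'
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables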

Kubeconfigs

For the core cluster components and the administrator to communicate with kube-apiserver, a kubeconfig has to be generated for each of them.

** admin.conf – a kubeconfig with cluster-admin rights for the initial cluster setup by the administrator.

admin.conf
## RUN ON EACH MASTER.
mkdir -p ${BASE_K8S_PATH}

cat <<EOF >> ${BASE_K8S_PATH}/admin.conf
---
apiVersion: v1
clusters:
- cluster:
    certificate-authority: ${BASE_K8S_PATH}/pki/ca/kubernetes-ca.pem
    server: https://127.0.0.1:${KUBE_APISERVER_PORT}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: kubeadm
  name: kubeadm@kubernetes
current-context: kubeadm@kubernetes
kind: Config
preferences: {}
users:
- name: kubeadm
  user:
    client-certificate: ${BASE_K8S_PATH}/pki/certs/kube-apiserver/kubeadm-client.pem
    client-key: ${BASE_K8S_PATH}/pki/certs/kube-apiserver/kubeadm-client-key.pem
EOF
kube-scheduler
## RUN ON EACH MASTER.
mkdir -p ${BASE_K8S_PATH}/kube-scheduler/

cat <<EOF >> ${BASE_K8S_PATH}/kube-scheduler/kubeconfig
---
apiVersion: v1
clusters:
- cluster:
    certificate-authority: ${BASE_K8S_PATH}/pki/ca/kubernetes-ca.pem
    server: https://127.0.0.1:${KUBE_APISERVER_PORT}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: kube-scheduler
  name: kube-scheduler@kubernetes
current-context: kube-scheduler@kubernetes
kind: Config
preferences: {}
users:
- name: kube-scheduler
  user:
    client-certificate: ${BASE_K8S_PATH}/pki/certs/kube-scheduler/kube-scheduler-client.pem
    client-key: ${BASE_K8S_PATH}/pki/certs/kube-scheduler/kube-scheduler-client-key.pem
EOF
kube-controller-manager
## RUN ON EACH MASTER.
mkdir -p ${BASE_K8S_PATH}/kube-controller-manager

cat <<EOF >> ${BASE_K8S_PATH}/kube-controller-manager/kubeconfig
---
apiVersion: v1
clusters:
- cluster:
    certificate-authority: ${BASE_K8S_PATH}/pki/ca/kubernetes-ca.pem
    server: https://127.0.0.1:${KUBE_APISERVER_PORT}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: kube-controller-manager
  name: kube-controller-manager@kubernetes
current-context: kube-controller-manager@kubernetes
kind: Config
preferences: {}
users:
- name: kube-controller-manager
  user:
    client-certificate: ${BASE_K8S_PATH}/pki/certs/kube-controller-manager/kube-controller-manager-client.pem
    client-key: ${BASE_K8S_PATH}/pki/certs/kube-controller-manager/kube-controller-manager-client-key.pem
EOF
kubelet
## RUN ON EACH MASTER.
mkdir -p ${BASE_K8S_PATH}/kubelet

cat <<EOF >> ${BASE_K8S_PATH}/kubelet/kubeconfig
---
apiVersion: v1
clusters:
- cluster:
    certificate-authority: ${BASE_K8S_PATH}/pki/ca/kubernetes-ca.pem
    server: https://127.0.0.1:${KUBE_APISERVER_PORT}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: kubelet
  name: kubelet@kubernetes
current-context: kubelet@kubernetes
kind: Config
preferences: {}
users:
- name: kubelet
  user:
    client-certificate: ${BASE_K8S_PATH}/pki/certs/kubelet/kubelet-client.pem
    client-key: ${BASE_K8S_PATH}/pki/certs/kubelet/kubelet-client-key.pem
EOF

Static Pods

kube-apiserver
## RUN ON EACH MASTER.
export ADVERTISE_ADDRESS=$(ip route get 1.1.1.1 | grep -oP 'src \K\S+')

cat <<EOF > /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: ${ADVERTISE_ADDRESS}:${KUBE_APISERVER_PORT}
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=${ADVERTISE_ADDRESS}
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --bind-address=0.0.0.0
    - --client-ca-file=/etc/kubernetes/pki/ca/kubernetes-ca.pem
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/ca/etcd-ca.pem
    - --etcd-certfile=/etc/kubernetes/pki/certs/kube-apiserver/kube-apiserver-etcd-client.pem
    - --etcd-keyfile=/etc/kubernetes/pki/certs/kube-apiserver/kube-apiserver-etcd-client-key.pem
    - --etcd-servers=${ETCD_SERVERS}
    - --kubelet-client-certificate=/etc/kubernetes/pki/certs/kube-apiserver/kube-apiserver-kubelet-client.pem
    - --kubelet-client-key=/etc/kubernetes/pki/certs/kube-apiserver/kube-apiserver-kubelet-client-key.pem
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/certs/kube-apiserver/front-proxy-client.pem
    - --proxy-client-key-file=/etc/kubernetes/pki/certs/kube-apiserver/front-proxy-client-key.pem
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/ca/front-proxy-ca.pem
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=${KUBE_APISERVER_PORT}
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/etc/kubernetes/pki/certs/kube-apiserver/kube-apiserver-sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/pki/certs/kube-apiserver/kube-apiserver-sa.pem
    - --service-cluster-ip-range=${SERVICE_CIDR}
    - --tls-cert-file=/etc/kubernetes/pki/certs/kube-apiserver/kube-apiserver.pem
    - --tls-private-key-file=/etc/kubernetes/pki/certs/kube-apiserver/kube-apiserver-key.pem
    image: k8s.gcr.io/kube-apiserver:${KUBERNETES_VERSION}
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: ${ADVERTISE_ADDRESS}
        path: /livez
        port: ${KUBE_APISERVER_PORT}
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-apiserver
    readinessProbe:
      failureThreshold: 3
      httpGet:
        host: ${ADVERTISE_ADDRESS}
        path: /readyz
        port: ${KUBE_APISERVER_PORT}
        scheme: HTTPS
      periodSeconds: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 250m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: ${ADVERTISE_ADDRESS}
        path: /livez
        port: ${KUBE_APISERVER_PORT}
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /var/log/kubernetes/audit/
      name: k8s-audit
    - mountPath: /etc/kubernetes/pki/ca
      name: k8s-ca
      readOnly: true
    - mountPath: /etc/kubernetes/pki/certs
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/kubernetes/kube-apiserver
      name: k8s-kube-apiserver-configs
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /var/log/kubernetes/audit/
      type: DirectoryOrCreate
    name: k8s-audit
  - hostPath:
      path: /etc/kubernetes/pki/ca
      type: DirectoryOrCreate
    name: k8s-ca
  - hostPath:
      path: /etc/kubernetes/pki/certs
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/kubernetes/kube-apiserver
      type: DirectoryOrCreate
    name: k8s-kube-apiserver-configs
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
EOF

** Please note that determining ADVERTISE_ADDRESS this way requires Internet access (a route to 1.1.1.1); if there is none, just set it to the node's IP address manually.
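
If the node really has no route to 1.1.1.1, one possible fallback is to take the first address reported by hostname, checking that it is the address you actually want to advertise:

## RUN ON EACH MASTER (only if the 'ip route get 1.1.1.1' trick does not work).
export ADVERTISE_ADDRESS=$(hostname -I | awk '{print $1}')
echo ${ADVERTISE_ADDRESS}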

kube-controller-manager
## RUN ON EACH MASTER.
cat <<EOF > /etc/kubernetes/manifests/kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --authentication-kubeconfig=/etc/kubernetes/kube-controller-manager/kubeconfig
    - --authorization-always-allow-paths=/healthz,/metrics
    - --authorization-kubeconfig=/etc/kubernetes/kube-controller-manager/kubeconfig
    - --bind-address=${ADVERTISE_ADDRESS}
    - --client-ca-file=/etc/kubernetes/pki/ca/kubernetes-ca.pem
    - --cluster-cidr=${SERVICE_CIDR}
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca/kubernetes-ca.pem
    - --cluster-signing-key-file=
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/kube-controller-manager/kubeconfig
    - --leader-elect=true
    - --requestheader-client-ca-file=/etc/kubernetes/pki/ca/front-proxy-ca.pem
    - --root-ca-file=/etc/kubernetes/pki/ca/kubernetes-ca.pem
    - --secure-port=${KUBE_CONTROLLER_MANAGER_PORT}
    - --service-account-private-key-file=/etc/kubernetes/pki/certs/kube-apiserver/kube-apiserver-sa.pem
    - --tls-cert-file=/etc/kubernetes/pki/certs/kube-controller-manager/kube-controller-manager-server.pem
    - --tls-private-key-file=/etc/kubernetes/pki/certs/kube-controller-manager/kube-controller-manager-server-key.pem
    - --use-service-account-credentials=true
    image: k8s.gcr.io/kube-controller-manager:${KUBERNETES_VERSION}
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: ${ADVERTISE_ADDRESS}
        path: /healthz
        port: ${KUBE_CONTROLLER_MANAGER_PORT}
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: ${ADVERTISE_ADDRESS}
        path: /healthz
        port: ${KUBE_CONTROLLER_MANAGER_PORT}
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      name: flexvolume-dir
    - mountPath: /etc/kubernetes/pki/ca
      name: k8s-ca
      readOnly: true
    - mountPath: /etc/kubernetes/pki/certs
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/kubernetes/kube-controller-manager
      name: k8s-kube-controller-manager-configs
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      type: DirectoryOrCreate
    name: flexvolume-dir
  - hostPath:
      path: /etc/kubernetes/pki/ca
      type: DirectoryOrCreate
    name: k8s-ca
  - hostPath:
      path: /etc/kubernetes/pki/certs
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/kubernetes/kube-controller-manager
      type: DirectoryOrCreate
    name: k8s-kube-controller-manager-configs
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
EOF
kube-scheduler
## RUN ON EACH MASTER.
cat <<EOF > /etc/kubernetes/manifests/kube-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/kube-scheduler/kubeconfig
    - --authorization-kubeconfig=/etc/kubernetes/kube-scheduler/kubeconfig
    - --bind-address=${ADVERTISE_ADDRESS}
    - --kubeconfig=/etc/kubernetes/kube-scheduler/kubeconfig
    - --leader-elect=true
    - --secure-port=${KUBE_SCHEDULER_PORT}
    - --tls-cert-file=/etc/kubernetes/pki/certs/kube-scheduler/kube-scheduler-server.pem
    - --tls-private-key-file=/etc/kubernetes/pki/certs/kube-scheduler/kube-scheduler-server-key.pem
    image: k8s.gcr.io/kube-scheduler:${KUBERNETES_VERSION}
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: ${ADVERTISE_ADDRESS}
        path: /healthz
        port: ${KUBE_SCHEDULER_PORT}
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-scheduler
    resources:
      requests:
        cpu: 100m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: ${ADVERTISE_ADDRESS}
        path: /healthz
        port: ${KUBE_SCHEDULER_PORT}
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/kubernetes/pki/ca
      name: k8s-ca
      readOnly: true
    - mountPath: /etc/kubernetes/pki/certs
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/kubernetes/kube-scheduler
      name: k8s-kube-scheduler-configs
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/ca
      type: DirectoryOrCreate
    name: k8s-ca
  - hostPath:
      path: /etc/kubernetes/pki/certs
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/kubernetes/kube-scheduler
      type: DirectoryOrCreate
    name: k8s-kube-scheduler-configs
status: {}
EOF
etcd
## RUN ON EACH MASTER.
cat <<EOF > /etc/kubernetes/manifests/etcd.yaml
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - name: etcd
    command:
      - etcd
    args:
      - --name=${MASTER_NAME}.${BASE_CLUSTER_DOMAIN}
      - --initial-cluster=${ETCD_INITIAL_CLUSTER}
      - --initial-advertise-peer-urls=https://${MASTER_NAME}.${BASE_CLUSTER_DOMAIN}:${ETCD_PEER_PORT}
      - --advertise-client-urls=https://${MASTER_NAME}.${BASE_CLUSTER_DOMAIN}:${ETCD_SERVER_PORT}
      - --peer-trusted-ca-file=/etc/kubernetes/pki/ca/etcd-ca.pem
      - --trusted-ca-file=/etc/kubernetes/pki/ca/etcd-ca.pem
      - --peer-cert-file=/etc/kubernetes/pki/certs/etcd/etcd-peer.pem
      - --peer-key-file=/etc/kubernetes/pki/certs/etcd/etcd-peer-key.pem
      - --cert-file=/etc/kubernetes/pki/certs/etcd/etcd-server.pem
      - --key-file=/etc/kubernetes/pki/certs/etcd/etcd-server-key.pem
      - --listen-client-urls=https://0.0.0.0:${ETCD_SERVER_PORT}
      - --listen-peer-urls=https://0.0.0.0:${ETCD_PEER_PORT}
      - --listen-metrics-urls=http://0.0.0.0:${ETCD_METRICS_PORT}
      - --initial-cluster-token=etcd
      - --initial-cluster-state=new
      - --data-dir=/var/lib/etcd
      - --strict-reconfig-check=true
      - --peer-client-cert-auth=true
      - --peer-auto-tls=true
      - --client-cert-auth=true
      - --snapshot-count=10000
      - --heartbeat-interval=250
      - --election-timeout=1500
      - --quota-backend-bytes=0
      - --max-snapshots=10
      - --max-wals=10
      - --discovery-fallback=proxy
      - --auto-compaction-retention=8
      - --force-new-cluster=false
      - --enable-v2=false
      - --proxy=off
      - --proxy-failure-wait=5000
      - --proxy-refresh-interval=30000
      - --proxy-dial-timeout=1000
      - --proxy-write-timeout=5000
      - --proxy-read-timeout=0
      - --metrics=extensive
      - --logger=zap
    image: k8s.gcr.io/etcd:${ETCD_VERSION}
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /health
        port: ${ETCD_METRICS_PORT}
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /health
        port: ${ETCD_METRICS_PORT}
        scheme: HTTP
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/certs/etcd
      name: etcd-certs
    - mountPath: /etc/kubernetes/pki/ca
      name: ca
  hostNetwork: true
  priorityClassName: system-node-critical
  securityContext: null
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/certs/etcd
      type: DirectoryOrCreate
    name: etcd-certs
  - hostPath:
      path: /etc/kubernetes/pki/ca
      type: DirectoryOrCreate
    name: ca
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
status: {}
EOF
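
Because the heredocs above are not quoted, any unset variable silently expands to an empty string and produces a broken manifest. A quick sanity check before moving on (a minimal sketch; extend the variable list to whatever your manifests actually use):

check variables
## RUN ON EACH MASTER.
for VAR in MASTER_NAME BASE_CLUSTER_DOMAIN ADVERTISE_ADDRESS \
           ETCD_INITIAL_CLUSTER ETCD_VERSION ETCD_SERVER_PORT ETCD_PEER_PORT; do
  # ${!VAR} is bash indirect expansion: the value of the variable whose name is in VAR.
  [ -n "${!VAR}" ] || echo "WARNING: ${VAR} is empty"
done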

systemd

Now for the easy part: enable all the services and add them to autostart.

services
## RUN ON EACH MASTER.
systemctl daemon-reload
systemctl enable  key-keeper.service
systemctl start   key-keeper.service
systemctl enable  kubelet.service
systemctl start   kubelet.service
systemctl enable  containerd.service
systemctl start   containerd.service
systemctl enable  systemd-resolved.service
systemctl start   systemd-resolved.service
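
To make sure nothing failed silently, it is worth verifying that all four units are actually running (a minimal sketch; adjust the unit names if yours differ):

check services
## RUN ON EACH MASTER.
for UNIT in key-keeper kubelet containerd systemd-resolved; do
  echo -n "${UNIT}: "
  # Prints active/inactive/failed; on failure, show the last 20 log lines of that unit.
  systemctl is-active "${UNIT}" || journalctl -u "${UNIT}" --no-pager -n 20
done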

Verification

So, the configuration is ready and we have applied all the steps on each master; now we need to check that everything works correctly.

First, we check that the certificates have been issued.

tree /etc/kubernetes/pki/ | grep -v key | grep pem | wc -l
We should get 17 certificates.

root@master-1-example:/home/dkot# tree /etc/kubernetes/pki/
/etc/kubernetes/pki/
├── ca
│   ├── etcd-ca.pem
│   ├── front-proxy-ca.pem
│   └── kubernetes-ca.pem
├── certs
│   ├── etcd
│   │   ├── etcd-peer-key.pem
│   │   ├── etcd-peer.pem
│   │   ├── etcd-server-key.pem
│   │   └── etcd-server.pem
│   ├── kube-apiserver
│   │   ├── front-proxy-client-key.pem
│   │   ├── front-proxy-client.pem
│   │   ├── kubeadm-client-key.pem
│   │   ├── kubeadm-client.pem
│   │   ├── kube-apiserver-etcd-client-key.pem
│   │   ├── kube-apiserver-etcd-client.pem
│   │   ├── kube-apiserver-key.pem
│   │   ├── kube-apiserver-kubelet-client-key.pem
│   │   ├── kube-apiserver-kubelet-client.pem
│   │   ├── kube-apiserver.pem
│   │   ├── kube-apiserver-sa.pem
│   │   └── kube-apiserver-sa.pub
│   ├── kube-controller-manager
│   │   ├── kube-controller-manager-client-key.pem
│   │   ├── kube-controller-manager-client.pem
│   │   ├── kube-controller-manager-server-key.pem
│   │   └── kube-controller-manager-server.pem
│   ├── kubelet
│   │   ├── kubelet-client-key.pem
│   │   ├── kubelet-client.pem
│   │   ├── kubelet-server-key.pem
│   │   └── kubelet-server.pem
│   └── kube-scheduler
│       ├── kube-scheduler-client-key.pem
│       ├── kube-scheduler-client.pem
│       ├── kube-scheduler-server-key.pem
│       └── kube-scheduler-server.pem
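
Besides counting the files, it can be useful to see who each certificate was issued to and when it expires. A minimal sketch with openssl, assuming the layout above (the service-account key pair is skipped because it is not an x509 certificate):

certificate details
## RUN ON EACH MASTER.
find /etc/kubernetes/pki -name "*.pem" ! -name "*-key.pem" ! -name "*-sa.pem" |
while read -r CRT; do
  echo "== ${CRT}"
  # Print the subject (CN) and the notAfter date of each certificate.
  openssl x509 -in "${CRT}" -noout -subject -enddate
done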

If there are fewer certificates, or none at all, check the key-keeper service logs:
journalctl -xefu key-keeper (you will find the answers to most questions there).

Common mistakes:

  • Invalid configuration file.

  • key-keeper cannot authenticate to Vault.

  • The token or approle does not have the policies required to use the role.

  • The requested certificate contains parameters that are not allowed by the Vault role.

Checking that all containers are running and working correctly

crictl  --runtime-endpoint unix:///run/containerd/containerd.sock ps -a

CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
08e2c895b4a20       23f16c2de4792       34 minutes ago      Running             kube-apiserver            4                   b89014de1d7d8
5f1f770280cc7       23f16c2de4792       35 minutes ago      Exited              kube-apiserver            3                   b89014de1d7d8
3313b1ec20e0a       aebe758cef4cd       35 minutes ago      Running             etcd                      2                   cb5b2ca15cc28
e91d3bbb55b97       aebe758cef4cd       37 minutes ago      Exited              etcd                      1                   cb5b2ca15cc28
b3b004e6896db       4bf8b96f38e3b       39 minutes ago      Running             kube-controller-manager   0                   9904b2d296bca
77d316d50693a       ea40e3ed8cf2f       39 minutes ago      Running             kube-scheduler            0                   24fac1b156ea4

If a container is in the EXITED state, check its logs.

crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs $CONTAINER_ID
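
Typing --runtime-endpoint every time gets tedious. As an optional convenience (a minimal sketch), crictl can read the endpoint from its config file instead:

crictl config
## RUN ON EACH MASTER.
cat <<EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF

After that, plain crictl ps -a and crictl logs <CONTAINER_ID> work without extra flags.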

Checking the assembled ETCD cluster

endpoint status
## RUN ON EACH MASTER.
export ETCDCTL_CERT=/etc/kubernetes/pki/certs/kube-apiserver/kube-apiserver-etcd-client.pem
export ETCDCTL_KEY=/etc/kubernetes/pki/certs/kube-apiserver/kube-apiserver-etcd-client-key.pem
export ETCDCTL_CACERT=/etc/kubernetes/pki/ca/etcd-ca.pem

etcd_endpoints () {
  export ENDPOINTS=$(echo $(ENDPOINTS=127.0.0.1:${ETCD_SERVER_PORT}
    etcdctl \
      --endpoints=$ENDPOINTS \
      member list |
      awk '{print $5}' |
      sed "s/,//") | sed "s/ /,/g")
}

etcd_endpoints

estat () {
  etcdctl \
    --write-out=table \
    --endpoints=$ENDPOINTS \
    endpoint status
}

estat

It is useful to add these snippets to your .bashrc for quick status checks and etcd debugging (a companion health-check helper is sketched after the example output below).

The output should look something like this (the number of instances should match MASTER_COUNT):

root@master-1-example:/home/dkot# estat
+--------------------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|                  ENDPOINT                  |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://master-1.example.dobry-kot.ru:2379 | 530f4c34efefa4a2 |   3.5.3 |  8.3 MB |      true |      false |         2 |       6433 |               6433 |        |
| https://master-2.example.dobry-kot.ru:2379 | 85281728dcb33e5f |   3.5.3 |  8.3 MB |     false |      false |         2 |       6433 |               6433 |        |
| https://master-0.example.dobry-kot.ru:2379 | ae74003c0ad34ecd |   3.5.3 |  8.3 MB |     false |      false |         2 |       6433 |               6433 |        |
+--------------------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
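
In addition to endpoint status, it does not hurt to check endpoint health as well. A companion helper in the same style (a minimal sketch that reuses the ENDPOINTS variable set by etcd_endpoints):

endpoint health
## RUN ON EACH MASTER.
ehealth () {
  etcdctl \
    --write-out=table \
    --endpoints=$ENDPOINTS \
    endpoint health
}

ehealth

Every endpoint should be reported as healthy; if one is not, check the etcd container logs on that master.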

Finally, we check that the Kubernetes API is responding and that all the nodes have joined.

kubectl get nodes --kubeconfig=/etc/kubernetes/admin.conf

NAME               STATUS     ROLES    AGE   VERSION
master-0-example   NotReady   <none>   30m   v1.23.12
master-1-example   NotReady   <none>   29m   v1.23.12
master-2-example   NotReady   <none>   25m   v1.23.12

We run this command on one of the masters and see that all the nodes have joined, but every node is in the NotReady status. Do not worry: this is because no CNI plugin is installed yet.
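
If you want to confirm that NotReady is really caused by the missing CNI plugin, you can print the Ready condition message for each node (a minimal sketch using kubectl jsonpath):

node conditions
kubectl get nodes --kubeconfig=/etc/kubernetes/admin.conf \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].message}{"\n"}{end}'

Each line should typically mention that the container runtime network is not ready because the network plugin is not initialized.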

** I hope you have not forgotten what we wrote about the kube-apiserver-kubelet-client certificate.
In our installation this certificate has no permissions at first, but kube-apiserver still needs access to the kubelet on the nodes: it is this certificate that is presented (and checked by RBAC) when operations such as "kubectl exec" and "kubectl logs" are performed.
Fortunately, a fresh cluster already ships with a suitable ClusterRole, so we only need to add the corresponding ClusterRoleBinding and then check that logs work.

ClusterRoleBinding
cat <<EOF | kubectl apply -f -
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: custom:kube-apiserver-kubelet-client
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kubelet-api-admin
subjects:
- kind: User
  apiGroup: rbac.authorization.k8s.io
  name: custom:kube-apiserver-kubelet-client
EOF

** Please note that the certificate is issued with CN=custom:kube-apiserver-kubelet-client (if you need to customize the name, edit it in the terraform module).
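
To verify the binding without waiting for a real kubectl exec, you can ask the API server whether this user is now allowed to reach the kubelet API. A minimal sketch (the --as value must match the CN of your certificate):

RBAC check
kubectl auth can-i get nodes/proxy \
  --as=custom:kube-apiserver-kubelet-client \
  --kubeconfig=/etc/kubernetes/admin.conf

The expected answer is "yes"; after that, try kubectl logs on any kube-system pod to confirm the whole chain end to end.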

Summary

In this article, we have achieved all our goals:

  1. We updated the kubernetes cluster deployment steps, expanding the description and adding up-to-date configurations.

  2. We showed that even a basic setup, without high availability or integration with external systems, is a laborious process that requires a good understanding of the product.

  3. All certificates are issued through key-keeper (the client) from a centralized Vault store and are reissued when they expire.

In the next article, I want to cover automating the deployment of a Kubernetes cluster with Terraform and present the first version of cloud kubernetes for Yandex Cloud, which has almost the same functionality as Yandex Managed Service for Kubernetes.

Subscribe and give the article a thumbs up if you liked it.

We will be glad to discuss our work with you at https://t.me/fraima_ru

Useful reading:

K8S certificates, or how to untangle the spaghetti (Part 1)

K8S certificates, or how to untangle the spaghetti (Part 2)

https://github.com/kelseyhightower/kubernetes-the-hard-way
