This article is for beginners. If you are an experienced ninja, just recall how this information might once have been useful to you too 😉
Kubernetes was created by Google based on its own experience with containers in a production environment, and it owes much of its success to Google.
So what is Kubernetes, and why would we want to use it rather than plain containers like Docker?
Let's recall what containers are…
Containers package the services that make up an application and make them portable across different computing environments: development, testing, and production. With containers, it is easy to quickly scale the number of application instances to meet peak demand. And because containers share the host OS kernel, they are much lighter than virtual machines, which means they make very efficient use of the underlying server infrastructure.
Everything would be fine, but there is one catch: the container runtime API works well for managing individual containers, but not at all for managing applications spread across hundreds of containers on a large number of hosts.
Containers need to connect to the outside world and be managed for load balancing, distribution, and scheduling.
This is where you need Kubernetes…
Kubernetes is an open source system for deploying, scaling, and managing containerized applications.
Kubernetes is essentially more than just an orchestration system. Technically, orchestration means executing a specific workflow: first do A, then B, then C.
Kubernetes removes the direct need for this. It consists of management processes that are independent and composable. Their main task is to drive the current state toward the desired state. It no longer matters which route the system takes from A to C, which eliminates the need for centralized control.
As a result, the system is easier to use, more powerful, more reliable, and more robust and extensible.
Containers allow applications to be divided into smaller parts with a clear separation of concerns. The abstraction layer provided for a single container image allows us to understand how distributed applications are built. This modular approach allows for faster development with smaller and more focused teams, each with responsibility for specific containers. It also allows us to isolate dependencies and make more use of smaller components.
You cannot do this with containers alone. In Kubernetes, this is achieved using Pods.
A Pod is a group of one or more containers with shared storage and network resources, plus a specification of how to run the containers. A Pod is also a separate instance of the application. By grouping containers this way, Kubernetes removes the temptation to squeeze too many functions into a single container image.
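As a minimal sketch, a single-container Pod can be described with a manifest like the following (the name `web-pod` and the `nginx` image are arbitrary examples, not taken from the original article):

```yaml
# pod.yaml: a hypothetical Pod running one nginx container
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web        # label used later to group Pods into a Service
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
```

It can be created with `kubectl apply -f pod.yaml` and inspected with `kubectl get pods`.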
Kubernetes uses the concept of a Service to group multiple Pods that perform the same function. Services are highly configurable for purposes such as service discovery, scaling out, and load balancing.
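To illustrate, a Service that groups all Pods labeled `app: web` (a label chosen here purely as an example) might look like this:

```yaml
# service.yaml: a hypothetical Service load-balancing across matching Pods
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web        # every Pod carrying this label becomes a backend
  ports:
  - port: 80        # port exposed on the Service's cluster IP
    targetPort: 80  # port the containers actually listen on
  type: ClusterIP   # internal virtual IP, the default type
```

Other Pods in the cluster can then reach the whole group via the DNS name `web-service`.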
According to the official documentation, Kubernetes can also provide you with:
Service discovery and load balancing. Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes load-balances and distributes network traffic so that the deployment remains stable.
Storage orchestration. Kubernetes can automatically mount a storage system of your choice: local storage, public cloud providers, and more.
Automated rollouts and rollbacks.
From a description of the desired state of the deployed containers (manifests written in YAML), Kubernetes can change the actual state to the desired one. That is, creating new containers for a deployment, deleting existing containers, and redistributing their resources to new containers can all be automated in Kubernetes.
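For example, the desired state "run three replicas of this container" can be sketched as a Deployment manifest (the names and image here are illustrative assumptions):

```yaml
# deployment.yaml: desired state of three identical Pod replicas
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3                 # Kubernetes keeps exactly 3 Pods running
  selector:
    matchLabels:
      app: web
  template:                   # Pod template used to create the replicas
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

Changing `image` and re-applying the manifest triggers a rolling update; `kubectl rollout undo deployment/web-deployment` rolls it back.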
Automatic bin packing.
Kubernetes itself places containers on your nodes in a way that makes the most efficient use of resources. You only have to specify how much CPU and RAM each container requires and provide a cluster of nodes for the containers to run on.
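A sketch of such resource hints in a Pod spec (the values here are arbitrary examples):

```yaml
# pod-resources.yaml: CPU/RAM hints the scheduler uses for placement
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
  - name: web
    image: nginx:1.25
    resources:
      requests:         # what the scheduler reserves on a node
        cpu: 250m       # a quarter of a CPU core
        memory: 128Mi
      limits:           # hard caps enforced at runtime
        cpu: 500m
        memory: 256Mi
```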
Self-healing.
If something goes wrong inside the containers, Kubernetes itself restarts, replaces, and shuts down containers that fail their health checks.
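For instance, a container can declare a liveness probe; if the probe fails repeatedly, Kubernetes restarts the container (the `/healthz` endpoint is a hypothetical example):

```yaml
# pod-liveness.yaml: Kubernetes restarts the container on failed checks
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
  - name: web
    image: nginx:1.25
    livenessProbe:
      httpGet:
        path: /healthz        # assumed health endpoint of the app
        port: 80
      initialDelaySeconds: 5  # wait before the first check
      periodSeconds: 10       # check every 10 seconds
```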
Secret and configuration management.
Passwords, OAuth tokens, and SSH keys can be stored and managed by Kubernetes without rebuilding container images and without exposing sensitive information in your stack configuration.
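A minimal sketch of a Secret and a Pod consuming it as environment variables (all names and values are placeholders, not real credentials):

```yaml
# secret.yaml: credentials stored outside the container image
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:                    # plain text here; stored base64-encoded
  DB_USER: app
  DB_PASSWORD: change-me
---
apiVersion: v1
kind: Pod
metadata:
  name: db-client
spec:
  containers:
  - name: client
    image: alpine:3.19
    command: ["sleep", "3600"]
    envFrom:
    - secretRef:
        name: db-credentials   # injects DB_USER and DB_PASSWORD as env vars
```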
The figure is a visual demonstration of what is inside Kubernetes, using the example of one master node and one worker node.
The master node hosts the Kubernetes control plane (kube-apiserver, kube-scheduler, kube-controller-manager, etcd), which manages the entire Kubernetes cluster.
Each worker node contains a container runtime, kubelet, and kube-proxy.
kubelet is the primary "node agent" that runs on every node. It ensures that the containers in a Pod are up and running; it does not manage containers that were not created by Kubernetes.
kube-proxy is a daemon on each node that manages iptables rules on the host to implement Service load balancing (one of several possible implementations) and watches for Service and Endpoint changes.
A more detailed examination of the architecture and the basic concepts of Kubernetes, in theory and, most importantly, in practice, together with such interesting topics as clustering, high-load web services, DBMS administration, virtualization, containerization, and orchestration, is covered in the Administrator Linux. Advanced course…
There you can also get answers to questions such as how networking works in Kubernetes, how to publish an application, and how DNS works in Kubernetes.
And, of course, it would not be complete without such important topics as data storage, monitoring, and managing Kubernetes secrets with HashiCorp Vault.
And right now we invite everyone to a free demo lesson on the topic "The Lustre Cluster File System"… In the lesson, we will look at the architecture and components of the Lustre file system, analyze its scope and features, and answer questions about how file striping is used and what the LNET network transport layer is. In the practical part, we will install and configure the file system manually and look at an example of the Integrated Manager for Lustre (IML) graphical user interface.