DEVOXX UK Conference. Choosing a Framework: Docker Swarm, Kubernetes, or Mesos. Part 2
- Local development.
- Deployment features.
- Multi-container applications.
- Service discovery.
- Service scaling.
- Run-once jobs.
- Integration with Maven.
- Rolling updates.
- Creating a Couchbase database cluster.
As a result, you will get a clear idea of what each orchestration tool has to offer and learn how to use these platforms effectively.
Arun Gupta is a principal open-source technologist at Amazon Web Services who has been building developer communities at Sun, Oracle, Red Hat, and Couchbase for over 10 years. He has extensive experience leading cross-functional teams that develop and run marketing campaigns and programs. He led a Sun engineering team, was one of the founders of the Java EE team, and created the US branch of Devoxx4Kids. Arun Gupta has written more than 2,000 blog posts on IT topics and has given talks in more than 40 countries.
DEVOXX UK Conference. Choosing a Framework: Docker Swarm, Kubernetes, or Mesos. Part 1
The scaling concept means the ability to control the number of replicas, increasing or decreasing the number of application instances.
For example, if you want to scale the system to 6 replicas, use the command docker service scale web=6.
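The workflow can be sketched as follows (the service and image names are illustrative, and a running Swarm manager is assumed):

```shell
# Create a replicated service with 3 instances of an illustrative image,
# then scale it to 6; Swarm schedules the extra replicas across the cluster.
docker service create --name web --replicas 3 jboss/wildfly

docker service scale web=6

# Show where each of the 6 replicas is running.
docker service ps web
```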
Along with the Replicated Service concept, Docker has the concept of a Global Service. Suppose I want to run an instance of the same container on every node of the cluster, in this case a container with the Prometheus monitoring application, which is used to collect metrics about the operation of hosts. For this you run docker service create --mode=global --name=prom prom/prometheus.
As a result, the Prometheus application will be launched on every node of the cluster without exception, and if new nodes are added to the cluster, a container will automatically start on them as well. I hope the difference between a Replicated Service and a Global Service is clear; usually you start with a Replicated Service.
So, we have examined the basic concepts, or basic entities, of Docker; now let's consider the entities of Kubernetes. Kubernetes is also a kind of scheduler, a platform for container orchestration. Remember that the core job of a scheduler is knowing how to place containers on different hosts. At a higher level, orchestration means extending your capabilities to manage clusters, obtain certificates, and so on. In this sense, both Docker and Kubernetes are orchestration platforms, and both have a built-in scheduler.
Orchestration is the automated management of related entities: clusters of virtual machines or containers. Kubernetes is a collection of services that implement a container cluster and its orchestration. It does not replace Docker but significantly extends its capabilities, simplifying the management of deployment, network routing, resource consumption, load balancing, and fault tolerance of running applications.
Compared to Kubernetes, Docker is focused on working with containers and on building their images from a Dockerfile. Comparing the two, you could say that Docker manages containers, while Kubernetes manages Docker itself.
How many of you have dealt with rkt (Rocket) containers? Does anyone use it in production? Only one person in the hall raised a hand, which is a typical picture. It is an alternative to Docker that still has not taken root in the developer community.
So, the main entity in Kubernetes is the Pod. It is a related group of containers that share a namespace, storage, and IP address. All containers in a pod communicate with each other through localhost. This means that you cannot place an application and its database in the same pod; they must go into different pods, because they have different scaling requirements.
Thus, you can place in one pod, for example, a WildFly container together with a logging container, a proxy container, or a cache container, and you must think carefully about which components you compose together, because you are going to scale them as a unit.
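As an illustration, a hypothetical Pod that co-locates a WildFly application server with a cache sidecar might be declared like this (the names and images are assumptions, not from the talk):

```yaml
# Two containers in one Pod: they share the Pod's network namespace,
# so they reach each other via localhost, and they scale together.
apiVersion: v1
kind: Pod
metadata:
  name: wildfly-pod
  labels:
    app: wildfly
spec:
  containers:
  - name: wildfly            # the application server
    image: jboss/wildfly
    ports:
    - containerPort: 8080
  - name: cache              # a cache sidecar, reachable at localhost:6379
    image: redis
    ports:
    - containerPort: 6379
```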
Usually you wrap your pod in a ReplicaSet, because you want to run a certain number of its instances. The ReplicaSet plays the same role as the Docker scaling service: it specifies how many replicas should run, and when and how to start them.
Pods are similar to containers in the sense that if a pod fails on one host, it is restarted on a different host with a different IP address. As a Java developer, you know that when you create a Java application and it communicates with a database, you cannot rely on a dynamic IP address. For this, Kubernetes has the Service: a component that publishes an application as a network service, creating a permanent network name for a set of pods while load-balancing between them. You can say this is the service name of the database; the Java application does not rely on an IP address but interacts only with the database's constant name.
This works because each Pod carries specific Labels, which are stored in the distributed etcd store, and the Service watches these labels, providing the link between components. That is, pods and services interact stably with each other through these labels.
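A minimal sketch of such a Service, assuming the pods carry an illustrative label app=wildfly:

```yaml
# The Service gives a stable DNS name ("wildfly") and virtual IP
# in front of every Pod whose labels match the selector,
# load-balancing TCP traffic between them.
apiVersion: v1
kind: Service
metadata:
  name: wildfly
spec:
  selector:
    app: wildfly        # matches the Pods' labels stored in etcd
  ports:
  - protocol: TCP
    port: 8080          # port exposed by the Service
    targetPort: 8080    # port the containers listen on
```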
Now let's look at how to create a Kubernetes cluster. As in Docker, we need a master node and worker nodes. A node in a cluster is usually a physical or virtual machine. As in Docker, the master is the central control structure that lets you control the entire cluster through the scheduler and the controller manager. By default there is a single master node, but there are many new tools that allow you to create multiple masters.
The master node handles user interaction through the API server and contains the distributed etcd store, which holds the cluster configuration, the state of its objects, and metadata.
Worker nodes are designed exclusively for running containers; for this they have two Kubernetes services installed: a network router, the kube-proxy, and a scheduler agent, the kubelet. While these nodes are running, Docker on them is supervised by systemd (CentOS) or monit (Debian), depending on which operating system you are using.
Let's consider the Kubernetes architecture more broadly. We have a Master, which includes the API server (handling pods, services, and so on), managed using the kubectl CLI. Kubectl allows you to create Kubernetes resources: it sends the API server commands such as "create a pod", "create a service", "create a replica set".
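Sketched as kubectl invocations (the file and resource names are illustrative, and a running cluster is assumed):

```shell
# Each command becomes a REST call to the API server.
kubectl create -f pod.yaml          # "create a pod" from a manifest
kubectl create -f replicaset.yaml   # "create a replica set"

# "create a service" in front of a pod (pod name is illustrative):
kubectl expose pod wildfly-pod --port=8080 --name=wildfly

# Query the API server for the resulting state.
kubectl get pods
```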
Then there are the Scheduler, the Controller Manager, and the etcd store. The controller manager, following the API server's instructions, matches replica labels to pod labels, ensuring stable interaction between components. The scheduler, given the task of creating a pod, scans the worker nodes and creates it where appropriate; naturally, it gets this information from etcd.
Next, we have several worker nodes, and the API server communicates with the kubelet agents on them, telling them how pods should be created. Here too is the proxy, which gives you access to an application running in these pods. The client is shown on the right of the slide: an Internet request goes to the load balancer, which turns to the proxy, which distributes the request among the pods and sends the response back.
The final slide depicts the Kubernetes cluster and shows how all its components work together.
Let's talk in more detail about Service Discovery and the Docker load balancer. When you launch a Java application, it usually runs in multiple containers on multiple hosts. Docker Compose is the component that makes it easy to run multi-container applications: it describes them and launches them using one or more YAML configuration files.
By default, these are the docker-compose.yaml and docker-compose.override.yaml files; multiple files are specified with -f. In the first file you describe the services, images, replicas, tags, and so on. The second file is used to override the configuration. After creating docker-compose.yaml, you deploy it to the multi-host cluster that Docker Swarm previously created. You can create one base docker-compose.yaml configuration file and supplement it with task-specific configuration files that specify particular ports, images, and so on; we will talk about this later.
On this slide, you see a simple example of a Service Discovery compose file. The first line specifies the version, and line 2 shows that it concerns the db and web services.
I want my web service to communicate with the db service once it is up. These are simple Java applications deployed in WildFly containers. In line 11, I set the environment variable COUCHBASE_URI=db, which means that my web service uses this database. Line 4 specifies the couchbase image, and lines 5-9 and 15-16 list the ports needed for my services to operate.
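The slide itself is not reproduced here, but a minimal compose file consistent with this description might look like the following (the web image name and the exact port numbers are assumptions):

```yaml
version: "3"
services:
  db:
    image: couchbase                # the Couchbase image from the slide
    ports:
      - "8091-8093:8091-8093"       # Couchbase admin and REST ports
      - "11210:11210"               # Couchbase data port
  web:
    image: example/wildfly-javaee   # illustrative WildFly app image
    environment:
      - COUCHBASE_URI=db            # the app finds the db by service name
    depends_on:
      - db                          # container-level start order only
    ports:
      - "8080:8080"
```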
The key to understanding service discovery is that you create a kind of dependency. You specify that the db container should start before the web container, but this works only at the container level. How your application reacts and how it starts are entirely different matters. For example, a container usually comes up in 3-4 seconds, while starting a database container takes much longer. So the startup logic should be baked into your Java application: it must ping the database to make sure it is ready. Since Couchbase exposes a REST API, you call this API and ask: "Hey, are you ready? If so, I'm ready to send you requests!"
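The ping-until-ready logic can be sketched as a small shell helper (the function name, retry budget, and the Couchbase URL are illustrative):

```shell
# wait_for retries the given readiness check command until it succeeds
# or the retry budget is exhausted.
wait_for() {
  retries=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    if [ "$i" -ge "$retries" ]; then
      return 1          # the database never became ready
    fi
    sleep 1             # back off before the next probe
  done
  return 0              # check succeeded: safe to send requests
}

# Hypothetical usage against the Couchbase REST API:
# wait_for 30 curl -sf http://db:8091/pools
```

In a real application the same loop lives in the Java code, but the idea is identical: poll a cheap health endpoint before opening connections.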
Thus, dependencies at the container level are defined with Docker Compose, while at the application level dependencies and readiness are determined by polling. Then you take the docker-compose.yaml file and deploy it to the multi-host Docker cluster with the command docker stack deploy --compose-file=docker-compose.yaml webapp. You end up with a large stack containing several services that solve several tasks; essentially, these are the tasks of launching containers.
Consider how the load balancer works. In the example above, using the docker service create command, I created a service, a WildFly container, specifying the port mapping 8080:8080. This means that port 8080 on the host, the local machine, is bound to port 8080 inside the container, so you can reach the application at localhost:8080. This port is exposed on all worker nodes.
Remember that the load balancer is host-oriented, not container-oriented. It uses port 8080 on every host, regardless of whether a container is running on that host, because a container may run on one host now and, after its task is rescheduled, be moved to another host.
So, client requests are received by the load balancer, which redirects them to any of the hosts; if, following the IP address table, it lands on a host where the container is not running, it automatically redirects the request to a host where the container is running.
A single extra hop is cheap, and the mechanism is completely seamless when scaling your services up or down. Thanks to this, you can be sure that your request will end up on a host where the container is actually running.
Now let's look at how Service Discovery works in Kubernetes. As I said, a Service is an abstraction over a set of pods with a single IP address and port number, plus a simple TCP/UDP load balancer. The following slide shows the Service Discovery configuration file.
Resources such as pods, services, and replica sets are created from the configuration file. You can see it is divided into three parts by lines 17 and 37, which consist only of ---.
Let's look at line 39 first: it says kind: ReplicaSet, that is, what we are creating. Lines 40-43 contain metadata, and line 44 begins the specification of our replica set. Line 45 says I have 1 replica; its labels are listed below, in this case name: wildfly. Lower still, starting at line 50, it specifies the pod in which this replica runs, wildfly-rs-pod, and lines 53-58 contain the specification of its container.
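Reconstructed from this description, that third part of the file might look roughly like this (the apiVersion and the container image are assumptions):

```yaml
# Sketch of the ReplicaSet section described above.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: wildfly-rs
spec:
  replicas: 1                 # one replica, as on line 45 of the slide
  selector:
    matchLabels:
      name: wildfly           # the label the ReplicaSet manages
  template:
    metadata:
      name: wildfly-rs-pod    # the pod the replica runs (line 50)
      labels:
        name: wildfly
    spec:
      containers:
      - name: wildfly         # container spec (lines 53-58 of the slide)
        image: jboss/wildfly
        ports:
        - containerPort: 8080
```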
To be continued very soon …