Deploying your first application with Kubernetes

With the rise of microservice architecture, there is a growing need for platforms and tools that automate complex or manual processes. For example, when we work with heavily loaded applications that have steep peaks and troughs in traffic, it would be very difficult to build the architecture without Docker containers (lighter-weight than full virtual machines), which let you split an application into parts that interact with each other. The task becomes even harder when an application is divided into smaller applications, which in turn are divided into containers, and this entire system processes millions of user requests and stores an enormous amount of data.

Vivid examples of such applications are YouTube and Google. Naturally, such services cannot be deployed on a single machine, so their architecture uses thousands of computers called worker nodes. These Nodes, as parts of a larger mechanism, can fail and then need to be brought back up; moreover, it would be good to constantly monitor each Node and read its state. Performing such tasks manually becomes very difficult. Perhaps specifically for these services, or perhaps not, Google developed Kubernetes, a technology that plays the role of a container orchestrator. Kubernetes monitors Nodes and restarts those that fail, but that is not its only task: it can also shut down unused Nodes, optimizing resource consumption.

It is also interesting that about 50% of companies and teams already use the relatively new Kubernetes technology in their projects.

Hello, colleagues! My name is Vladimir Amelin, I am the technical lead at Quillis, and today I will try to help you understand the logic of Kubernetes.

Now that few people doubt the relevance of this technology, I would like to reassure those who fear it will be very difficult. I promise that everything will be quite easy, and we will even launch our first application ourselves, even though Kubernetes is not the simplest solution to adopt. In this article we will start small:

  1. Let’s learn how Kubernetes works.

  2. Let’s look at the architecture of this technology.

  3. Let’s deploy a single-machine cluster and launch a simple application on a local computer using Kubernetes.

As for who would benefit from mastering this technology, we can confidently name DevOps engineers at any level, as well as other engineers who need to understand Kubernetes in order to interact with it on their projects.

And a few more words in favor of the technology, for extra motivation:

  1. With Kubernetes, you can build pipelines, combining the entire development process into a single system: a developer writes code and pushes it to the main branch in Git, the administrator presses a button, and the code is automatically deployed to the test environment. As a result, several new versions of the product can be released per day, which greatly simplifies and automates development.

  2. Kubernetes makes it easy to scale applications and run them anywhere. It also supports version rollbacks, load balancing, self-healing, orchestration, storage management, security reconfiguration, and more.
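To give a taste of what these features look like in practice, here are a few illustrative kubectl commands. This is only a sketch: the deployment name “my-app” and the image tag are hypothetical, and the commands assume a running cluster with such a deployment already created.

```shell
# Scale a deployment out to 5 replicas (the name "my-app" is hypothetical)
kubectl scale deployment my-app --replicas=5

# Roll out a new image version, then roll back if something goes wrong
kubectl set image deployment/my-app my-app=my-registry/my-app:v2
kubectl rollout undo deployment/my-app

# Watch the rollout converge (self-healing in action)
kubectl rollout status deployment/my-app
```

Each of these is a single declarative request: you state the desired state, and Kubernetes does the work of reaching it.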

Architecture: how everything works

  1. At the top we have the User Interface, which allows you to manage the application.

  2. A little lower on the diagram is kubectl, the command-line tool that talks to the master Nodes (the Kubernetes master cluster) and tells them what needs to be done.

  3. On the left we see the gray Master Nodes (API Server, Scheduler, Controller Manager, etcd); they make the decisions, and distribute and manage everything further down the line.

  4. Further to the right in the cluster are the Worker Nodes, i.e., the Nodes inside which the application itself is deployed.

  5. Each Worker Node also runs Docker and an agent called kubelet (not to be confused with kubectl), which controls everything on that Node.

  6. The Worker Nodes also run kube-proxy, which is responsible for the rest of the networking interaction.

This is what a proper, full-scale Kubernetes architecture looks like, but for our example we will launch a simple cluster inside one machine: a single computer. In this setup, the Master Node, the Worker Node, and kubectl will all be hosted on our local computer.

What will we do now?

  • Let’s launch a local Minikube cluster;

  • Let’s stop it;

  • Let’s deploy a simple Getting-Started application.

Installing and stopping minikube – guide

  1. Docker Desktop Application

Download, install, and run the Docker Desktop application. Everything is simple here; here is the link:

  2. Launching minikube

    2.1. Download and install minikube from the link:

    2.2. On the page that opens, select your computer’s settings, check the system requirements, free up 20 GB of disk space if you currently have less free, and click the download link.

    2.3. Copy the code for adding the environment variable and run this command in PowerShell.

    2.4. Now we start the cluster with the “minikube start” command.

    2.5. Open the Docker Desktop application window to see that a running container has appeared in it.

  3. Launching Kubectl

    3.1. Follow the link

    3.2. On the page that appears, select the OS and download kubectl.exe.

    3.3. Launch kubectl.exe through the terminal from the folder where the file was downloaded.

    3.4. Next, copy this file from the download folder to the root folder of the minikube application (usually a folder on drive C). We do this so that kubectl is available regardless of the current working directory.

    3.5. Just in case, you can double-check that everything worked by going to My Computer – Advanced System Settings – Environment Variables – Path – Edit. In the list of paths there should be a path to the folder C:\minikube. If it is not there, add it.
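Putting the steps above together, a typical first session in the terminal might look like this (a sketch, assuming Docker Desktop is already running and minikube and kubectl are on the PATH):

```shell
# Start a single-node local cluster (downloads the base image on first run)
minikube start

# Check that the cluster components (host, kubelet, apiserver) are up
minikube status

# Verify that kubectl can reach the cluster and see its single node
kubectl get nodes
```

If “kubectl get nodes” lists one node in the Ready state, the cluster is working.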

Now we have a cluster deployed, and all that remains is to deploy the application…

  4. Launching the dashboard

    Using the “minikube dashboard” command, we launch the application’s web interface through the terminal, wait for it to load, and a new tab with a dashboard will appear in the browser.

  5. Creating a Deployment

    A Deployment is a Kubernetes object that creates Pods and keeps them healthy; a Pod, in turn, is a structure that groups one or more containers. Follow the link, find the “Creating a Deployment” section, copy the code from the window, and enter it into PowerShell.

    To check, you can go to the dashboard launched in step 4 in the browser, and on the left in the Deployments tab we should see the created Deployment.

    There, in the Pods tab, you can see one created Pod.
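Instead of copying the imperative command from the tutorial, the same Deployment can also be described declaratively in a manifest file. Below is a minimal sketch (the name “hello-node” and the image follow the official Hello Minikube tutorial at the time of writing; apply it with “kubectl apply -f deployment.yaml”):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
spec:
  replicas: 1                 # the Deployment keeps exactly this many Pods healthy
  selector:
    matchLabels:
      app: hello-node
  template:                   # Pod template: what each Pod will run
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - name: hello-node
        image: registry.k8s.io/e2e-test-images/agnhost:2.39
        command: ["/agnhost", "netexec", "--http-port=8080"]
```

If a Pod dies, the Deployment notices that the actual state no longer matches this spec and starts a replacement.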

  6. Creating and launching a Service

    (This step is necessary for the application to become accessible not only via the internal IP of Kubernetes)

    6.1. Using the following command, we indicate which port the application uses so that we can then publish it externally: “kubectl expose deployment hello-node --type=LoadBalancer --port=8080”.

    Now, if we go to the Services tab, we will see an unstarted (orange circle) “hello-node” service.

    6.2. We start the service using the command “minikube service hello-node”.
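To see what the expose command actually created, you can inspect the Service from the terminal. This is a sketch; note that on minikube a LoadBalancer Service keeps its external IP in the pending state until “minikube service” opens a tunnel to it.

```shell
# List the Service created by "kubectl expose"
kubectl get services hello-node

# Inspect the port mapping and the Pod endpoints the Service routes to
kubectl describe service hello-node
```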

Congratulations! We just launched our first simple application inside Kubernetes! It can’t do much yet, but still!

  7. Cleaning up

    7.1. Now we can remove the service and deployment with the commands:

    Attention! These commands are executed in a new terminal window, since the old one is loaded by the Service startup process from section 6.2.

    7.2. Then we can stop the cluster with the “minikube stop” command.

    7.3. And we can also delete this local Kubernetes cluster with the “minikube delete” command.
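The cleanup commands from this section, gathered in one place, follow the standard kubectl and minikube pattern (a sketch for the hello-node example; run the delete commands in the new terminal window, as noted above):

```shell
# 7.1 Remove the Service and the Deployment created earlier
kubectl delete service hello-node
kubectl delete deployment hello-node

# 7.2 Stop the cluster
minikube stop

# 7.3 Delete the local cluster entirely
minikube delete
```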

The secret of getting ahead is “getting-started” (c)

In other words: “The secret of getting ahead is getting started,” a quote attributed to Mark Twain.

Now we are ready to launch a real application, for example a simple task scheduler codenamed “getting-started”.

First, let’s create an application image. It can be downloaded from Docker Hub, an open image registry. An image is essentially a ready-to-run application.

  1. Knowing the application’s name and that it is published in my Docker Hub repository, we run the download command in the terminal: “docker pull amelinvd/getting-started”.

  2. After downloading, in the Docker Desktop application, in the Images section, we should see the downloaded image “amelinvd/getting-started”.

Only a few steps remain before we launch the application, so be patient.

  1. Start the local cluster again with the “minikube start” command (this will take a little time).

  2. Then open the instructions at the link and find the “Deploy applications” section.

  3. Our task is to take the first command and modify it, replacing the application name with “getting-started” and the image path with “amelinvd/getting-started”. The result is the following command, which must be entered in the terminal: “kubectl create deployment getting-started --image=amelinvd/getting-started”.

In PowerShell we will see a message that the deployment has been created.

  4. Next, we take the second command from the instructions, also change the application name to getting-started, and change the port to 3000, which our application uses. We get the following command: “kubectl expose deployment getting-started --type=NodePort --port=3000”.

  5. Let’s check the Service that the previous command created. We take the third command from the instructions and change the application name to “getting-started”. It should look like this: “kubectl get services getting-started”.

  6. We start the service with the command “minikube service getting-started”. PowerShell will print the service URL and open it in the browser.
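For reference, the whole getting-started sequence from this section fits in a few lines (the image name comes from the author’s Docker Hub repository, as above):

```shell
minikube start
kubectl create deployment getting-started --image=amelinvd/getting-started
kubectl expose deployment getting-started --type=NodePort --port=3000
kubectl get services getting-started
minikube service getting-started   # opens the app in the default browser
```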

Now we already have a simple task scheduler application running in a standard browser!

Conclusions and recommendations

What we discussed above is the most basic example of using the Kubernetes architecture, which I have tried to explain in the simplest possible terms. Deploying more complex solutions will require far greater effort and far deeper competence. However, this example lets you grasp the logic of the solution in practice. As homework, I recommend repeating all of the steps yourself.

And if you want to master more advanced and less common technologies, subscribe to our blog and follow the updates: in my next lessons I plan to release a series of videos on real-time analytics.
