How to set up a local Kubernetes cluster and deploy applications to it

Hello! My name is Pavel Agaletsky, and I am a lead developer in the Platform as a Service unit at Avito.

Kubernetes is one of the most popular tools for deploying applications and services. At Avito, we use it not only in production, but also as an environment for running services locally on developer machines. In this article, I discuss in detail how to set up a small Kubernetes cluster on your computer using publicly available tools and deploy simple applications.

Preparing for work: installing a virtual machine

Before starting, we need a virtual machine on the MacBook on which we will run the cluster. This can be done with a dedicated tool, Colima, which is installed with the command brew install colima. Colima can enable Kubernetes inside the machine it runs: the command colima start --kubernetes --network-address starts the machine, turns on Kubernetes, and assigns the machine a network address.

Let's launch the environment; on first start, Colima downloads the images and all the necessary components:

colima start \
--kubernetes \
--network-address

Let's find out its status:

colima status


Installing Kubectl

Kubectl is the Kubernetes command-line utility that interacts with a cluster through its API. To install it, run the command brew install kubernetes-cli.

Using the API, you can make various changes both to the cluster itself and to the services running in it. One of the main commands for this is kubectl get. It shows a list of resources of a certain type, for example namespaces. Namespaces group other resources and let you manage them as a unit, for instance by setting access rights for other entities in the cluster, including user accounts. The list of namespaces in your cluster can be obtained with the command kubectl get namespace or its short version kubectl get ns.
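For example, on a fresh cluster the list typically contains only the system namespaces (the exact AGE values will of course differ):

```shell
kubectl get ns
# NAME              STATUS   AGE
# default           Active   2m
# kube-node-lease   Active   2m
# kube-public       Active   2m
# kube-system       Active   2m
```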

Launching a Pod

A Pod is the smallest unit that can be run in Kubernetes; it consists of one or more containers. I'll show how to run a Pod inside the newly created cluster using a small Go application that consists of a single file, main.go.

The application is a simple web server that responds to requests to /hello with the message "Hello World!".

package main

import (
	"fmt"
	"log/slog"
	"net/http"
)

func main() {
	http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
		slog.Info("Received request to /hello endpoint")

		w.WriteHeader(http.StatusOK)
		fmt.Fprint(w, "Hello World!")
	})

	slog.Info("Starting server on port 8890")

	err := http.ListenAndServe(":8890", nil)
	if err != nil {
		slog.Error("Application finished with an error", "error", err)
	}
}

It also logs a message on startup and on every received request. To check it, run the application locally with the command go run main.go.

Once the application reports that it has started, you can call its API. To do this, in a separate console, run the command curl localhost:8890/hello. The application responds to the request with the message:

Now let's run the application in the cluster. To do this, you need a Dockerfile, which describes how to package the application into a container. Here are the contents of the file:

FROM golang:1.21-alpine

COPY . /app

WORKDIR /app

RUN ls -la . && \
    go install . && \
    which kubeapp

ENTRYPOINT ["/go/bin/kubeapp"]

Along with the Kubernetes cluster, Colima starts a Docker host, which can be used to build the application into a container image. I build it with the command docker build -t kubeapp . The -t flag sets the image name, and the dot at the end means that the build context is the current directory. The result can be checked with the command docker images: the image was indeed built.
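The build and verification steps from the paragraph above, as commands:

```shell
# Build the image from the current directory and tag it kubeapp
docker build -t kubeapp .

# Check that the image now exists on the Docker host
docker images kubeapp
```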

To run the application, you need to enter the command:

docker run --rm -p 8890:8890 kubeapp
  • --rm — a flag that removes the container after it exits.

  • -p 8890:8890 — maps container port 8890 to the same port on the host, where the running application will be available.

  • kubeapp — the name of the application image.

The log says that the application has started, but I'll check its API again. You can access it exactly as when running locally: curl localhost:8890/hello. This works because Colima forwards container ports to our local machine.

Now let's run the application directly in Kubernetes. For this I need a resource of type Deployment. It allows you to launch several pods at once and control their number, and it also determines what happens to the pods if they are stopped or restarted.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubeapp-example
  labels:
    app: kubeapp
spec:
  selector:
    matchLabels:
      app: kubeapp
  replicas: 2
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: kubeapp
    spec:
      containers:
        - name: app
          image: kubeapp
          imagePullPolicy: Never
          ports:
            - containerPort: 8890

To apply the deployment to Kubernetes, run the command kubectl apply -f deployment.yaml; the -f flag points at the file to apply.

The utility reports that the deployment has been created. Let's look at it with kubectl get deployment followed by the name of the newly created deployment.
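Assuming the name kubeapp-example from the manifest above, the check might look like this (READY shows how many of the desired replicas are up; AGE will differ):

```shell
kubectl get deployment kubeapp-example
# NAME              READY   UP-TO-DATE   AVAILABLE   AGE
# kubeapp-example   2/2     2            2           1m
```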

In the deployment I specified that I want two replicas of the application. I check that this is the case with the command kubectl get pods, and then look at the application's containers with docker ps.

I see running containers that are part of pods.

The main difference from the case when the application was run simply in a container is that if you delete one of the pods, it will be automatically recreated by Kubernetes.

To test this, I delete one of the pods with kubectl delete pod followed by the pod's name.

I received a message that the pod was deleted, and I check what happened in the cluster.

I see that there are still two pods, but one of them is new: Kubernetes created it to replace the one I just deleted.
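The whole check, sketched as commands (pod names are generated by Kubernetes and will differ):

```shell
# List the pods and pick one
kubectl get pods

# Delete it; the Deployment controller immediately creates a replacement
kubectl delete pod <pod name>

# The list again shows two pods, one with a new name and a fresh AGE
kubectl get pods
```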

Accessing the application and creating a Service

Now the application is running, and I want to access it through the VM's IP address. To do this:

  1. I find out the VM's IP with the command colima status.

  2. I note the IP address from the output.

  3. I run curl against that address.

The request fails, because by default the pods and their ports are not forwarded to the virtual machine.

To fix this, I tell Kubernetes to expose the deployment externally with kubectl expose deployment/<deployment name> --type=NodePort. NodePort means that the deployment becomes available on one of the virtual machine's ports.

The utility created a Service, another resource type in Kubernetes that determines how we access other resources. You can look at it with the command kubectl get service or its short version kubectl get svc.

Then I make a request to the machine again, but this time we will use the port that Kubernetes assigned for the service:
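Put together, and assuming the deployment name kubeapp-example from the manifest above, the sequence might look like this (I pass --port=8890 explicitly here, matching the container port):

```shell
# Expose the deployment on a node port
kubectl expose deployment/kubeapp-example --type=NodePort --port=8890

# Look up the node port Kubernetes assigned (shown after the colon in PORT(S))
kubectl get svc kubeapp-example

# Query the application via the VM's IP and the assigned node port
curl <vm ip>:<node port>/hello
```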

Looking at the logs inside the cluster

Now the service is available on the assigned port. You can view its logs with the command kubectl logs, which shows the logs of any pod in the Kubernetes cluster. I'll run it for one of the application pods, for example the first one, with kubectl logs followed by the pod's name:

This command also has a follow mode, which streams log lines as the pod writes them to its stdout. To enable it, add the -f flag. The command then waits for something new to be written to the log:
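Both modes as commands (the pod name will differ):

```shell
# Print the current logs of a pod
kubectl logs <pod name>

# Follow the logs as new lines are written (stop with Ctrl+C)
kubectl logs -f <pod name>
```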

We make several new requests and see that information about them is actually displayed in the log:

Changing the number of replicas of a running application

The number of replicas of an already running application can be increased or decreased with the kubectl scale command; the --replicas= flag sets the desired number of copies.

After this, I see that there are more running pods – I specified five in the command.

The number of replicas can be reduced using the same command, for example to one.
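Both directions with kubectl scale, again assuming the deployment name from the manifest:

```shell
# Scale up to five replicas
kubectl scale deployment/kubeapp-example --replicas=5
kubectl get pods   # now shows five pods

# Scale back down to one
kubectl scale deployment/kubeapp-example --replicas=1
```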

This set of tools is enough to start working with Kubernetes, and also to feel more confident when working with a real production cluster.

And in one of the following articles we will look at ways to debug applications running on Kubernetes. See you!
