Kubernetes through trial and error: rolling it out at a university


For a long time at our “technical” university, deploying a new application meant delivering an archive with the program code to a virtual machine over FTP and running it by hand. Each team had its own virtual machine for every system, so resources were spent far from optimally. A temporary fix was to create a single virtual machine for all the small projects, but administering it fell on developers who had no desire to do that work.

We had been eyeing Kubernetes for two years. We read various articles and tried to deploy it, but after deploying it we did not understand what to do next. Until one day we decided to try wrapping one of the systems in a container. Docker Swarm was chosen for orchestration because it is simpler, and that is where the first problem appeared: the chosen system had authorization (we used ADFS), and Docker Swarm had trouble keeping the user session once there was more than one container. The current session was not preserved, and refreshing the page started a new one. The search for solutions kept leading to the same answer: we needed Kubernetes with an Ingress controller that supports sticky sessions. When choosing a distribution, we settled on “vanilla” k8s.
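For reference, cookie-based session affinity in ingress-nginx is enabled with a couple of annotations on the Ingress. A minimal sketch, where the host, service name and cookie lifetime are placeholders rather than our actual system:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app                     # hypothetical application
  annotations:
    # Cookie-based affinity: keep the user on the same pod between requests
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.edu             # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app
                port:
                  number: 80

This per-user affinity is exactly what Docker Swarm's built-in routing mesh could not give us at the time.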

Having installed Kubernetes once again, we began looking for a way to deliver our container into it. Images were built on a separate virtual machine and pushed to a local Docker Registry, and a GitLab Runner on the master node was used to deploy the container to Kubernetes. Not the best solution, but we did not yet have the competence for anything better. Once the Deployment was up, the next question arose: how do we expose the application outside the cluster? Since this was a bare-metal setup, the first Google query suggested MetalLB. Had we known then that ingress-nginx could simply run with hostNetwork: true, it would have saved us a month of experiments with MetalLB, which we could have skipped entirely. For MetalLB we used the L2 configuration, with a pool of virtual addresses visible only inside the cluster. How to get traffic out? Naturally, by installing Nginx on the master and writing the virtual addresses into /etc/hosts so that Nginx could see them. Luckily, even then there was a nagging feeling that this was somehow wrong.
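For the record, the whole MetalLB L2 setup of that period boiled down to one ConfigMap with an address pool. A sketch with a made-up range (newer MetalLB releases configure this through CRDs instead):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # hypothetical pool; ours was not routable outside the cluster

For LoadBalancer addresses to actually be reachable from the campus network, the pool has to come from the same L2 segment the nodes live on, which is precisely what our pool of internal addresses did not do, hence the Nginx-on-the-master workaround.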

Once we learned about hostNetwork, we brought up a cluster of three masters and three worker nodes without MetalLB. True, the cluster was initialized under an ordinary user rather than root, and for GitLab Runner to see the cluster it had to be added to that user's group. And again, all the fault tolerance falls apart when every deployment goes through a single master. So the search for a better solution continued.
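In manifest terms, the hostNetwork trick is just two fields on the controller's pod spec. An excerpt from an ingress-nginx DaemonSet, where the names and image version are illustrative and the rest of the upstream manifest stays as is:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
  template:
    metadata:
      labels:
        app.kubernetes.io/component: controller
    spec:
      hostNetwork: true                    # the controller listens on ports 80/443 of every node it runs on
      dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS resolution working together with hostNetwork
      containers:
        - name: controller
          image: registry.k8s.io/ingress-nginx/controller:v1.9.4   # version is illustrative
          ports:
            - containerPort: 80
            - containerPort: 443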

The final installation is deployed with kubeadm. The cluster consists of 3 masters and 3 workers; the network plugin is Weave Net, and the Ingress controller is ingress-nginx deployed as a DaemonSet so the system survives the failure of a single worker node. Applications in the cluster run stateless only, keeping their data on PV volumes created automatically on the storage server via an NFS provisioner, in separate databases (MS SQL, MySQL, PostgreSQL), and in MinIO (S3). CI/CD is built on GitLab: Docker images are built on a separate VM and deployed via gitlab-agent and gitlab-runner installed in the Kubernetes cluster.
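A sketch of the storage side: the NFS provisioner exposes a StorageClass, and each stateless application simply claims a volume from it. The provisioner name below follows the upstream nfs-subdir-external-provisioner example and, like the claim itself, is an assumption rather than the exact production values:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner   # set when the provisioner is deployed
parameters:
  archiveOnDelete: "false"          # do not keep an archived copy when a PVC is deleted
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-app-data            # hypothetical claim used by one of the systems
spec:
  accessModes:
    - ReadWriteMany                 # NFS lets the same volume be mounted by several pods
  storageClassName: nfs-client
  resources:
    requests:
      storage: 5Gi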

Final concept
So that each deployment triggers a rolling update, the pipeline ID is appended to the image tag and the image reference in the Deployment is changed via sed.
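Roughly, the deploy job looks like the fragment below (a .gitlab-ci.yml sketch; the registry path, manifest file name and runner tag are placeholders):

deploy:
  stage: deploy
  tags: [k8s]                        # hypothetical tag of the runner inside the cluster
  script:
    # Put the freshly built image tag into the manifest so the pod spec actually changes
    - sed -i "s|registry.example.edu/example-app:.*|registry.example.edu/example-app:${CI_PIPELINE_ID}|" deployment.yaml
    # Applying the changed Deployment makes Kubernetes perform a rolling update
    - kubectl apply -f deployment.yaml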
Prometheus is used for monitoring.
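Whether Prometheus runs under the Prometheus Operator here is an assumption; if it does, scraping one of the applications comes down to a ServiceMonitor like this sketch:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app                # hypothetical application
  labels:
    release: prometheus            # must match the operator's serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
    - port: http                   # named Service port that exposes /metrics
      path: /metrics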
The cluster currently runs 10 systems written in-house and 6 auxiliary ones.

Using only part of what Kubernetes offers, we saved server resources by moving some of the systems from virtual machines into containers. We got fault tolerance out of the box and a simple deployment path, where all you need is to build the Docker image correctly and check that it starts. Kubernetes is the future.
