Guide to Docker. From containerization basics to building your own image

Good afternoon! Today we will talk about containerization, namely the most popular technology implementing it at the moment: Docker. We will also look at vulnerabilities found in implementations of this technology.

Relevance

Observing the development of large companies, it is not hard to notice the accelerating shift from virtualization to containerization. The reason is that modern containerization technology saves on hardware: it is less resource-intensive than its competitor. For now, we will leave the question of the difference between these technologies open and return to it later. Looking at this technology in the context of information security, containerization can also improve application security, since each container operates in an isolated environment, which reduces the risk of an attack spreading to other applications or to the host system.

The main thing you need to know about Docker

Before we start talking about Docker, you should know the basic principles of containerization.
The term describing this technology makes its meaning easy to grasp. Let's start with an abstract example for easier understanding. Everyone knows what a shipping container is and what a barge that transports them looks like. Imagine that in each of the containers we create its own ecosystem: in one we simulate the North Pole with the corresponding flora and fauna, in another the tropics, and so on ad infinitum. Then we place them on a barge. It might seem that such different climates could harm the neighboring containers, but since they are completely isolated, no threats arise either for the neighbors or for the barge.
Moving on to our topic, we can draw the following analogy. Each container runs its own user-space environment with the corresponding applications, and all the dependencies necessary for the full functioning of its programs and services are configured inside it. Containers may hold systems that are completely incompatible both with each other and with the host system, yet, if everything is configured correctly, no conflicts arise, thanks to the isolation of these systems.

Now let's talk directly about Docker. Containerization technology itself originated quite a long time ago, but it was after Docker Inc. presented its product (Docker 1.0 was released in 2014) that rapid growth and widespread adoption began.

Why Docker:

  • Rollback to a previous version. You can return a container to its original state at any time, which makes testing and fixing errors easier. You can also restart the container, which sometimes solves the problem by itself.

  • Ready-made launch kit. The container holds everything the application needs to run, including the runtime environment. You can therefore write an application in this environment and be sure it will work correctly when launched on any server. Where and how the code was written does not affect how it performs when run in a container.

  • Small size. Unlike virtual machines, containers take up significantly less space and start up faster. Their size is usually only a few hundred megabytes.

  • Security. By default, the container does not have access to data on the host, which increases the security of the application.

  • Ease of management. Docker images are stored in a registry, from where they can be quickly launched. Using Docker, you can manage containers: launch, save, edit, and reload them (see the command sketch right after this list). When there are many containers, mass-management tools called orchestrators come into play; the de facto standard for Docker containers is Kubernetes. It is impossible to talk about Docker even in passing without touching on Kubernetes. Kubernetes is a tool that automates the deployment, scaling, and management of containerized applications. It provides mechanisms for managing a cluster of hosts running containers and allows containers to be grouped into managed services that can be easily scaled. Kubernetes also provides container lifecycle management, including automatic scaling, automatic restarting, and resource management.
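To illustrate the rollback and management points above, here is a minimal sketch of everyday container-management commands; the names web and my-snapshot are our own choices for the example:

docker run -d --name web nginx    # launch a container in the background
docker stop web                   # stop it
docker restart web                # reboot it, which sometimes fixes a problem
docker commit web my-snapshot     # save the current state as a new image
docker rm -f web                  # remove the container entirely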

Difference from virtual machines

A virtual machine and a container are two different approaches to creating isolated environments for running applications. They both allow you to run applications independently of each other, but do so in different ways.
A virtual machine creates an environment that simulates real hardware. Each virtual machine has its own operating system that runs on top of the hypervisor. A hypervisor is software that manages the operation of virtual machines and distributes resources between them.
A container, on the other hand, uses a common host operating system and creates isolation at the namespace and resource levels. Instead of simulating real hardware, a container uses an existing operating system and adds an abstraction layer that allows applications to run independently of each other.
The main differences between virtual machines and containers are as follows:

  1. Performance: A virtual machine emulates a complete copy of the hardware and runs its own operating system, which requires more resources and can slow applications down. Containers share the host operating system and add almost no overhead for isolation, so they are faster.

  2. Scalability: Virtual machines can be scaled vertically (increasing the resources of a single virtual machine) or horizontally (creating new virtual machines), but this requires additional resources. Containers are easy to scale horizontally, since they are lightweight and many of them can share the resources of a single host.

  3. Management: Virtual machines have their own management and require separate tools to manage each instance. Containers use a common container manager that controls all containers on the host.

Overall, the choice between virtual machines and containers depends on the specific needs of your project. If you need a fast and scalable system to run many applications, containers are the better choice.
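The resource-level isolation mentioned above can be seen directly in docker run's flags, which Docker enforces via Linux cgroups; the image name and limits below are arbitrary choices for the example:

# Cap the container's memory and CPU share
docker run -d --name limited --memory=512m --cpus=1.5 nginx
# Confirm the limits that were applied
docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' limited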

Docker Building Basics

It’s time to move on to the practical part of our material, but even here it is not so simple. To use Docker, let alone build your own images, you should understand what main components it consists of.
The Docker ecosystem consists of two types of components:

  • system components;

  • variable components.

System components include the Docker host (the Docker server), the Docker daemon, the Docker client, and Docker Compose (a manager for launching clusters of containers). A Docker host is simply a computer or virtual server on which Docker is installed. The Docker daemon is the central system component that manages all Docker processes: creating images, starting and stopping containers, downloading images. The Docker client is a utility that provides access to the Docker daemon's API.
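The client/daemon split is easy to see on any machine with Docker installed: docker version reports the versions of both sides separately, and docker info shows the general state held by the daemon:

docker version   # separate Client and Server (daemon) sections
docker info      # containers, images, storage driver, and other daemon state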

The main variable components are the Dockerfile, the Docker image, and the Docker container.
Now let's look at the variable components in detail.

Let's start with Dockerfile. A Dockerfile is a text file that contains instructions for building a Docker image. This file is used to define how the Docker image will be built, including installing dependencies, configuring the runtime, copying files, and more.
A Dockerfile consists of a series of instructions that are executed in a specific order. Some of the most common instructions include FROM (specifies the base image), RUN (executes commands while the image is being built), COPY (copies files into the image), EXPOSE (declares ports that should be open for external access), and ENTRYPOINT/CMD (define the command that will be run when the container starts).
When creating a new Docker image, the Dockerfile describes how to start from the base image and then customize and optimize it for a specific application or task. Once a Docker image is created, it can be stored on Docker Hub or in another private registry for later use.
Dockerfile example

FROM python:3.7.2-alpine3.8
# Refresh the package index and upgrade installed Alpine packages
RUN apk update && apk upgrade
# Copy the build context into /app and make it the working directory
WORKDIR /app
COPY . .
RUN mkdir /a_directory
# The command executed when a container starts from this image
CMD ["python", "./my_script.py"]

Docker image
A Docker image is a file that contains all the information necessary to create a container. This file is the basic building block of Docker and contains metadata such as the name, version, dependent packages, and the commands needed to run the application inside the container.
A Docker image can be created using a special Dockerfile that contains instructions for building the image. These instructions may include installing dependencies, configuring the runtime, copying files, and more.
When you start a new container, the required Docker image is first downloaded from Docker Hub or another private registry (unless it is already present locally) and then used to run the container. A Docker image can be thought of as a “template” for building containers that provides consistency and repeatability when deploying applications.
It is important to note that Docker images are lightweight and can be easily transferred between different servers and cloud platforms due to their standardization and portability. Additionally, Docker images help ensure security because each container runs in an isolated environment, reducing the risk of attack on other applications or the host system.
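A quick sketch of working with images from the command line; nginx here is just a familiar public image used for illustration:

docker pull nginx            # download an image from Docker Hub
docker images                # list images stored locally
docker image inspect nginx   # view the image's metadata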

Docker container
A Docker container is an isolated environment in which applications run. Docker containers are based on the Linux operating system kernel and use cgroups to control resources and namespaces to isolate processes.
Each Docker container contains the full set of dependencies and configurations required for the application to run, allowing it to be completely self-contained and independent of the environment in which it runs. This provides a high degree of portability and reliability when deploying applications.
Creating a container starts with a Docker image, which is a template for creating containers. The Docker image contains all the necessary dependencies and settings, and the Docker container is an instance of this image in which the application runs.
Docker containers can be easily cloned, scaled, and managed, making them an ideal solution for DevOps and cloud-native applications.
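To make the image-versus-container distinction concrete, here is a minimal sketch; ubuntu:18.10 matches the image used in the practice section below:

docker run -d --name demo ubuntu:18.10 sleep infinity   # a container: one instance of the image
docker exec -it demo /bin/bash                          # open a shell inside the running instance
docker rm -f demo                                       # remove the instance; the image remains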
It’s also impossible not to mention DockerHub. DockerHub is the largest Docker image registry and provides a platform for publishing, sharing, and storing Docker images. DockerHub offers free and paid accounts that provide a variety of features including private repositories, tagging, notifications, and more.
Users can upload their own Docker images to DockerHub, allowing other users to use these images for their own projects. This is especially useful for developers who want to use proven and reliable Docker images for their applications.
In addition, DockerHub also provides tools to search and view information about Docker images, such as star rating, latest updates, the team behind the image, and user reviews.
To use DockerHub, you must register and create your account. You are then able to upload your own Docker images, search and download other users' images, and follow updates and news from the Docker community.
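A sketch of publishing your own image to DockerHub; username and the image name are placeholders:

docker login                                          # authenticate with your DockerHub account
docker tag my-python-app username/my-python-app:1.0   # rename the image under your account
docker push username/my-python-app:1.0                # upload it to the registry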

Practice

  1. Download the Ubuntu 18.10 image using the pull command (it downloads an image from the registry):

docker pull ubuntu:18.10

  2. docker run <image> <an optional command that will be executed inside the container>

  3. docker ps
    A command that lets us view running containers; the -a flag shows all containers that have ever been run, not just the active ones.

  4. Based on the results of the previous commands, we can see that the -it flags give us an interactive terminal inside the container, where we can work with Docker and execute commands until we run the exit command.
    Now let's go through all the basic stages of building our own Docker image.

  1. Creating a Dockerfile (a sketch of the resulting file follows the list of instructions below):

  • FROM – selects the base image in which subsequent instructions will be executed

  • COPY – copies a file from the host system into the container (here we copy the file test into the container under the same name)

  • RUN – executes a shell command in the container during the build (in our example, we grant execute permissions on the script /test)

  • CMD – the command that is executed every time the container starts
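The original screenshots of the file are not reproduced here, so below is a minimal sketch of what such a Dockerfile might look like, assuming the script is a shell file named test in the build directory:

FROM ubuntu:18.10
# Copy the script from the host into the container under the same name
COPY test /test
# Grant execute permissions on the script
RUN chmod +x /test
# Run the script every time the container starts
CMD ["/test"]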

  2. Assembly (image creation):

During the build, you can watch how the instructions are executed line by line, so if an error occurs you know where to look for the problem in your file.
The --tag flag specifies the name of the image.

  3. Result:
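Assuming the image is named mytest (our placeholder), the full flow would look like this:

docker build --tag mytest .   # build the image from the Dockerfile in the current directory
docker run mytest             # start a container; CMD launches the test script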

Main vulnerabilities

The main vulnerabilities in Docker images can be divided into the following categories:

  1. OS Vulnerabilities: Vulnerabilities in the operating system on which Docker runs can affect the security of containers. For example, if the base Docker image contains vulnerable system packages, this could become an entry point for attackers.

  2. Dependencies: Vulnerabilities in third-party dependencies such as libraries and frameworks can be used to attack containers. For example, a vulnerability in a library used by an application in a container could allow an attacker to gain access to the container.

  3. Software Vulnerabilities: Vulnerabilities directly in the code of applications that run in containers can be used for attacks. For example, a vulnerability in a web application running in a container could allow an attacker to execute arbitrary code on the server.

  4. Dockerfile: Insecure instructions for building a Docker image can lead to vulnerabilities. For example, if the Dockerfile does not set correct file permissions, an attacker may be able to modify or delete important files in the container.

It is important to note that Docker provides tools to detect and fix vulnerabilities, such as Docker Bench for Security and Docker Security Scanning. However, Docker security ultimately depends on proper configuration and vulnerability management.
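As an illustration of the Dockerfile point, here is a minimal sketch of common hardening practices; the user name appuser and the paths are our own choices for the example:

FROM python:3.7.2-alpine3.8
# Create an unprivileged user so the application does not run as root
RUN adduser -D appuser
WORKDIR /app
# Copy files with restrictive ownership instead of leaving them root-owned
COPY --chown=appuser:appuser . .
# Drop root privileges for the runtime
USER appuser
CMD ["python", "./my_script.py"]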

To back these words up, here is a real example of how vulnerabilities were exploited at a certain company:

  • The attackers, exploiting a vulnerability in Microsoft Exchange Server, retrieved from a mailbox the password for the Bitrix administrative panel of the dev site, which had been sent in plain text.

  • The Bitrix administrative panel has a “php command line” tool with which you can execute arbitrary PHP code. Using it, and an external VPS server to catch a reverse shell, the attackers gained access to the system inside the container.

  • After that, the attackers escalated privileges using the CVE-2022-0847 vulnerability in the Linux kernel (versions from 5.8 up to, but not including, 5.16.11, 5.15.25, and 5.10.102 are vulnerable) and gained root access inside the container.

  • The container ran Apache and PHP, which required certain capabilities to work, in particular CAP_DAC_READ_SEARCH and CAP_DAC_OVERRIDE. Using them to escape the container, the attackers were able to gain access to the host disk.

  • Backups of the prod site, including hashes of administrative accounts in the md5.salt format, were stored on the host machine; an offline brute force was then performed, and steps 2-5 were repeated in the prod environment.

To sum up the above, correct configuration and basic security hygiene (starting with how passwords are handled) are the foundation for building a secure system.
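One practical takeaway from this incident is to never grant containers capabilities they do not need. A minimal sketch, with my-web-image as a placeholder name:

# Drop all capabilities, then add back only what the service genuinely requires
docker run -d --cap-drop ALL --cap-add NET_BIND_SERVICE my-web-image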

Conclusion

In this article, we covered the basics of Docker and its vulnerabilities, and discussed the importance of proper configuration and vulnerability management for container security. Docker is a powerful tool that can greatly simplify the process of deploying and managing applications, but requires proper use and constant attention to security.

