This is the first article in a series introducing Docker. If you haven’t worked with Docker before, we will tell you what it is.
What is Docker?
Docker is a DevOps tool for containerizing services and processes … Wait … Wait … Wait! What is DevOps? What is containerization? What services and processes can I “containerize”? Let’s start from the very beginning.
DevOps can be understood as a concept that brings development and operations teams together. Simply put, developers are the people who write the code and build the application, while administrators are the engineers responsible for delivering the application, allocating resources for it, backing up data, checking quality, monitoring, and so on. A DevOps engineer, then, is a specialist who builds a bridge between the two.
A container is nothing more than a process that runs in isolation in the operating system. It has its own network, its own file system, and dedicated memory. You might be thinking: why not just use a virtual machine? Well, a virtual machine is a separate OS loaded with many other processes that you may never need. Instead of virtualizing an entire operating system to run one service, you can virtualize just the service; more precisely, you can create a lightweight virtual environment for a single service. These services can be Nginx servers, Node.js or Angular applications. And Docker helps us with that.
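As a small sketch of what this looks like in practice (assuming Docker is installed and its daemon is running; the container name and port numbers here are arbitrary examples):

```shell
# Start an Nginx web server as an isolated container process.
# -d runs it in the background; -p maps host port 8080 to container port 80.
docker run -d --name my-nginx -p 8080:80 nginx

# The server is now reachable on the host at http://localhost:8080,
# while the nginx process runs with its own network and file system.
```

One command gives you a running, isolated service without installing Nginx on the host itself.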
The name Docker comes from the word dock. A dock is used for loading and unloading cargo from ships. Here we can draw a simple analogy: the cargo containers are our containers, and the ship is our operating system. The goods in one cargo container are isolated from the goods in other containers and from the ship itself. Likewise, in Docker, the process of one container is isolated from the processes of other containers and from the operating system itself.
How containerization works
Docker builds on Linux Containers (LXC) technology and Linux kernel mechanisms. Since a Docker container does not have its own operating system, it relies on the host operating system. A container created on Linux can run on any Linux distribution, but cannot run on Windows, and the same goes for an image created on Windows. Docker extends the capabilities of LXC using two kernel features: control groups (cgroups), which allow the host kernel to limit and partition resource usage (CPU, memory, disk I/O, networking, etc.), and namespaces, which isolate what a container can see, such as its processes, network stack, and mount points.
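A quick way to see that a container shares the host kernel rather than bringing its own (a sketch that assumes a running Docker daemon):

```shell
# The kernel version reported inside an Alpine container...
docker run --rm alpine uname -r

# ...matches the kernel version reported on the host:
uname -r
```

The two commands print the same kernel version: only user space differs between the container and the host, which is exactly why a Linux container cannot run on a Windows kernel.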
How do I create a Docker container?
To create a Docker container, we first need to create an archive containing all the files and dependencies required for our project. This archive is called a Docker image. It is important to remember that once a Docker image is created, it cannot be changed or modified.
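As a minimal sketch of what goes into an image, a hypothetical Node.js project could be described by a Dockerfile like this (the file names and the base image tag are illustrative assumptions, not part of any real project):

```dockerfile
# Base the image on an official Node.js image
FROM node:18-alpine

# Copy the project files and install its dependencies
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .

# The command the container will run on start
CMD ["node", "server.js"]
```

Running `docker build -t my-app:1.0 .` in the project directory would then package everything into an immutable image.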
A large number of ready-made images can be found on Docker Hub, a public Docker registry that lets you share your own images or use images made by other people. You can also create your own images and push them to a private registry (for example, Harbor). Such an image is then used to create containers: the same image can be used to create one or more containers using the Docker CLI. What is the Docker CLI, you ask?
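For instance, pulling a ready-made image from Docker Hub and creating several containers from it might look like this (the container names and the private registry address are placeholders):

```shell
# Download the official nginx image from Docker Hub
docker pull nginx

# Create two independent containers from the same image
docker run -d --name web1 nginx
docker run -d --name web2 nginx

# Pushing your own image to a private registry follows the same pattern
docker tag nginx registry.example.com/my-team/nginx:1.0
docker push registry.example.com/my-team/nginx:1.0
```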
Let’s take a look at the Docker architecture.
The Docker daemon listens for Docker API requests and manages all Docker objects such as images, containers, networks, and volumes. It is the main Docker service, required for containers and other Docker components to run. If the Docker daemon stops working, all running containers stop with it.
The Docker daemon also provides a REST API. Various tools can use it to interact with the daemon, and you can build your own application on top of the Docker REST API.
The Docker CLI is a command-line tool that communicates with the Docker daemon via this REST API.
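To illustrate that the CLI is just one client of the REST API, the same version information can be requested either through the CLI or directly from the daemon’s Unix socket (a sketch assuming the default socket path `/var/run/docker.sock` and that `curl` is available):

```shell
# Ask the daemon for its version through the CLI...
docker version

# ...or through the REST API directly over the Unix socket
curl --unix-socket /var/run/docker.sock http://localhost/version
```

Both commands end up talking to the same daemon; the CLI simply wraps the API in a friendlier interface.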
Docker provides several networking modes. You can read more about how container networking works in our article “A container network is not difficult”.
Host networks – the container uses the host’s network stack directly, so it is not isolated from the network point of view. This does not affect the container’s isolation in other respects, such as processes and the file system.
Bridge networks – allow you to isolate your applications at the network level, while still letting them communicate with each other and receive traffic from the outside if port forwarding is enabled.
Overlay networks – connect multiple Docker daemons together and allow Docker Swarm services to communicate with each other. Docker Swarm (an analog of Kubernetes) can be used if you have multiple Docker servers.
Macvlan networks – Allows you to assign a MAC address to the container so that it appears as a physical device on your network.
None – There is no network, so you will not be able to connect to the container.
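As a small sketch of the bridge mode described above (the network and container names are arbitrary), two containers on the same user-defined bridge network can reach each other by name, and publishing a port lets outside traffic in:

```shell
# Create a user-defined bridge network
docker network create my-net

# Start a web server attached to it, then reach it by name from another container
docker run -d --name web --network my-net nginx
docker run --rm --network my-net alpine ping -c 1 web

# Publish a port so traffic from outside the bridge network can reach the server
docker run -d --name public-web --network my-net -p 8080:80 nginx
```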
Still, why use Docker?
As a developer, you can easily package your project with all its dependencies and files, create an image from it, and be sure that it will work on any Linux distribution.
Your application is easy to deploy: since a Docker image created on Linux works on any Linux distribution, the startup procedure also does not depend on the choice of distribution.
You can limit the resources consumed by a container, such as CPU and memory; this lets you run more containers on one server.
The ability to run multiple containers from one image saves disk space, both locally and in the image registry.
You can write a script that will monitor the state of the container and automatically start a new one when problems arise.
You can hand your image over to colleagues on the testing team; they can create multiple instances of the application (containers) from it and run the necessary tests.
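The resource limits mentioned above are set with flags on `docker run`; a sketch with arbitrary example values:

```shell
# Cap the container at half a CPU core and 256 MB of RAM
docker run -d --name limited-nginx --cpus="0.5" --memory="256m" nginx

# Verify the limits that the kernel will enforce for this container
docker inspect --format '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}' limited-nginx
```

Under the hood these flags translate into the cgroup settings described earlier, so the host kernel itself enforces the limits.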
Instead of a conclusion
This is only the first article in a series of introduction to Docker. In the following articles, we’ll cover working with the Docker command line and creating your own images.