Almost everything you would like to know about Docker

In this article, we will cover basic techniques for working with Docker and introduce the reader to the basics of dockerizing an application.

It is assumed that the reader has heard something about Docker and would like to get acquainted with the technology. We will try to simplify this process.

Introduction

Docker is a platform that allows you to run applications in isolated containers. Containers provide applications with a stable and predictable environment wherever they run, whether that is a development machine, a Linux server in the cloud, or a Kubernetes cluster.

Docker ensures project repeatability and consistency. Thanks to this, developers can focus directly on developing the application without worrying about compatibility issues and environment setup.

To understand how Docker works, you need to understand its two basic units: the image and the container.

Containers

Containers are lightweight, isolated execution environments within which applications run.

Unlike virtual machines, containers share a common operating system kernel, making them less resource intensive. This allows you to run more containers on a single server compared to the number of virtual machines.

Containers provide many benefits, including:

  • Isolation. Each application (ideally) runs in its own, isolated environment within a container;

  • Repeatability. Docker ensures that a container that runs on the developer's machine will also run on the server, without any surprises;

  • Ease of delivery. Images can be easily transferred between different environments, be it local environments, test servers or cloud infrastructure.

Images

A Docker image is a static description of the contents of a container, including all the dependencies, environment settings, libraries, and binaries needed to run the application. We can say that the image is a ready-to-use template for creating containers.

Images are often created based on other images. This happens thanks to a layer system that allows you to create and save changes on top of the base image.

For example, you can take the official Go image and add your code to it, resulting in a new image ready for deployment (more details in the Dockerfile section).

Process of creating a new image (from the official Docker documentation)
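
You can look at the layers of any local image yourself with the docker history command. A quick illustration (using the ubuntu image; any image you have pulled will do):

docker pull ubuntu
docker history ubuntu    # each row is one layer and the instruction that created it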

Launching your first container with Docker

Let's assume you've already installed the Docker CLI or Docker Desktop on your system and perhaps tried to run your first hello world container with the docker run hello-world command.

Let's take a closer look at what actions this command performs:

  1. Docker looks for the hello-world image in local storage. If the image is not found, Docker will download it from Docker Hub;

  2. Next, Docker creates a container based on this image and launches it;

  3. The container runs a script that displays a welcome message and exits.

The entire described process can be observed in the terminal in which the command is executed. If everything went well, you will see a message confirming that the container was launched successfully and is working:

Hello from Docker!
This message shows that your installation appears to be working correctly.

Basic commands

Docker provides extensive capabilities for managing containers and images using CLI commands. In this section, we'll look at the basic Docker commands that will help you manage containers efficiently.

Running a container is the main action you will perform in Docker. We already launched the hello-world container in the previous section.

Now let's try to run a more complex application. For example, the official image of the Ubuntu operating system: docker run -it ubuntu bash

This command does the following:

  • -i (--interactive) means that the running container will receive standard input from the host and pipe it to the application running in the container. By default, containers start in isolation, and the stdin of the running application has no connection with the outside world;

  • -t (--tty) tells Docker to allocate a pseudo-terminal for the running application, allowing you to work with it conveniently from your own terminal;

  • ubuntu – the name of the image we are launching;

  • bash – the command to run inside the ubuntu container.

To stop a container, use the command: docker stop <container_id>

where <container_id> is the ID of the container you want to stop. You can find the container ID with the command: docker ps. It displays a list of running containers along with their IDs.

If you need to restart a container, use the docker restart command: docker restart <container_id>. The container ID can be obtained from the output of docker ps, but most commands that accept container IDs also accept container names.

To remove a container, you must first stop it and then use the rm command: docker rm <container_id>

To stop and remove a container in one go, you can use the -f (force) flag: docker rm -f <container_id>
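
Put together, a typical lifecycle session looks like this (the name my_nginx is arbitrary; without --name Docker generates a random one):

docker run -d --name my_nginx nginx    # start a container in the background (-d, detached)
docker ps                              # see its ID, name and status
docker stop my_nginx                   # stop it (the name works as well as the ID)
docker rm my_nginx                     # remove the stopped container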

Docker Image Management

To download the image without running it, you can use the pull command, for example, docker pull ubuntu. This command will download the latest Ubuntu image to local storage, but if necessary, you can specify a specific version of the image: docker pull ubuntu:20.04

To see all the images available on your computer, use the command: docker images

To remove an image, use the docker rmi (remove image) command: docker rmi <image_id>

You can get the image ID using the docker images command.
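
As a short end-to-end example:

docker pull ubuntu:20.04    # download a specific version
docker images               # list local images with their IDs and sizes
docker rmi ubuntu:20.04     # remove the image by tag (an image ID works too)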

Dockerfile and Docker images

In this section, we'll take a closer look at what Docker images are, their role in containerization, and the process of creating your own images using Dockerfile. We'll also look at Dockerfile context and multi-stage builds.

What are Docker images?

A Docker image is a lightweight, self-contained, executable package that includes everything needed to run a piece of software, including code, runtimes, libraries, and system dependencies. Docker images serve as templates for creating containers. Images are described using a Dockerfile.

A Dockerfile is a specially formatted text file containing commands to build a Docker image. These commands describe the steps required to install dependencies and configure your application, using files from the build context.

The Dockerfile context is a set of files that will be sent to the Docker daemon to build the image. This is often the directory that contains the Dockerfile itself and any other files needed for the build (mostly code).
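
To keep the context small, you can add a .dockerignore file (similar in spirit to .gitignore) that lists paths which should not be sent to the daemon. A minimal sketch for a Node.js project:

node_modules
.git
*.log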

Simple Dockerfile example

Let's look at a simple example Dockerfile for a Node.js application:

# Specify the base image
FROM node:14
# Set the working directory inside the future container
WORKDIR /app
# Copy package.json and package-lock.json to /app (./ because of WORKDIR)
COPY package*.json ./
# Install dependencies
RUN npm install
# Copy the application files (from the host (context) into the image (/app))
COPY . .
# Expose the port
EXPOSE 3000
# Run the application
CMD ["node", "server.js"]

Now you can try to build the application: docker build -t node-app:latest .

  • -t tells Docker to tag the image being built;

  • node-app – the image name;

  • latest – the tag;

  • . (dot) – the build context, in this case the current directory.

After Docker builds successfully, you can run the application: docker run node-app
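
Note that EXPOSE on its own only documents the port; it does not publish it. To reach the server from the host, map the port with -p:

docker run -p 3000:3000 node-app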

A very important note about Dockerfiles: each instruction creates its own image layer. Because of this, images can swell to enormous sizes. To prevent this from happening, there are multi-stage builds.

Multi-stage builds

A multi-stage build allows you to reduce the size of the final image by using multiple FROM instructions.

As an example, consider building a simple Go application:

# BUILD STAGE
FROM golang:1.16 AS build
WORKDIR /go/src/app
COPY . .
# CGO_ENABLED=0 gives a statically linked binary that runs on Alpine (musl, no glibc)
RUN CGO_ENABLED=0 go build -o myapp

# RUN STAGE
FROM alpine:latest
WORKDIR /root/
COPY --from=build /go/src/app/myapp .
CMD ["./myapp"]

The final image will contain only what was in the alpine image plus the myapp executable file.
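
You can see the effect by building and checking the size (the image name here is illustrative):

docker build -t myapp:multistage .
docker images myapp    # the alpine-based result weighs a few MB plus your binary,
                       # versus hundreds of MB for the full golang image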

Docker Hub, image repositories

Docker Hub is a repository that allows developers to easily share and manage container images.

With Docker Hub you can:

– Search and download public images provided by the community;
– Create and share your own images;
– Manage automatic builds and integrations with version control systems.

Docker Hub offers a huge number of public images, such as images of operating systems, databases, web servers and various applications. Using these images saves time and effort when configuring and deploying applications.

Pulling and pushing Docker images

One of the main processes of working with Docker Hub is downloading (pull) and uploading (push) images. Let's start with how to download an image from Docker Hub.

The docker pull command allows you to download the desired image to your local machine.
docker pull ubuntu:latest

This command will download the latest Ubuntu image. After downloading the image, you can start a container based on it:
docker run -it ubuntu:latest /bin/bash

To upload images, you first need to create an account on Docker Hub and log in on the command line: docker login

After successful authorization, you can upload your own image. First make sure the image is tagged with your repository name (here local_image stands for the name of your existing local image):
docker tag local_image your_dockerhub_username/repo_name:tag

Now you can push the image: docker push your_dockerhub_username/repo_name:tag

Docker Hub provides many pre-built images for popular tools that can make developing and deploying your projects much easier.

Let's look at a few of them:

  1. Alpine Linux (alpine) is a tiny Linux distribution based on BusyBox, its image is only 5 MB in size;

  2. PHP (php-cli, php-fpm) – images for the PHP interpreter; they include everything needed to develop in this language;

  3. MySQL (mysql) – needs no introduction, the well-known database;

  4. NGINX (nginx) – useful for creating a reverse proxy server;

  5. Redis (redis) is a high-performance in-memory database used for caching and session management;

  6. Node.js (node) is a JavaScript runtime required to run server-side code based on Node.js.

Moreover, almost every popular image has alpine and slim versions. The alpine variants are built on an Alpine Linux base, and the slim variants are reduced in size: as a rule, slim versions leave out build tools and are intended only for running the application.
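
For example (exact tag names differ from image to image, so these are illustrative):

docker pull node:20-alpine    # Alpine-based variant
docker pull node:20-slim      # slim (stripped-down Debian) variant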

Any image from Docker Hub can be pulled using the docker pull command. Using ready-made images reduces the time spent setting up the environment.

It's also worth noting that Docker Hub is not the only image repository.

For example, GitLab (at least the self-hosted version) offers its own container registry, which is very convenient to use in conjunction with GitLab CI.

Networks

Networking is one of the key components of containerization in Docker. Incorrectly configured container networking can lead to problems accessing your services.

Docker provides several networking drivers, the most common of which are bridge, host, and overlay.

Bridge

This network mode is the default. It creates a virtual bridge that allows containers to communicate with each other and with the host machine.

When a container starts, a virtual interface is created and connected to the bridge, providing the containers with IP addresses from a specific range. A bridge network allows you to isolate containers from other network interfaces of the host machine.

To connect a container to a network, specify the network name when starting the container using the --network flag:

docker network create --driver bridge app_network
docker run -d --network app_network --name app nginx

Host

In this mode, the container uses the network stack of the host machine. This means that the container and host share the same IP address and ports. Host networking is useful for reducing network latency, but it reduces the isolation between the container and the host.

docker run -d --network host nginx

Overlay

This mode is mainly used in clustered environments and Docker Swarm.

Overlay networks allow containers running on different physical or virtual machines to communicate with each other as if they were on the same network. This is achieved by creating a distributed network on top of the existing physical infrastructure.

docker network create --driver overlay --subnet 10.0.9.0/24 my_overlay_network

Communication between containers is a key aspect for microservice architectures and distributed systems. In Docker, you can easily set up communication between containers using the networks you create.

Once connected to the same network, containers can communicate with each other using hostnames: docker exec container2 ping container1. This is made possible by Docker's built-in DNS service.
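
A minimal demonstration (the names c1 and c2 are arbitrary; alpine is used because its busybox includes ping):

docker network create demo_net
docker run -d --name c1 --network demo_net alpine sleep 3600
docker run -d --name c2 --network demo_net alpine sleep 3600
docker exec c2 ping -c 2 c1    # c1 is resolved by Docker's embedded DNS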

To list available networks use the command: docker network ls

To disconnect a container from a network, use the command: docker network disconnect <network_name> <container_id>

To remove a network, use the command: docker network rm <network_name>

Docker Volumes and binding the container to the host file system (bind mounts)

Volumes and bind mounts are two key mechanisms for working with data in containers. They are necessary to effectively manage data, ensure its safety and availability.

Docker volumes exist to store data separately from the container. Even if the container is deleted, the data stored in the volume remains intact, which matters once the project is deployed in production.

Bind mounts are slightly different from volumes. This approach simply mounts a directory from the host onto a directory inside the container. It gives containers direct access to data on the host, which is useful for development and testing environments.

When you use bind mounts, Docker does not manage the contents of the target directory. This means that changes made to files on the host will be immediately reflected inside the container, and vice versa.

Examples of using bind mount and volume with –mount in the run command:

  • volume: type=volume,src=my_volume,target=/usr/local/data

  • bind mount: type=bind,src=/path/to/data,target=/usr/local/data

You may notice that volumes and bind mounts differ only in the type and the value of src. In the case of volumes, you specify the name of the volume, and in the case of bind mounts, you specify the path on the host that should be mounted into the container.
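
The full run commands might look like this (my_image is a placeholder for your image name):

docker run -d --mount type=volume,src=my_volume,target=/usr/local/data my_image
docker run -d --mount type=bind,src=/path/to/data,target=/usr/local/data my_image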

                          Volumes            Bind mounts
Path on host              Chosen by Docker   Specified by the developer
Creates a new volume      Yes                No
Supports volume drivers   Yes                No

The shorter -v syntax of docker run can also be used to create volumes and bind mounts. For a bind mount: docker run -d -v /path/on/host:/path/in/container my_image

For a volume: docker volume create my_volume && docker run -d -v my_volume:/data my_image

Now the data at the /data path inside the container is stored in my_volume. A volume can be detached, replaced, and much more. Once a volume has been created, there is no need to create it again.
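
Volumes also have their own set of docker volume subcommands for day-to-day management:

docker volume ls                 # list volumes
docker volume inspect my_volume  # show details, including the path on the host
docker volume rm my_volume       # remove a volume that is no longer needed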

Docker Compose

Docker Compose is a powerful tool designed to make working with multi-container applications easier. Docker Compose allows you to define and run complex multi-container applications with minimal effort. In this section, we will dive into the basics of Docker Compose and its uses.

Key features of Docker Compose include:

  • Declarative description of services, volumes and networks in yaml format;

  • Manage all services specified in the configuration file using a single utility, docker compose;

  • Container life cycle management.

Let's look at an example of a simple web application consisting of a web server and a database.

Without Docker Compose, running such an application would require running a series of commands for each container, manually setting up networks and volumes. Docker Compose allows you to automate this process by describing the project configuration in a single file.

To get started with Docker Compose, you need to create a docker-compose.yml file that describes the configuration of your application. Let's look at an example file that describes two containers: web and db.

services:
    web:
        image: nginx:latest
        ports:
            - "8000:80"
        networks:
            - app-network
    app:
        build:
            args:
                user: www-data
                uid: 33
                app_mode: development
            context: .
            dockerfile: Dockerfile
        restart: always
        image: app
        container_name: app
        working_dir: /var/www/
        volumes:
            - './:/var/www'
        networks:
            - app-network
    db:
        image: mysql:latest
        volumes:
            - 'app-db:/var/lib/mysql'
        environment:
            MYSQL_ROOT_PASSWORD: password
        networks:
            - app-network
networks:
    app-network:
        name: app-network
        driver: bridge
volumes:
    app-db:
        driver: local

Structure of docker-compose.yml
services contains a description of all services (containers) involved in the application.

web: Defines a container with the Nginx web server, reachable from the host on port 8000.
The "8000:80" syntax reads as "host_port:container_port": Docker listens on port 8000 on the host and proxies traffic to port 80 of the nginx container and back.

app: The main container with the application. It is built from the Dockerfile using the current directory as the build context (the usual dot), and bind-mounts "./" to "/var/www" inside the container.

db: Defines the MySQL database container, to which the MYSQL_ROOT_PASSWORD environment variable is passed (the mysql image requires it on first start). The database files are stored in the app-db volume so that data is not lost if the container is removed.

networks: Defines user-defined networks, in this case the app-network bridge network, used by all the containers to communicate with each other.

volumes: Defines all required volumes, in this case the volume for the database.

To start all the services described in docker-compose.yml, use the command: docker compose up (if you have an older version of Docker Compose installed, then most likely you need to run docker-compose, separated by a hyphen).

This command will build, run and link all containers described in the file. After executing the command, you will see all the logs in your stdout. By adding the -d flag, the containers will start in the background: docker compose up -d

To stop all containers and networks, use the command: docker compose down

Container management

To manage individual services, Docker Compose provides convenient commands.

  • View a list of running containers: docker compose ps

  • View all containers: docker compose ps -a

  • View application logs: docker compose logs

  • Restarting the container: docker compose restart
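
A couple of variations that come in handy (web refers to the service name from the example above):

docker compose logs -f web      # follow the logs of a single service
docker compose up -d --build    # rebuild images and restart in the background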

Conclusion

In this article, we tried to provide instructions on how to use basic techniques for working with Docker for those who are just starting to get acquainted with this technology.

Of course, there is still a lot more to tell about Docker. Write what interests you, and perhaps your comment will become the topic for our next article.
