Getting Started with Docker

Prod with docker. Photo in color

Experienced captains, be warned: this one is for beginners.

This article is intended for beginners: an attempt to answer, in the simplest possible terms, the questions that are either hard to google or usually explained in overly complex language. So please judge the article by how easy it is to follow, as if you had only just started learning docker. That said, I welcome any criticism!

What is containerization?

To understand what a container is, it is first worth looking at the concept of an image.
An image is a template from which a container is created. It can even hold an entire operating system! Images are what you download from the well-known Docker Hub. Images can be created (how to do this is described below), deleted, and even layered on top of each other (this is what happens when images are built), but an existing image cannot be edited in any way (you can compare it to a disk image; they are essentially the same idea). Images are stored in registries and marked with tags; an image without an explicit tag gets the tag latest by default. The image name format is:

registry/username/repository:tag
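The naming scheme can be made concrete with a small Python sketch (a hypothetical helper, not part of Docker; it ignores registry hosts and digests):

```python
def parse_image_reference(name: str):
    """Split a Docker-style image reference into (repository, tag).

    Minimal sketch: a missing tag defaults to latest, and official
    images implicitly live under the library/ namespace.
    """
    if ":" in name:
        repo, _, tag = name.rpartition(":")
    else:
        repo, tag = name, "latest"        # no tag given: docker assumes latest
    if "/" not in repo:
        repo = "library/" + repo          # official images live under library/
    return repo, tag

print(parse_image_reference("python:3.8-alpine"))  # ('library/python', '3.8-alpine')
print(parse_image_reference("hello-world"))        # ('library/hello-world', 'latest')
```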


For now, this knowledge is enough to get one step closer to container technology!

A container is an isolated environment with its own file system, settings, and utilities. Simply put, it is your packaged application, ready to run.

And why all this?

It is worth starting with the fact that a container is an isolated environment, and therefore safe.
You can put whatever tools you need into your image and not worry that the server where your application will be deployed has node.js version 8 instead of 16.
An application should run the same way for every developer on the team, as well as on the test and production environments. This leads to another advantage: a correctly built image will run anywhere Docker runs.

At first these may look like the same thing as virtual machines. But no. A container, unlike a virtual machine, uses the host's resources directly rather than virtualizing them, which makes it significantly faster than virtualization. In addition, docker is easy to scale: just run several containers.

Docker also has a client-server architecture, which means you can develop on one machine, and build and run on any other.

Docker installation and first container

Docker, like any other program, requires installation. You can download it from the official site. Windows and macOS users only need to install the Docker Desktop application; the site also provides instructions for the various Linux distributions.

After you have installed docker, it becomes available as a command-line utility (Windows and macOS users also get a graphical interface, but within this course we will only work with the CLI; once you get used to it, you will find it easier and faster).

Now you can run the first image! Open a terminal and enter the command there:

docker run hello-world

After that, the magic will happen (for now) and docker will display the following to you:

hello world in docker

And now let’s analyze what happened here (essentially a retelling of the terminal output, with a few additions):

  • The docker client connected to the docker server (daemon) and gave it a set of instructions

  • The daemon looked for the image locally and could not find it, so it downloaded it from a registry under the name library (this is the Docker Hub account that holds the official images, which are used by default)

  • The daemon took the image from the registry (automatically executing the pull command for you) with the latest tag (the tag that is downloaded by default)

  • The daemon created the container from the image, started it, and attached the container’s output to the client

Now the magic is not quite magic anymore, so let’s try to write something ourselves!

Building your first image

Examples can be found in the github repo:

To build our image, we need only one file: a Dockerfile. I will show how it works with a simple python + flask website that returns hello world when opened.

First, let’s write a small application. Create a file named app.py and enter the following code (this is a Docker course, and as part of it I will not delve into python; simply copy-pasting the code is enough)

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return '<h1 style="color: #003f8c"> Hello Docker world! </h1>'

You also need to specify the dependencies. Create a requirements.txt file with the following content (flask for the application itself and gunicorn for the web server we will start later):

flask
gunicorn

You most likely won’t be able to run the project because you don’t have python and the pip package manager on your machine. Docker comes to the rescue.

Let’s create the Dockerfile from which Docker will build the image. Every Dockerfile starts with the FROM directive, which specifies the base image.

Useless but helpful

After the term “base image”, a question may arise: are images really immutable? Exactly! When building an image, at each stage docker creates a container from the image produced by the previous directive, executes the next command in it, and saves the result as a new image, and so on until the finished image comes out. This is one of the build caching mechanisms: if you modify the Dockerfile, the directives before the changed lines are taken from the cache. So it is good practice to copy files and install things as late as possible, and to add new directives at the end of the file.
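As a toy model of this caching (an illustrative sketch, not Docker's actual implementation): each step's result is keyed by the previous layer plus the directive text, so changing an early line invalidates every layer after it.

```python
import hashlib

def layer_id(parent: str, directive: str) -> str:
    # A layer's identity depends on its parent layer and the directive text
    return hashlib.sha256(f"{parent}|{directive}".encode()).hexdigest()[:12]

def build(dockerfile, cache):
    """Simulate a cached build: reuse a layer when parent+directive match."""
    layers, parent = [], "scratch"
    for directive in dockerfile:
        key = (parent, directive)
        if key not in cache:              # cache miss: "execute" and store
            cache[key] = layer_id(parent, directive)
        layer = cache[key]                # cache hit: reuse as-is
        layers.append(layer)
        parent = layer
    return layers

cache = {}
first = build(["FROM python:3.8-alpine", "COPY . .", "RUN pip install -r requirements.txt"], cache)
# Change only the middle line: the FROM layer is reused, everything after it is rebuilt
second = build(["FROM python:3.8-alpine", "COPY src .", "RUN pip install -r requirements.txt"], cache)
print(first[0] == second[0])  # True
print(first[2] == second[2])  # False
```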

I want to use a python 3.8 image running on Alpine Linux. There is such an image on Docker Hub, called python:3.8-alpine. It already has everything I need: the python interpreter of the right version and the pip package manager. Let’s write the first directive:

FROM python:3.8-alpine

After that, I will add the code and dependency files with the COPY directive, which copies my files into the image (the dots, as usual, denote the current directory: the first is the source in the build context, the second is the destination inside the image).

COPY . .

Now we need to install the dependencies with pip. The RUN directive is perfect for this task: it executes any command you like inside the container (as long as the command is available there).

RUN pip install -r ./requirements.txt

Now we should explain to docker how to interact with the container. Let’s make it start the web server when the container starts, using the directive CMD ["command", "argument1", "argument2"], which tells docker what to run after the container has started.

CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
Useless but helpful

The CMD directive can be omitted; then the command to execute in the container will have to be specified manually when running it. By the way, if you specify a command manually, it overrides the one given in CMD.
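Putting the four directives together, the complete Dockerfile assembled above looks like this (the 0.0.0.0:8000 bind address assumes the container should serve on port 8000, matching the port mapping used below):

```dockerfile
# Base image: python 3.8 on Alpine Linux
FROM python:3.8-alpine

# Copy the build context into the image
COPY . .

# Install the dependencies listed in requirements.txt
RUN pip install -r ./requirements.txt

# Start the web server when the container starts
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
```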

That’s all – our Dockerfile is ready to be turned into an image! Let’s build it:

docker build . -t my-first-image

The docker build command builds images. The dot after it is the build path (everything inside it becomes the build context). On launch, the docker client passes the context to the daemon, which follows the Dockerfile directives to build the image. The -t argument sets the image name. You can do without it, but then you will have to run the image by its id (which you can find by entering the command docker images – docker will list the images).

Later, such an image can be given a name with the docker tag command (it also lets you rename an image or change the repository it belongs to), while the image under the old name will remain (the same applies to renaming and changing the repository).
After docker builds without errors, the image is stored in the local registry, from where it can be run with the command:

docker run -it -p 8000:8000 my-first-image

We are already familiar with docker run – this command runs images, but new arguments have appeared. Let’s break them down:

  • -it is actually two arguments, -i and -t. Docker allows you to combine consecutive single-letter arguments that take no parameters, omitting the space and the extra dash

  • The -i argument means that the docker client will connect your input to the container

  • The -t argument means that a pseudo-terminal will be allocated for the container

  • The -p argument, in the form host_port:container_port/protocol, forwards (publishes) ports. If no protocol is specified, TCP is used. If you want to map multiple ports, specify the -p argument multiple times. For instance:

docker run -it -p "8000:80/tcp" -p "5000:5000/udp" my-first-image
  • Maps local port 8000 to container port 80 over the tcp protocol, and local port 5000 to container port 5000 over udp
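To make the host_port:container_port/protocol structure concrete, here is a tiny, hypothetical parser (not part of Docker, just an illustration of the format):

```python
def parse_port_mapping(spec: str):
    """Split a -p specification into (host_port, container_port, protocol)."""
    ports, _, proto = spec.partition("/")
    host, _, container = ports.partition(":")
    return int(host), int(container), proto or "tcp"  # TCP is the default

print(parse_port_mapping("8000:80/tcp"))   # (8000, 80, 'tcp')
print(parse_port_mapping("5000:5000/udp")) # (5000, 5000, 'udp')
print(parse_port_mapping("8000:8000"))     # (8000, 8000, 'tcp')
```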

So we launched the container. Let’s open localhost:8000 and admire the result in the terminal)

Here’s the Gunicorn! Rainbows are not enough

It works! So everything was done correctly, and we can try opening localhost:8000/

Yes, how many henlo Words of yours can be ...

It opened! What does that mean? It means that you, the reader, did great: you got through this article, followed all the steps, and deployed your first application inside docker!

Docker commands we have mastered

  • docker pull – download image from registry

  • docker build – build an image

  • docker tag – rename image

  • docker run – run image

  • docker images – list of images available locally
