Who is this article for?
Have you ever been handed a program whose dependency tree looks like a tangled circuit board?
What dependency management looks like
No problem, I’m sure the developer has kindly provided an installation script to get everything working. So you run the script, and immediately the shell fills with error messages. “It worked on my machine” is the usual reply you get when you ask the developer for help.
Docker solves this problem by providing nearly trivial portability of dockerized applications. In this article, I’ll show you how to quickly dockerize your Python applications so they can be easily shared with anyone who has Docker.
Github and Docker repositories
But … why Docker?
Containerization can be compared to placing your software in a shipping container that provides a standard interface for the shipping company (or other host computer) to interact with the software.
Application containerization is actually the gold standard for portability.
General Docker / containerization scheme
Containerization (especially with Docker) opens up tremendous possibilities for your software. A properly containerized (for example, dockerized) application can be deployed and scaled via Kubernetes or Scale Sets on any cloud service provider. And yes, we will talk about that in the next article.
There won’t be anything too complicated in it: once again we are working with a simple script that monitors changes in a directory (since I work on Linux, that will be /tmp). Logs will be pushed to stdout, which is important if we want them to appear in docker logs (more on that later).
main.py: a simple file monitoring application
This program will run indefinitely.
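The listing for main.py is not reproduced here. As a stand-in, here is a minimal sketch of such a monitor written with the standard library only; the real script presumably uses the single dependency from requirements.txt (likely a file-watching library), but a simple polling loop illustrates the same behavior, including logging to stdout and reading the watched directory from an environment variable:

```python
import logging
import os
import sys
import time

# Log to stdout so the messages show up in `docker logs`.
logging.basicConfig(stream=sys.stdout, level=logging.INFO,
                    format="%(asctime)s %(message)s")


def new_entries(path, seen):
    """Return directory entries in `path` that are not yet in `seen`."""
    return set(os.listdir(path)) - seen


def monitor(path, interval=1.0):
    """Poll `path` forever, logging every file that appears."""
    seen = set(os.listdir(path))
    while True:  # runs indefinitely, as noted above
        created = new_entries(path, seen)
        for name in sorted(created):
            logging.info("Created: %s", os.path.join(path, name))
        seen |= created
        time.sleep(interval)


if __name__ == "__main__" and os.environ.get("DIRECTORY"):
    # The directory to watch arrives via the DIRECTORY environment
    # variable, matching the -e DIRECTORY=... flag used later on.
    monitor(os.environ["DIRECTORY"])
```

The function names and the polling approach here are my own illustration, not the article's actual code.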
As usual, we have a requirements.txt file with dependencies, this time with just one:
In my previous article we scripted the installation process in a Makefile, making it very easy to share. This time we’ll do something similar, but with Docker.
We don’t need to go into the details of how a Dockerfile is structured and works here; there are more detailed tutorials on that.
Dockerfile summary: we start with a base image containing the full Python interpreter and its packages, then install the dependencies (line 6), start a new minimalistic image (line 9), and copy the dependencies and the code into it (lines 13-14; this is called a multi-stage build, and in our case it reduced the size of the finished image from 1 GB to 200 MB). Finally, we set the environment variable (line 17) and the execution command (line 20), and we’re done.
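The Dockerfile itself isn’t reproduced here; a reconstruction consistent with the line numbers in the summary might look like the following (the base image tags and paths are my assumptions, not the article’s):

```dockerfile
# ---- build stage: full Python image with build tooling
FROM python:3.9 AS builder

COPY requirements.txt .
# install the dependencies into the user site-packages
RUN pip install --user -r requirements.txt

# ---- final stage: a new, minimal image
FROM python:3.9-slim

WORKDIR /app
# copy only the installed packages and the code (multi-stage build)
COPY --from=builder /root/.local /root/.local
COPY main.py .

# default directory to monitor; overridden with -e at run time
ENV DIRECTORY=/tmp

# the command executed when the container starts
CMD ["python", "main.py"]
```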
Assembling the image
Having finished with the Dockerfile, we simply run the following command from our project directory:
sudo docker build -t directory-monitor .
Putting together the image
Running the image
Once the build is complete, the magic can begin.
One of the great things about Docker is that it provides a standardized interface. If you design your program correctly, then when handing it over to someone else you only need to tell them to learn Docker (if they don’t know it yet), rather than teach them the intricacies of how your program works.
Want to see what I mean?
The command to run the program looks like this:
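The command itself is only shown as a screenshot in the original; assembled from the flags explained below, it would be:

```shell
sudo docker run -d \
  --restart=always \
  -e DIRECTORY='/tmp/test' \
  -v /tmp/:/tmp/ \
  directory-monitor
```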
There is a lot to explain here, so let’s break it down into parts:
-d – run the container in detached mode (in the background) rather than in the foreground
--restart=always – if the docker container crashes, it will restart. We can recover from accidents, hurray!
-e DIRECTORY='/tmp/test' – we pass the directory to be monitored via an environment variable. (We could also design our Python program to read command-line arguments and pass the tracked directory that way.)
-v /tmp/:/tmp/ – mount the host’s /tmp directory to /tmp inside the Docker container. This is important: any directory we want to monitor MUST be visible to the processes inside the container, and this is how we make that happen.
directory-monitor – the name of the image to run
After starting the container, its status can be checked using the following command:
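That command is simply:

```shell
sudo docker ps
```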
Docker ps output
Docker generates crazy names for running containers because people don’t remember hash values very well. In this case, the name crazy_wozniak refers to our container.
Now that we are tracking /tmp/test on my local machine, creating a new file in this directory should be reflected in the container logs:
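For example (the file name here is hypothetical, and crazy_wozniak is the auto-generated container name from the docker ps output above):

```shell
touch /tmp/test/newfile.txt     # create a file in the watched directory
sudo docker logs crazy_wozniak  # the creation event should be logged
```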
Docker logs demonstrate that the application is working correctly
That’s it, now your program is dockerized and running on your machine. Next, we need to solve the problem of transferring the program to other people.
Share the program
Your dockerized program can be useful to your colleagues, friends, your future self, and anyone else in the world, so we need to make it easy to distribute. The ideal solution for this is Docker Hub.
If you don’t have an account yet, register, and then log in from the CLI:
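The login step is a single command, which prompts for your Docker Hub credentials:

```shell
sudo docker login
```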
Login to Dockerhub
Next, let’s tag the newly created image and push it to your account.
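The exact commands aren’t reproduced in the text; they would look like this, with `<your-dockerhub-username>` as a placeholder for your own account name:

```shell
sudo docker tag directory-monitor <your-dockerhub-username>/directory-monitor
sudo docker push <your-dockerhub-username>/directory-monitor
```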
Add a label and push the image
The image is now in your docker hub account
To make sure everything works, let’s pull this image back down and use it in an end-to-end test of everything we’ve done:
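The test shown in the original screenshot amounts to pulling the image from Docker Hub and running it with the same flags as before (again with a placeholder account name):

```shell
sudo docker pull <your-dockerhub-username>/directory-monitor
sudo docker run -d --restart=always -e DIRECTORY='/tmp/test' -v /tmp/:/tmp/ \
  <your-dockerhub-username>/directory-monitor
```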
Testing our docker image end-to-end
This entire process took only 30 seconds.
Hopefully I’ve been able to convince you of the amazing practicality of containerization. Docker will stay with us for a long time, and the sooner you master it, the more benefits you will get.
Docker is all about reducing complexity. In our example it was a simple Python script, but you can use this tutorial to create images of arbitrary complexity, with dependency trees reminiscent of spaghetti, and the end user will never have to deal with those difficulties.