Docker is not what it seems

Hi, this is cdnnow! You may have met us in our post about the history of DRM for video content. Today I want to talk about Docker – or, more precisely, about what many people forget: how different it is on different systems. As a CDN provider, all fifty shades of Docker are close and familiar to us. Fortunately, our own interaction with it happens under Linux – alas, not everyone is so lucky.

As often happens with "multiplatform" and other fancy words in IT, things are not so simple. Everything has its price, and under the hood the same tool can essentially be several different things on different systems, with different operating principles and different performance. And behind the promises of revolution hide evolution – or even regression and marking time.

Linux – home sweet home

To begin with, let's briefly recap the familiar part: what Docker is under Linux. Not by reciting the terms "everyone understands" like a spell, but by looking at the essence of the technology.

Reciting by rote, like a child in front of Santa Claus at the New Year tree:

Docker is a "revolutionary" open-source containerization technology that appeared in 2013 and immediately gained enormous popularity, with explosive audience growth that shows no sign of ending to this day – to the point that Docker sometimes becomes yet another bottleneck for multi-platform desktop applications, finding uses well beyond the backend.

Speaking plainly, and not in the dialect of corporate lizards:

The idea of containerization, isolation, and sandboxes is not new. Both admins and ordinary Linux users are well acquainted with the frantic restoration of a broken system: booting a Live ISO image and entering the treasured command chroot (change root) to get access to the fallen operating system.
chroot has served such necromancy and other tasks for a very long time – since Version 7 Unix back in 1979, and in BSD since 1982. This matryoshka trick of accessing the root file system of one OS from another solved the important problem that distinguishes containerization from virtual machines to this day: sharing the kernel with the host OS instead of virtualizing an entire system.

On the one hand, unlike with virtual machines, different kernel versions and types are out of reach; on the other hand, processes launched under chroot are effectively untied from the dependencies and libraries of the host OS. Which, as you might guess, not only saves performance, but also spares us the headaches of dependency conflicts in testing, deployment, development, and everything else we might want when working with specific software.
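The trick is small enough to show in a few lines. A minimal sketch, assuming root privileges and a statically linked busybox at /usr/bin/busybox (paths are illustrative):

```shell
# Minimal chroot jail: one static binary is enough to get a working shell.
mkdir -p /tmp/jail/bin
cp /usr/bin/busybox /tmp/jail/bin/      # statically linked: no host libraries needed
sudo chroot /tmp/jail /bin/busybox sh   # inside, /tmp/jail is now "/"
```

Inside that shell, `busybox ls /` shows only the jail's own `bin` – the host file system is simply not visible through normal paths.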

Yes, behind the loud and beautiful words about containerization and isolation lie, in essence, clever tricks around the file system of UNIX-like systems, with additional isolation of processes from each other and from the host OS layered on top for security.
Against this background, it's especially funny how people like to compare Docker to virtual machines as something revolutionary – completely ignoring the almost half-century history of the technologies it literally still runs on.

To be fair, Docker's ecosystem of tools also contributed to its success. Docker is written in Go and is notable for how simple it is to write modules for it, without resorting to sadomasochism in pure Bash.

And also the ease of use of the ecosystem as a whole: from centralized registries with ready-made containers for every need, to a graphical interface for those in whom the sight of a command-line console triggers anxiety attacks.
A small clarification: under the hood of Podman and its scary but rich brother Docker – in their runtime, runc – it is not chroot that is used, but the "son of mom's friend" in matters of changing the root directory: pivot_root.

So what, you ask, is the difference? By and large, none – but there is a nuance: pivot_root is better protected against privilege escalation than its older brother.
Even the man page for chroot gives an example of how, with a banal change of directory via "cd ..", a process can escape into the host OS:

       This call does not change the current working directory, so that
       after the call '.' can be outside the tree rooted at '/'. In
       particular, the superuser can escape from a "chroot jail" by
       doing:

           mkdir foo; chroot foo; cd ..

       This call does not close open file descriptors, and such file
       descriptors may allow access to files outside the chroot tree.

pivot_root completely isolates the process launched under it, and all its children, from the file system of the host OS. In human terms, a direct "cd .." will not work – you would need more sophisticated methods to escape the sandbox. But here we are already wandering into the steppes of Firejail and other methods of isolating and limiting processes within a system that shares one root, and that is a completely different story. Although, as you already know, Docker and Podman are beloved by reinventors of the wheel for these purposes too.
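What runc does with pivot_root can be sketched in shell, using unshare to get a fresh mount namespace first. This is a sketch, not what runc literally executes, and it assumes root privileges and a populated root file system at the illustrative path /tmp/rootfs:

```shell
# Roughly the root-switch a container runtime performs (needs root and
# a prepared rootfs at /tmp/rootfs):
sudo unshare --mount --fork sh -c '
  mount --bind /tmp/rootfs /tmp/rootfs   # pivot_root requires new_root to be a mount point
  cd /tmp/rootfs
  mkdir -p old_root
  pivot_root . old_root                  # swap this namespace root onto the rootfs
  cd /
  umount -l /old_root                    # detach the old root: nothing left to "cd .." into
  exec /bin/sh
'
```

After the lazy umount of /old_root, the host file system is gone from this mount namespace entirely, which is exactly why the man-page escape above has nowhere to go.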

Stranger in a strange land

And here the question arises: how does all this work under Windows, where there is no shared kernel and the system is not even Unix-like? Where, and on what exactly, would you run chroot or pivot_root?
Nowhere – because outside of Linux and FreeBSD, the main advantage usually cited for Docker is lost: doing without a VM. You simply cannot run it under Windows without a virtual machine. And the way Docker itself does it, like the way Microsoft suggests we do it, is a perversion of its own.

Firstly, using a VM forfeits the main marketing feature. Still, Docker wants to simplify our life by default and decides for us that it is best to run a Linux virtual machine under Hyper-V on Windows. And the small-and-soft folks offer to simplify life even further, suggesting WSL2 – which, under all its tinsel, also hides Hyper-V.

What's wrong with that? Abstraction. The more layers of abstraction we have, the less we understand what our programs and tools actually do, and the less ability we have to control them. Especially when the solution, like Hyper-V and WSL2 – unlike VirtualBox, VMware, or the Orthodox KVM – is deliberately built so that it is hard to customize anything in it.
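You can see the hidden Linux VM for yourself: even on Windows or macOS, the daemon answers for a Linux kernel (exact output varies by setup):

```shell
# Ask the daemon what it is actually running on:
docker info --format '{{.OperatingSystem}} / {{.KernelVersion}}'
# On Docker Desktop this reports something like a "-linuxkit" kernel.
docker version --format '{{.Server.Os}}'
# With the default Linux-container backend this prints "linux", even on Windows.
```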

And what is all this for, and at what cost?

At the cost of everything.
The thing that, fortunately, makes this whole perversion uncompetitive with running on a native Linux system is performance.
Windows is quite resource-hungry by itself, plus we need to run a Linux virtual machine on top of it, and then run Docker from there. Does this affect performance? No intrigue here: of course it does.

Goodbye, unwashed Microsoft – company of slaves, company of masters

But how are things under macOS? It is, after all, a UNIX-like system with a kernel rooted in FreeBSD – surely things must be better than at Microsoft, and, you see, even better than for those red-eyed Linuxoids swarming in their ThinkPads instead of beautiful MacBooks. Is that so?
No, it is not! macOS does not even have pivot_root as a concept – only chroot. Progress and innovation, with a lag of 45 years.
Although implementing such functionality under macOS is entirely possible, and enthusiasts have even succeeded in creating their own containers native to macOS.

If you try to look for information on running Docker natively under macOS, you may happily discover the Docker-OSX project. Even from Linux we can run macOS with almost native performance! But Docker, as in many other cases, acts here as a very heavyweight installer of a KVM virtual machine and all the configs for it – it does not run macOS itself.
Yes, once again a matryoshka of abstractions, but this time it is not Docker inside a VM, but a VM inside Docker.
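For the curious, the invocation looks roughly like this (in the spirit of the Docker-OSX project's README; flags may differ between versions, and KVM plus an X11 socket on the host are assumed):

```shell
# Run a macOS VM "inside" Docker: the container is really just packaging
# around QEMU/KVM, which is why /dev/kvm must be passed through.
docker run -it \
  --device /dev/kvm \
  -p 50922:10022 \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -e "DISPLAY=${DISPLAY:-:0.0}" \
  sickcodes/docker-osx:latest
```

Note how little of this is container isolation and how much is plain VM plumbing: a device passthrough, an SSH port forward, and an X11 socket.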

But don't rush to rejoice: Docker itself will again end up inside the matryoshka.
Since the Cupertino rulers do not provide such functionality in their OS by default, surely Docker implemented its own native solution, at least for x86 systems?
Of course not. They wrote their own virtual machine, HyperKit, which runs Linux, which in turn runs Docker.

It's even more fun on machines with Apple Silicon, in case you need images for x86 systems.
Docker still runs in a virtual machine, but now with Linux for ARM – and until recently, to run x86 images, it added yet another layer of QEMU emulation inside that VM and, finally, ran the Docker image itself on top of it.
But to the delight of Apple fans, in October last year a feature came out of beta that adds support for the Rosetta 2 instruction translator. Although it is a software solution and not a hardware unit, relative to emulation inside a virtual machine it has delivered significant performance gains when running x86 Docker images.
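The effect is easy to try on an Apple Silicon Mac with the "Use Rosetta for x86/amd64 emulation" option enabled in Docker Desktop's settings (a sketch; the alpine image is just an example):

```shell
# Explicitly request an amd64 image on an ARM machine:
docker run --rm --platform linux/amd64 alpine uname -m
# Reports x86_64: the x86 binary is translated inside the ARM Linux VM,
# with Rosetta 2 doing the work instead of QEMU when the option is enabled.
```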

How am I supposed to sleep at night with this knowledge?

The story of Docker is one of many examples of how the IT industry, in pursuit of multi-platform support and repeating beautiful-sounding words like a mantra, loses track of what it does, why, and how.
Docker was not a revolution – it was a natural evolution of the technology. But, as we found out, outside of Linux and FreeBSD it is not evolution but marking time – by almost 50 years – and at a considerable price in performance.

And yet, since Docker has made it beyond Linux, there is demand for it there. So who so stubbornly keeps crying, pricking themselves, and eating the cactus anyway, receiving in exchange an impenetrable swamp of abstractions and lower performance?
Developers.
Docker is convenient not only for business and service deployment, but also for creating personal development environments, for testing, and for one-click installation of self-hosted solutions.

Here at cdnnow! we still prefer to use Linux as our working OS for these purposes, since simplicity and high performance are never superfluous, even on a personal laptop.
However, we will read with interest in the comments what is stopping you from switching to Linux for Docker, and what keeps you on Windows or macOS.
