How to become a DevOps engineer in six months or even faster. Part 4. Software packaging

How to become a DevOps engineer in six months or even faster. Part 1. Introduction
How to become a DevOps engineer in six months or even faster. Part 2. Configuration
How to become a DevOps engineer in six months or even faster. Part 3. Versions

Let's consider how to package your code for easy deployment and subsequent execution. Let me remind you where we are now:

Whether you are talking to current or future employers, you should be able to clearly articulate what DevOps is and why it matters.
Tell a consistent story about how to deliver code quickly and efficiently from the developer’s laptop to production, where it earns the business money. We are studying not a pile of disparate, fashionable DevOps tools, but a set of skills driven by business needs and supported by technical tools. Remember that each DevOps stage takes roughly a month to learn, six months in total.

Virtualization Tutorial

Remember physical servers? The ones you waited weeks for: purchase-order approval, shipping approval, data-center approval, network connection, OS installation and patching? Those servers were once a firm fixture of our lives.

Imagine that the only way to get a home is to build a brand-new house. Need somewhere to live? Then wait until it is built, however long that takes! It sounds great, because everyone gets their own house, but it is burdensome, because construction takes a long time. In this analogy, a physical server is like a house.

Over time, this process became tiresome, and really smart people came up with the idea of virtualization: run a bunch of imaginary machines on one physical machine and make each of them pretend to be a real machine. Ingenious!

So if you really need a home, you can build your own and wait six weeks. Or you can move into an apartment building and share resources with the other residents. Maybe not as cool, but good enough! And most importantly, there is no waiting!

This went on for some time, and companies such as VMware made serious money on it. Then other smart people decided that cramming a bunch of virtual machines onto one physical machine was not enough: they wanted to pack even more processes into even fewer resources.

A house, or even an apartment, is too expensive, so why not just rent a room for a while? Better yet, one you can move into and out of at any time! That, in essence, is what Docker represents as of December 2018.

The Birth of Docker

Docker is a new technology based on a very old idea. The FreeBSD operating system has had the concept of jails, an early process-isolation mechanism, since 2000! Truly, everything new is just the well-forgotten old.

Both then and now, the idea is to isolate individual processes within the same operating system, an approach known as OS-level (or “system-level”) virtualization. Note that this is not the same as full virtualization, where complete virtual machines run side by side on the same physical host.

In practice, the growing popularity of Docker closely tracks the rise of microservices, an approach to software development in which software is broken down into many separate components. All of those components need a home. Deploying each of them as a standalone Java application or binary executable is a huge pain: the way you manage a Java application differs from the way you manage a C++ application, which in turn differs from how you manage a Go application.

Instead, Docker provides a single management interface that lets engineers package, consistently deploy, and run all these different applications. This is a huge win, but let’s talk through Docker’s pros and cons.

Docker Benefits

1. Process isolation

Docker allows each service to run as a completely isolated process. Service A lives in its own little container with all its dependencies; service B lives in its own container with all of its dependencies; and the two do not conflict.

Moreover, if one container crashes, only that container suffers. The remaining containers will, and should, keep working. This mechanism also benefits security: if a container is compromised, it is very difficult (though not impossible!) to break out of it and attack the host OS.

Finally, if a container misbehaves by consuming too much CPU or memory, you can limit the “blast radius” to that container alone, without affecting the rest of the system.
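To make the “blast radius” idea concrete, here is a minimal sketch using the Docker SDK for Python (docker-py). The image tag and the limit values are my own illustrative assumptions, not something fixed by Docker itself:

```python
# Sketch: run a workload in its own container with hard resource caps,
# so a misbehaving process cannot starve the host or its neighbours.
import docker

client = docker.from_env()

container = client.containers.run(
    "python:3.7-alpine",                                   # example image
    ["python", "-c", "print('hello from an isolated container')"],
    mem_limit="256m",          # memory cap for this container only
    nano_cpus=500_000_000,     # 0.5 CPU (1_000_000_000 == one full core)
    detach=True,
)

container.wait()                      # let the one-shot command finish
print(container.logs().decode())      # other containers are unaffected
container.remove()
```

If this container blows past its 256 MB, only it gets killed; everything else on the host keeps running.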

2. Deployment

Think about how different applications are actually built. A Python application will have many different Python packages: some installed as pip modules, others as rpm or deb packages, still others as plain git-clone installations. Or, if you use virtualenv, it will be a single zip file of all the dependencies in the virtualenv directory.

A Java application, on the other hand, will have a Gradle build with all its dependencies unpacked and scattered into the appropriate places.

See the problem? Different applications, built with different languages and different runtimes, are hard to deploy consistently to production. And the problem gets worse when conflicts arise: what if service A depends on v1 of a Python library and service B depends on v2 of the same library? That is a conflict, because v1 and v2 cannot coexist on the same machine.

This is where Docker comes in. It lets you completely isolate not only the process but also its dependencies: several containers can run side by side on the same OS, each carrying its own libraries and packages that are incompatible with the others’.
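As a hedged illustration (again using the Docker SDK for Python, with image tags chosen purely as examples), two services with mutually incompatible runtimes can run side by side on the same host:

```python
# Sketch: service A needs Python 2.7, service B needs Python 3.7.
# On one machine that is a conflict; in two containers it is not.
import docker

client = docker.from_env()

service_a = client.containers.run(
    "python:2.7-alpine",
    ["python", "-c", "import sys; print('service A on', sys.version)"],
    detach=True,
)
service_b = client.containers.run(
    "python:3.7-alpine",
    ["python", "-c", "import sys; print('service B on', sys.version)"],
    detach=True,
)

for svc in (service_a, service_b):
    svc.wait()                         # let each one-shot command finish
    print(svc.logs().decode().strip())
    svc.remove()
```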

3. Program execution management

Note that the way we manage disparate applications depends on the applications themselves. Java code writes logs differently, runs differently, and is monitored differently than Python code. And Python differs from Go, and so on.

With Docker, we get a single, unified management interface that allows us to start, monitor, centralize logs for, stop, and restart many different kinds of applications. This is a huge productivity gain that significantly reduces the operational cost of running production systems.
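Here is a small sketch of that unified interface, once more via the Docker SDK for Python; the nginx image and container name are arbitrary examples, and the same calls would apply whether the container holds a Java, Python, or Go service:

```python
# Sketch: one lifecycle API regardless of what runs inside the container.
import docker

client = docker.from_env()

app = client.containers.run("nginx:alpine", name="any-app", detach=True)

print(app.status)                  # same status check for any application
print(app.logs(tail=10).decode())  # same log call for any application

app.restart()                      # same restart call for any application
app.stop()
app.remove()
```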

As of December 2018, you may no longer have to choose between Docker’s fast startup and the security of virtual machines. Firecracker, a lightweight virtualization platform introduced by Amazon, tries to combine the best of both worlds. However, it is a new technology that is only approaching production readiness.

Note: the Firecracker platform provides tools for creating and managing isolated environments and services built on a serverless model. The project’s code is written in Rust and is distributed under the Apache 2.0 license.

Firecracker offers lightweight virtual machines called microVMs. Hardware virtualization is used to fully isolate them, while performance and flexibility remain at the level of ordinary containers. The platform is built around a Virtual Machine Monitor (VMM) that uses the KVM hypervisor built into the Linux kernel. The VMM draws on the experience of crosvm, a project written in Rust that Google is developing to run Linux applications on Chrome OS. At the end of 2018 the crosvm and Firecracker codebases were split, but Amazon plans to send fixes to the borrowed components upstream.

However, no matter how good Docker is, it also has drawbacks.

Introduction to Lambda

First, Docker containers still run on servers that need to be provisioned, patched, and so on. Second, Docker is not 100% secure; at least, it is not as secure as a virtual machine. There is a reason why large companies that host containers run them inside virtual machines rather than on bare metal: they want fast container startup times and virtual-machine security!

Third, nobody really runs Docker by itself. It is almost always deployed as part of a complex container-orchestration framework such as Kubernetes, ECS, Docker Swarm, or Nomad. These are fairly complex platforms that require dedicated personnel to operate (I will discuss these solutions in more detail later).

But what if I am just a developer who wants to write code and have someone else run it for me? Docker, Kubernetes, and all that jazz: do I really have to learn it all? My answer: it depends. For people who just want someone else to run their code, serverless offerings such as AWS Lambda are a great option.

AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume; when your code is not running, there is no charge.
If you have heard of “serverless”, this is it. No more launching servers or managing containers! Just write your code, package it into a zip file, upload it to Amazon, and let them deal with the headache! Moreover, since Lambda functions are short-lived, there is little to attack; they are reasonably secure by design. Sounds great, right?
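For example, a minimal Python Lambda function looks like this; the function name and payload fields are my own illustrative choices, but the handler signature is the standard convention for the Python runtime:

```python
# Sketch: the whole "application" is one file with one entry point.
import json

def lambda_handler(event, context):
    """Entry point that Lambda invokes; 'event' carries the request payload."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Zip this file, upload it, point the function’s handler setting at lambda_handler, and Amazon runs it on demand; there are no servers for you to patch.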

But there are downsides, too. First, a Lambda function can run for at most 15 minutes (as of November 2018). That means long-running processes, such as Kafka consumers or number-crunching applications, cannot run in Lambda.

Second, Lambda is Functions-as-a-Service. That means your application must be fully decomposed into microservices and orchestrated together with other, fairly complex PaaS services such as AWS Step Functions. Not every enterprise has reached that level of microservice architecture.

Third, troubleshooting Lambda functions is hard. The runtime lives in the cloud, so all debugging happens inside Amazon’s ecosystem, which is often complex and unintuitive. In short, there is no free lunch here.

Note that as of late 2018 there are also serverless container offerings, such as AWS Fargate, whose mechanics are very similar to Lambda’s. If you are just starting to learn these services, I highly recommend trying Fargate: it is an incredibly easy way to run containers “the right way”. What is more, on January 13, 2019 AWS announced a significant price reduction for Fargate, making it a very attractive choice for running serverless containers.
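As a hedged sketch of how little infrastructure you manage with Fargate, here is a container launch via boto3; the cluster name, task definition, and subnet ID are placeholders you would replace with your own:

```python
# Sketch: run a container on Fargate; no EC2 instances to provision or patch.
import boto3

ecs = boto3.client("ecs")

response = ecs.run_task(
    cluster="my-cluster",                  # assumed to exist already
    taskDefinition="my-service:1",         # registered ECS task definition
    launchType="FARGATE",                  # serverless launch type
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["lastStatus"])
```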

Summary

Docker and Lambda are the two most popular modern cloud-native approaches to packaging, running, and managing applications. They often complement each other, each suited to slightly different use cases and applications.

Be that as it may, a modern DevOps engineer must be well versed in both, so learning Docker and Lambda makes for good short- and medium-term goals.
Note that so far we have covered topics that junior and mid-level DevOps engineers should know. In the following sections we will move on to techniques better suited to mid-level and senior DevOps engineers. As always, there is no easy way to gain knowledge!

To be continued very soon …
