Managing the infrastructure of even an average organization is a daunting task: a large number of servers require constant attention, and installing updates and deploying new systems are laborious chores. Let’s talk about how you can automate these tasks.
The hell of manual configuration
The task of installing and configuring software can be approached in different ways. The most common method is, of course, manual configuration: we type in all the commands ourselves, look at the result, and… re-enter commands to correct the errors.
With manual configuration, the human factor plays a significant role: an “oops, forgot to restart the service” error, or installing the wrong version because “I forgot to add the MongoDB repository and got the old version from the OS repository.”
Installing a large, cumbersome application sometimes requires executing a whole chain of commands, and a failure partway through can leave the system in an indeterminate state where we no longer know which components have been installed and which have not. If you run a virtual infrastructure and prudently took a snapshot, you can simply roll back to the previous state; otherwise you face a long and tedious session with the installation logs, figuring out which command failed, why, and what to do about it.
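One way to avoid ending up in that indeterminate state is to make the command chain stop at the first error and remember which steps have already succeeded. A minimal sketch in shell; the step names and state directory here are purely illustrative:

```shell
#!/bin/sh
# Stop immediately if any command fails or an unset variable is used,
# so the system is never left half-configured without us noticing.
set -eu

# Illustrative location for recording which steps already completed.
STATE_DIR=$(mktemp -d)

step() {
    # Run a named step once; record it so a re-run skips completed steps.
    name=$1; shift
    if [ -f "$STATE_DIR/$name" ]; then
        echo "skip: $name (already done)"
        return 0
    fi
    echo "run:  $name"
    "$@"
    touch "$STATE_DIR/$name"
}

# Placeholder steps standing in for real installation commands.
step add-repo   echo "adding repository..."
step install    echo "installing packages..."
step configure  echo "writing config files..."
```

If a step fails, the script exits right there, and re-running it resumes from the first incomplete step instead of blindly repeating everything.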
Snowflakes and Phoenixes
Separately, I would like to draw attention to the types of servers an infrastructure usually consists of. In any organization there are typically two kinds: snowflakes and phoenixes. The first carry a set of installed software as unique as a snowflake; such nodes are usually the most critical and are kept under special monitoring. The second, like the phoenix reborn from the ashes, have a typical configuration and predictable behavior once the necessary software is installed, so they can be destroyed and recreated at will.
Typical examples of the first type are DBMS and application servers; of the second, test and training machines that can be lost without regret and are easy to restore.
Supporting servers of the first type is a very laborious task, since backup copies must be constantly refreshed so that an image with the necessary set of installed components is always at hand. Without an up-to-date backup, you are in for a long, tedious process of installing and configuring individual system components by hand.
Frying and baking
There are two diametrically opposed approaches to creating ready-made virtual machine images. The first builds a minimal image that is configured after launch (the Fry method), while the second, on the contrary, places all the necessary components into the image itself (Bake).
Consider the advantages and disadvantages of each. In the first case we prepare a virtual machine with minimal settings, saving both image-preparation time and storage space, since a minimal image obviously takes up less room. However, after deploying such an image you have to spend time on additional configuration, which significantly slows down deployment. In addition, updating such machines requires access to the Internet or to your own package repository.
The Bake approach includes all necessary components and settings at image-creation time, which can significantly speed up deployment; it also makes it possible to prepare separate images for different versions. The disadvantages are the considerable time needed to prepare, and later to update, the images, as well as the extra storage space they require.
In practice, a middle path is usually taken: some components and settings are baked into the image, and the rest are applied after it has been deployed.
Life in the clouds
When working with cloud services, the customer is usually offered ready-made images of operating systems. This image already contains all the necessary components to run on the hardware provided by the cloud provider.
There are several solutions for automating virtual-infrastructure management. Let’s start with MAAS (Metal as a Service) from Canonical, the company behind Ubuntu. This open-source system works directly with the hardware, which makes it very useful to ordinary enterprises managing a virtualized infrastructure. With MAAS, you can deploy operating systems such as Ubuntu, CentOS, Windows, and RedHat.
Another fairly well-known tool for working with both physical and virtual machines is Razor from Puppet. It works as follows: a newly created machine boots via PXE from a special Razor microkernel image, registers itself, provides Razor with inventory information, and waits for further instructions.
Razor has administrator-configured rules that determine which tasks should run on which nodes. Based on these rules, tasks begin executing on the new node; Razor can report their status to Puppet, vCenter, and other management systems.
Finally, a tool designed specifically for Kubernetes clusters is the Cluster API. The Cluster API project lets you automate cluster lifecycle management using declarative, Kubernetes-style APIs and templates.
Script ‘Em All
A typical situation, familiar to many, is installing an application from a checklist or instructions, only to discover partway through that an additional update must be applied, after which the rest of the installation no longer matches the instructions and turns into a living hell.
The first thing that comes to an admin’s or engineer’s mind in such a situation is to write scripts automating installation and basic configuration. We spend time writing and debugging a script and start using it, and then a new software version comes out, or an update, or the settings need to change, and rewriting the script turns into an endless process. You can live with this, of course, but you must admit it is not the most efficient use of time.
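One way to postpone the endless rewriting is to pull everything that changes between releases (versions, URLs, install paths) out into variables at the top of the script, so a new version usually means editing one line rather than the whole script. A small sketch; the application name, version, and URL are all illustrative:

```shell
#!/bin/sh
set -eu

# Everything that changes between releases lives here.
# Environment variables may override the illustrative defaults.
APP_VERSION="${APP_VERSION:-1.2.3}"
APP_URL="https://example.com/app-${APP_VERSION}.tar.gz"
INSTALL_DIR="${INSTALL_DIR:-/opt/app}"

# Echo instead of acting, so the sketch stays side-effect free.
echo "would download: $APP_URL"
echo "would install to: $INSTALL_DIR/app-$APP_VERSION"
```

Running it with `APP_VERSION=2.0.0 ./install.sh` targets the new release without touching the script body.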
An alternative scenario is the opposite: having once written the desired script, we use it in its original form for as long as possible, then write a new one and likewise run it unchanged for as long as we can.
Managing infrastructure with scripts can certainly make life easier for DevOps engineers and administrators, but only in small organizations with no more than 20 servers and no more than a hundred user nodes.
Larger networks require more sophisticated solutions such as Infrastructure as Code.
IaC: Infrastructure as Code
The concept of Infrastructure as Code (IaC) emerged in 2006, when the AWS Elastic Compute Cloud service was launched and the Ruby on Rails framework appeared, bringing with them new tools for automating administration tasks.
IaC is an approach to managing and describing data-center infrastructure through configuration files. The main advantages of infrastructure as code are price, speed, and risk reduction. Price here covers both capital and operating costs: capital costs mean purchasing additional equipment and software, while operating costs include not only maintaining the system but also the time spent on routine operations. Infrastructure automation makes better use of existing resources, and properly configured, well-tested automation minimizes the risk of human error.
Otus’ blog has covered in detail the use of HashiCorp’s Terraform as an IaC system, so we won’t go into that technology here.
Vagrant is another solution from HashiCorp, this one for creating and managing VMs. Depending on which virtualization environment is used, Vagrant can deploy virtual machines in the required configuration, run the necessary tests, and then delete them, which is much faster than creating a VM manually. Supported environments include VirtualBox, VMware, Hyper-V, Docker, and various cloud providers.
I also note that in most Otus courses Vagrant is used for practical work.
So let’s install Vagrant on Linux.
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install vagrant
Next, we need to initialize a project. Since each project gets its own configuration, we create a Vagrantfile, in which the virtual machine configuration is described in Ruby.
vagrant init ubuntu/xenial64
This command generates a configuration file referencing the box named ubuntu/xenial64; the operating system image itself is downloaded the first time the machine is brought up.
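Stripped of the extensive comments that `vagrant init` adds, the generated file boils down to a few lines of Ruby. The sketch below writes an equivalent minimal Vagrantfile by hand; the VirtualBox memory setting is an illustrative addition, not part of the generated file:

```shell
# Write a minimal Vagrantfile equivalent to what `vagrant init` generates.
# The provider block (VirtualBox memory size) is an illustrative extra.
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 1024
  end
end
EOF
```

Everything about the machine (box, provider settings, networking, provisioning) lives in this one file, which is why it is kept per project and usually committed to version control.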
To create and start a virtual machine, run the command

vagrant up
and to stop and restart it, respectively,

vagrant halt
vagrant reload
Well, when the virtual machine is no longer needed and you want to delete everything, you can use the command

vagrant destroy
One of the most important automation mechanisms in Vagrant is plugins. In the repository https://github.com/hashicorp/vagrant/wiki/Available-Vagrant-Plugins you can find many different plugins for working with virtual machines.
As an example, let’s install the vagrant-vbguest plugin. It automatically keeps the VirtualBox Guest Additions inside the guest up to date, which is what makes features such as synchronized folders work reliably when virtual machines are updated. To install it, run the command
vagrant plugin install vagrant-vbguest
As mentioned earlier, a Vagrantfile configuration file is required to work with virtual machines. But in addition to building your own images, you can use ready-made images (boxes) hosted in the Vagrant Cloud (https://app.vagrantup.com/boxes/search). To do so, initialize the project with a command of the form:
vagrant init <author>/<box>
In this article, we looked at several solutions for automating work with infrastructure: starting from manual configuration and scripting, we moved on to IaC and Vagrant. This is by no means a complete set of automation tools, and in the next article we will talk in detail about Ansible and managing infrastructure with it.
Finally, we invite everyone to a free lesson of the DevOps Practices and Tools course, where we will analyze how Docker works with data and networks, learn about the concepts of storage and network drivers and the important subtleties and limitations of working with them, get acquainted with the docker-compose tool, and consolidate the knowledge gained in practice.