Testing a domestic virtualization system: VMmanager

Previous materials:

Briefly about VMmanager

VMmanager is a virtualization platform from the Russian company ISPsystem (which relatively recently became part of the Astra group of companies). Under the hood of VMmanager are QEMU, KVM, and libvirt. In its materials, the developer states that the management server is the company’s own development. The product is included in the Unified Register of Russian Software.

It comes in two editions – VMmanager Hosting for hosting providers and VMmanager Infrastructure for corporate and government customers. The functional differences between the editions are shown in the comparison table on the product page.

Unlike zVirt, VMmanager is not positioned as a direct replacement for VMware. It is an independent product which is nevertheless used in import-substitution scenarios.

According to the developer, the solution has been in development for about 15 years. The change history and roadmap (both implemented and planned features) are published on the ISPsystem website. From what I found online, before joining the Astra group VMmanager was offered only on the hosting market.

VMmanager testing results

Briefly about the conditions.

  • The study used version 6 of VMmanager.

  • I was not familiar with this platform before testing. As with the other solutions, I figured out the system’s features as I went. It is quite possible that an engineer experienced with VMmanager would have handled everything faster.

  • Testing took place in nested virtualization mode, so performance was largely not evaluated. The main focus is on functionality.

Product installation and configuration

Documentation

The documentation and knowledge base are publicly available. The information is well structured and navigation is quite convenient. There is an article with step-by-step instructions for installing a virtualization cluster. The documentation is available in two languages – Russian and English.

For testing, I obtained a trial version from the vendor’s website myself – it allows you to use the product free for 30 days. Support specialists responded to my requests throughout the process of learning the platform.

Solution architecture

For testing, I installed VMmanager 6 in an open circuit (with Internet access). However, VMmanager can also be installed in a closed circuit – an isolated environment without Internet access. This scenario requires a special installation distribution; I could not find the closed-circuit ISO on the website.

Unlike “Alt Virtualization” or Hyper-V, which use a management server integrated into the hypervisor, the VMmanager management server runs as an independent virtual machine or a standalone server.

The installation used a configuration in which the management server sits on a separate server outside the virtualization cluster (analogous to zVirt’s standalone mode), with three servers dedicated to virtualization hosts. Judging by the documentation, installing the management server “inside” the virtualization as a VM is also supported.

I didn’t find the option to back up the management server in the documentation. This is definitely a disadvantage of the solution.

The manufacturer follows an architecture in which both the virtualization nodes and the management server are installed “on top of” a general-purpose operating system. As a VMware user, I find this unusual.

The following operating systems are declared as supported for the nodes:

I chose the most “orthodox” combination: VMmanager + Astra Linux Special Edition 1.7.4, “Orel” edition. After contacting support, in addition to the VMmanager trial I was provided with an Astra Linux distribution for testing, along with installation and configuration instructions.

The figure below is an excerpt from the documentation describing the architecture:

As shared storage, this time I deployed iSCSI storage based on TrueNAS and connected it to the virtualization nodes as a block device, configuring the connection at the operating-system level.
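For reference, connecting an iSCSI LUN at the OS level typically looks something like the following. This is a sketch, not taken from the VMmanager documentation; the portal address and target name are assumed:

```shell
# Discover targets on the TrueNAS portal (10.0.0.10 is an assumed address)
iscsiadm -m discovery -t sendtargets -p 10.0.0.10
# Log in to the discovered target (target name is illustrative)
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:vmstore -p 10.0.0.10 --login
# Make the session persist across reboots
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:vmstore -p 10.0.0.10 \
    -o update -n node.startup -v automatic
# The LUN now appears as a local block device, visible via lsblk
lsblk
```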

Ease of installation

This was my first time installing VMmanager 6, but, unlike the previously tested products, it took the least time – about 5 minutes. Installation is performed with three commands:

It should be noted that before installation, some time was spent preparing the Astra Linux operating system.

After the installation completed, I received a link to the VMmanager web interface. On first login you need to create an administrator account, after which you are taken to the license activation window.

After activation with a token from my personal account, VMmanager unlocked all menu tabs.

Overall, this installation stage left only a positive impression of the product.

Installation in open and closed circuits

As I said above, in this testing the installation was performed in an open circuit. However, it is possible to deploy VMmanager from an ISO image in an isolated environment: the documentation has a separate section, “Installation in a closed information loop”, describing the process.

Hypervisor management

Management is carried out centrally from the VMmanager web interface. A failure of the management server does not affect the operation of existing VMs. Minimal control is available from the nodes via virsh (stop/restart/start). In a failover cluster, if the management server is unavailable, the HA function continues to work normally and restarts VMs: the HA mechanism is independent and operates at the level of the nodes themselves.
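The minimal per-node control mentioned above maps to standard virsh commands (a sketch; the VM name is illustrative):

```shell
virsh list --all        # list all VMs registered on this node
virsh shutdown vm01     # graceful stop (requires guest cooperation)
virsh start vm01        # start
virsh reboot vm01       # restart
```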

Operations with virtual machines

Installing a virtual machine from an ISO image

After installing VMmanager 6, a repository of operating systems is available from which you can create VMs. There is also a tab where you can connect your own repository. As far as I can tell from the interface, this method of creating VMs is the main one in the product. This is unusual – normally the administrator builds an ISO library and installs from there.

In other words, VMmanager contains some kind of hybrid of an ISO library and VM images.

During creation, I was able to select a custom VM configuration in terms of the number and size of disks, CPU, RAM, restrictions and limits.

The first VM creation took 10 minutes; each subsequent one took less time. I suspect the system caches the downloaded images.

I tried installing a VM from the Astra ISO provided to me earlier. To do this, the image had to be uploaded from a PC or via a URL (I uploaded it from a PC). Installation from an ISO takes much longer than from a template.

Unfortunately, there is no ISO library.

Choosing a processor emulation method

The following emulation options are available in the VM settings:

  • By default, a QEMU virtual processor is emulated.

  • Host-model – the emulated processor has the same feature flags as the processor of the cluster node.

  • Host-passthrough – the emulated processor exactly matches the processor of the cluster node, including all feature flags. VM migration is only possible to a node with a fully matching processor.
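Since VMmanager is built on libvirt, these three options correspond to the standard CPU mode setting in the libvirt domain XML. The snippets below are plain libvirt syntax, not anything VMmanager-specific:

```xml
<!-- default: a plain QEMU virtual CPU model -->
<cpu mode='custom' match='exact'><model>qemu64</model></cpu>

<!-- host-model: same feature flags as the node's CPU, still migratable -->
<cpu mode='host-model'/>

<!-- host-passthrough: exact host CPU; migration only to identical CPUs -->
<cpu mode='host-passthrough'/>
```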

Ways to create a virtual machine

In total, the following methods for creating a VM are available:

  • installation from the operating system repository;

  • installation from an image prepared from another VM;

  • installing an “empty” VM and then connecting the ISO.

I tested VM creation using all of the above methods. Everything worked as expected, with almost no need to consult the documentation – except once: I was unable to create an image from a VM running Windows. The documentation states that this feature is not available for Windows VMs. A little later I found a roadmap on the website promising to fix this by the end of the year.

Basic virtual machine operations

Most basic VM operations are available and work normally. Virtual machines start and stop, and migration is performed without stopping the VM, both between nodes and between clusters. VM disks can be migrated between different storage systems.

The unpleasant part: there is no Suspend operation.

The VM parameters section contains more detailed information and deeper settings.

Guest Agent installation options

qemu-guest-agent is used. If the agent is not present in the ISO or template, you will have to install it manually or write a script that installs it automatically after the VM is created.
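On a Debian-based guest such as Astra Linux, the manual installation reduces to two commands (a sketch; the package name is the standard Debian one, not checked against Astra’s repositories specifically):

```shell
# Install the guest agent inside the VM
apt-get update && apt-get install -y qemu-guest-agent
# Start it now and on every boot
systemctl enable --now qemu-guest-agent
```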

Cloning a virtual machine

Cloning is performed with the source VM powered off; the clone starts normally. Cloning is not available for Windows VMs.

The function of creating a linked clone from a VM is available and works as stated. However, there are a number of limitations – the following operations are not supported for a linked clone:

  • resizing the main disk;

  • replacing the main disk;

  • migrating the main disk;

  • changing the boot order of disks;

  • cloning;

  • creating an image;

  • reinstalling the OS;

  • connecting and disconnecting ISO images.

Snapshots

Snapshots are created normally, both with and without saving the memory state.

Live migration is not available while snapshots exist; you must either delete the snapshots or migrate the VM powered off. Restoring the VM from any snapshot worked.

Virtual machine image

You can create an image from a virtual machine, but, as noted above, not for Windows VMs. When a VM is created from an image, the system performs a “preparation” step during which user data and SSH keys are deleted.

Grouping virtual machines

There are no folders as such, but various elements (VMs, nodes, clusters) can be grouped using filters and saved as bookmarks. In principle, this is enough for most cases, but problems may arise if two or more nesting levels are required.

Live virtual machine migration

The test was carried out within one cluster. Migration of the VM and of its disk separately worked as expected. The virtual machine did not shut down; all data was transferred “live”. The migration wizard itself suggested a node whose parameters fit.

Hot resource changes

Resources can be increased hot for CPU and RAM. RAM can be changed without a reboot only in 1024 MB increments, and at most 16 times per VM. A network interface can be added without rebooting.
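If the 16-operation limit bounds the total hot growth (my reading of the documentation; this interpretation is an assumption), the ceiling for a VM is easy to compute. For example, a VM created with 4096 MB:

```shell
initial_mb=4096   # RAM the VM was created with (example value)
step_mb=1024      # hot-add granularity per the docs
max_steps=16      # maximum hot-add operations per VM
echo $(( initial_mb + step_mb * max_steps ))   # → 20480
```

So such a VM could grow to 20 GB of RAM without a reboot; anything beyond that requires a restart.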

The unpleasant part: I ran into the inability to grow a virtual disk without a reboot.

Ways to connect to a virtual machine

Access to a VM via VNC/SPICE from the web interface or a client works out of the box; no additional settings are required. A physical USB drive can be connected to a virtual machine via SPICE.

Network settings

Various types of network interface emulation

When creating a network interface, the default type is virtio, but the emulation can be changed to e1000, rtl8139, or virtio via the API. It is good that such settings exist, but it would be much more convenient to make them in the system’s web interface.
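Under libvirt, the emulation type is the model element of the interface definition, and the API call presumably toggles exactly this. The snippet below is standard libvirt domain XML, not the VMmanager API itself; the bridge name is illustrative:

```xml
<interface type='bridge'>
  <source bridge='vmbr0'/>   <!-- bridge name is an assumption -->
  <model type='e1000'/>      <!-- virtio | e1000 | rtl8139 -->
</interface>
```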

VLAN on nodes in Standard vSwitch format

A virtual machine can be placed on, or moved to, a target VLAN. In effect, a full analogue of the VMware Standard vSwitch is available. Having studied the settings the management server made in the host operating system, I concluded that a Linux bridge is used to connect VMs to different VLANs.
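The per-VLAN Linux-bridge wiring I observed can be reproduced by hand roughly like this (a sketch with assumed interface names; VMmanager does this automatically):

```shell
# Create a VLAN 100 sub-interface on the uplink (eth0 is assumed)
ip link add link eth0 name eth0.100 type vlan id 100
# Create a bridge for that VLAN and enslave the sub-interface
ip link add br-vlan100 type bridge
ip link set eth0.100 master br-vlan100
ip link set eth0.100 up && ip link set br-vlan100 up
# VM vNICs attached to br-vlan100 now land in VLAN 100
```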

Link aggregation in Standard vSwitch format

Operation of network adapters in aggregation and balancing modes is supported. All standard modes are available:
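Since the nodes run on ordinary Linux, the modes in question are the standard kernel bonding modes. As an illustration, an LACP (802.3ad) bond in systemd-networkd form – interface names and the choice of networkd are my assumptions, not from the product docs:

```ini
# /etc/systemd/network/10-bond0.netdev
[NetDev]
Name=bond0
Kind=bond

[Bond]
# Other standard kernel modes: balance-rr, active-backup,
# balance-xor, broadcast, balance-tlb, balance-alb
Mode=802.3ad
TransmitHashPolicy=layer3+4
```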

Network settings in distributed switch format (Distributed vSwitch)

There is no way to centrally configure VLANs on all virtualization hosts. Judging by the roadmap, this is promised for this quarter.

Network fabric support

The network fabric is implemented in the cluster settings by selecting the IP-fabric network option. I did not set up the fabric, but from the description I understood that VMs receive addresses with a /32 mask, and routes to each VM are then advertised to the network via iBGP. This is not the usual VXLAN approach I would expect to see in a network fabric.
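As I understood the description, each node would announce its VMs’ /32 host routes into iBGP. In FRR terms that might look roughly like the fragment below – this is purely my illustration of the routed approach, not configuration taken from the product; the AS number and neighbor address are assumptions:

```
! frr.conf fragment (AS number and neighbor are illustrative)
router bgp 65000
 neighbor 10.255.0.1 remote-as 65000
 address-family ipv4 unicast
  redistribute kernel   ! announce the /32 routes pointing at local VMs
```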

Storage Settings

Support for various types of disk emulation for a virtual machine

Virtio/IDE/SCSI are supported when creating a new disk. The emulation type can also be changed in the web interface when editing the VM’s disk parameters.

“Thin” and “thick” disks

Thin disks are available on file-based storage types (local/NFS). Essentially, these are qcow2 files.

I was able to create virtual machines with both “thin” and “thick” disks on the same local storage. Thick disks are available on all storage types.

I see the impossibility of creating thin disks on block storage as a potential problem: working around it requires setting up a cluster file system and moving into the “class” of file-based storage.
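VMmanager’s thin disks are qcow2, which allocates space on write; the same thin-vs-thick distinction can be demonstrated at the filesystem level with sparse files (an analogy, not the qcow2 format itself):

```shell
# "Thin" disk analogue: a sparse file – large apparent size, no blocks allocated
truncate -s 64M thin.img
# "Thick" disk analogue: a fully preallocated file of the same apparent size
dd if=/dev/zero of=thick.img bs=1M count=64 status=none
# Apparent sizes match, but actual block allocation differs drastically
stat -c '%n: apparent=%s bytes, allocated=%b blocks' thin.img thick.img
```

The thin file reports the full 64 MiB size while occupying almost no blocks; the thick file occupies all of them up front.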

Working with SAN storages

SAN storage systems are supported (SCSI/iSCSI/FC/FCoE). The connection is made from the graphical interface by specifying the path to the block device, but the device itself and the multipath parameters are configured at the operating-system level of the virtualization hosts.

Having studied the settings the management server made in the host operating system, I saw that a clustered LVM was created to host the virtual machine disks.
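For reference, a shared (“clustered”) LVM volume group on a SAN LUN is created roughly as follows. This is a sketch with an assumed device path and VG name – VMmanager performs these steps itself:

```shell
# Initialize the shared block device (multipath device path is illustrative)
pvcreate /dev/mapper/mpatha
# Create a shared VG; lvmlockd/sanlock must be running on all nodes
vgcreate --shared vmstore /dev/mapper/mpatha
# Start the lockspace on each node that will use the VG
vgchange --lockstart vmstore
# Each VM disk then becomes a logical volume inside this VG
```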

Working in a hyperconverged environment

I found nothing in the VMmanager documentation about support for a hyperconverged deployment, although Ceph storage is supported.

However, the documentation indirectly hints that hyperconvergence is not supported:

Resiliency and efficiency

HA-cluster

Fault tolerance is enabled at the cluster level when there are at least three nodes and network storage. Judging by the documentation, under the hood are Corosync and custom components called ha-agent and hawatch.

I tested fault tolerance by turning off one of the hosts – the virtual machines were restarted on another host in under a minute. I also checked HA operation with the management server turned off: it works, HA is fine.

Virtual machine replication

I could not find virtual machine replication functions in this product.

Automatic balancing of virtual machines

Balancing exists and is enabled at the cluster level, after which migration by the balancer can be individually enabled or disabled in each VM’s settings. Unlike the previously tested solutions, there are few settings – I could not find any policies or balancing modes. I tested it by loading one of the two VMs on a node; after 3 minutes it moved to a free node.

System Settings

Role-based access

Only three roles are available: administrator, advanced user, and user. Unfortunately, there is no way to create custom roles – a function that would be useful in practice.

Integration with external directories

Integration with AD/LDAP/FreeIPA can be configured from the interface. As a result, users are synchronized into the system; after synchronization you can log in with a domain account to manage your VMs. I verified this on my test stand with AD.

Integration with mail systems

User notifications can be configured by email or Telegram for certain VM/node events or for errors.

Management Server Backup

Backup of VMs, as well as of the management server configuration, is available in the graphical interface.

For virtual machines, the built-in backup looks incomplete: I could not find where to configure incremental backups, nothing about consistency, and so on. On the other hand, full-featured RuBackup support is announced. In the familiar VMware paradigm, advanced backup functions live in an external system, so I expect the full range of functions from the VMmanager <-> RuBackup combination.

The backup procedure for the management server turned out to be very simple. And, importantly, the recovery procedure is also simple and works properly. To restore, I actually ran three commands:

Monitoring and ease of use of the system

Ease of use of the system

I really liked the graphical interface – simple, convenient and, most importantly, intuitive. The migration wizard suggests where it is best to migrate a VM and shows how many resources each node has now and will have left after migration. Of all the products tested, this interface was the most convenient for me to work with.

System monitoring

In addition to graphs, the interface includes built-in Grafana. The documentation states that ready-made Zabbix templates exist. There is a built-in notification system via email and Telegram.

Let’s sum it up

Here is what I can say about VMmanager.

What I liked:

  • Interface and ease of working with the product. I had a pleasant experience deploying the system and performing all operations in the web interface.

  • The “build-it-ourselves” approach. I did not find a large amount of reused open-source technology in VMmanager. Judging by what I saw, the team is genuinely investing in developing the system.

  • Good documentation. It had some flaws and occasional inconsistencies, but overall I rather liked using it.

  • I also liked that the manufacturer publishes a roadmap on its website, and that I received support simply by taking a trial from the website. I would call this the manufacturer’s “openness” toward its customers.

What I did not like:

  • I ran into “growing pains” during testing. Sometimes it is an incomplete feature set – for example, the problems with Windows images; sometimes “basic” functions are simply not implemented – for example, hot disk expansion.

  • As a VMware user, I am not really accustomed to the approach where the virtualization system is not a monolith but an application running on top of an operating system, although I understand that technically this does not affect anything.

  • Some “big” features are not yet implemented: a distributed switch, a custom role model, and so on.

VMmanager differs from the products tested before, and the differences are fundamental. I think the main reason is the in-house development approach with a minimum of open source. Would I use the system in production? I would first conduct even deeper tests, particularly of performance and stability, and would verify the claimed ability to work with 22,000+ VMs in one installation, 50+ clusters, and 350+ nodes per cluster.

The product’s functionality was enough for me. Is the product promising? Definitely yes, if the pace of development continues. In addition, as far as I understand from the information on the site, VMmanager is now in the process of obtaining an FSTEC certificate under the new requirements for virtualization systems, which means customers will be able to use the solution in systems with special security requirements.

If you have any questions, ask them in the comments, I will try to answer!
