Today the focus is on zVirt. Let’s go over the solution’s functionality, see how it handles each test, weigh its pros and cons, and discuss who this system is suitable for. Go!
Briefly about zVirt
zVirt is a secure virtualization environment from ORION Soft, included in the register of domestic software (registry entry No. 4984 dated December 3, 2018). It is positioned as an alternative to foreign solutions (primarily VMware), since it includes all the functions needed to manage servers and virtual machines effectively.
As you can see, the solution appeared before 2022, when import substitution became mainstream. This is a positive aspect that speaks in favor of greater maturity of the solution.
According to the developer, zVirt implements 95% of the functionality of VMware vSphere Ent+ and vCenter. The website and presentation even have a roadmap of the solution in VMware terms:
The hypervisor layer of zVirt is based on QEMU-KVM, and the management layer on oVirt. But this is not “vanilla” oVirt; the vendor has made a number of its own improvements to the system:
backup built into the graphical interface;
integration with Zabbix;
working with video cards in vGPU (GRID) mode;
log collection (syslog) + integration with SIEM systems;
support for older hardware (from 2010 onward);
live migration of VMs between clusters;
storage load monitoring (IOPS, capacity);
support for two operating systems as the hypervisor base – RedOS and CentOS.
Let’s move on to the fun part – testing the product!
zVirt test results
I will not describe the algorithm and testing steps in detail – all of this is in the testing methodology, which you can refer to at any time.
I divided the testing into several sections:
product installation and configuration;
operations with virtual machines;
fault tolerance and efficiency;
monitoring and ease of use of the system.
Each section contains several checks (some of them are optional).
Briefly about the conditions.
Version 3.2 was used in testing. Version 3.3 of the product is also available today.
Testing was conducted without prior familiarization with the product or any training; all the nuances had to be worked out during installation and configuration. An engineer who has already deployed the system many times may get different results.
Testing took place in nested virtualization mode, so performance parameters were largely not evaluated. The main focus is on functionality.
Product installation and configuration
The manufacturer’s documentation lives on a dedicated portal. It is closed: without an account, you cannot view it. Access is granted once you receive a test or production distribution of the system.
It is presented as a wiki. Unfortunately, there are no step-by-step deployment instructions with a clear sequence. All the necessary information is in the documentation, but it is scattered across separate manuals. In other words, to deploy the product you need to read almost all of the documentation, with the installation guide sitting at the bottom of a long list.
There is also a knowledge base, but it was not useful to me during the installation process. I can say that the documentation is detailed, but it seemed to me that it was intended more for an experienced Linux administrator.
The zVirt 3.2 distribution was used for testing. It comes as a bootable ISO image and does not require a separate OS installation (similar to ESXi).
The installation used the Standalone configuration: a dedicated server for the management engine and three servers as virtualization hosts. However, installing the engine inside a host as a VM is also supported, in Hosted Engine mode.
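Since zVirt inherits the oVirt tooling, the two deployment paths correspond roughly to the following upstream commands – a sketch only, as zVirt’s wrappers and package names may differ; check the vendor documentation:

```shell
# Standalone: configure the engine on a dedicated server
# (interactive wizard: database, PKI, admin password)
engine-setup

# Hosted Engine: deploy the manager as a VM on the first host
# (interactive wizard: storage domain, network, engine VM sizing)
hosted-engine --deploy
```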
The figure below is an excerpt from the documentation describing the supported architectures.
Please note that the management server is a separate virtual machine or server and is not built into the hypervisor like some other solutions.
To organize shared storage, I used a shared NFS resource.
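For reference, preparing such an NFS export follows the usual oVirt requirements – in particular, the export must be owned by UID/GID 36 (vdsm:kvm). A minimal sketch with illustrative paths:

```shell
# On the NFS server: create and export a directory for the data domain
mkdir -p /exports/zvirt-data
chown 36:36 /exports/zvirt-data   # vdsm:kvm, required by oVirt-based systems
chmod 0755 /exports/zvirt-data
echo '/exports/zvirt-data *(rw,sync,no_subtree_check,anonuid=36,anongid=36)' >> /etc/exports
exportfs -ra                      # re-export without restarting the NFS service
```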
Ease of installation
To install, you must obtain a distribution kit from the manufacturer. Downloading requires a login and password for the repository from which the hypervisor images in ISO format are fetched.
In the process, I ran into the fact that after mounting the ISO image, you have to select “Install zvirt” quickly during boot; otherwise “Check ISO and install” starts, which ends in an error.
Installing the distribution on 4 servers (with alternate mounting) took about 2 hours.
Installing and configuring the Engine (control manager) took a total of about 30 minutes:
In my opinion, installation will require confident knowledge of Linux at least at the administrator level.
In total, deploying the configuration – the engine plus 3 hosts plus NFS – from scratch took about 8 hours (studying documentation, preparing infrastructure, installation). Since this was my first time installing zVirt, that figure also includes the time spent reading the documentation and accompanying instructions. Perhaps if the documentation and its navigation were arranged a little differently, the installation would have gone faster.
Installation in connected and air-gapped environments
In this testing, the installation was carried out with Internet access. But zVirt Node is distributed as an ISO image and, apparently, can be deployed in an air-gapped environment. In any case, nothing requiring Internet access came up during installation or operation.
This test checks the ability to manage the virtualization hosts directly. In zVirt, the management server (engine) is the centralized management interface. Configuring and operating the virtualization system without it is partially possible, but you will have to use console utilities: AAA JDBC, hosted-engine, vdsm-client, virsh.
In other words, the VMware philosophy of largely independent ESXi hosts is not present here. If I were running zVirt in production and my management server failed, restoring it would be my first priority.
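A few of those console fallbacks look like this (standard oVirt-family utilities, run on a host or on the engine VM):

```shell
hosted-engine --vm-status      # state of the Hosted Engine VM (Hosted Engine mode)
vdsm-client Host getVMList     # VM IDs known to this host's VDSM daemon
virsh -r list --all            # read-only libvirt view of VMs on the local host
```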
Judging by the documentation, installing the management server in failover mode is also supported. In my opinion, this is a good option for a production system.
Operations with virtual machines
Installing a virtual machine from an ISO image
It took 1 hour to create the VM, and 10 minutes to add an additional disk.
If you have experience with zVirt, creating a VM does not take long. But since this was my first time, I had to figure out where to get a template and how to make one, how to configure the network so the VM correctly received an IP, and how to set up the noVNC console. Without the documentation I could not have done this, but after some reading I figured out all the parameters.
Choosing a processor emulation method
The cluster parameters in the interface include a CPU type setting. I combed the documentation backwards and forwards but still did not understand what it does or where it is used.
The CPU pass-through parameter is available in the VM settings; note, however, that for a VM with pass-through enabled, automatic migration is disabled – it remains available only in manual mode. I assume this is due to how processor instructions are exposed to the guest in pass-through mode.
Ways to create a virtual machine
Installation from ISO is available, but the image must first be downloaded to disk storage. I was unable to upload it there even after installing the specified certificate:
Another option is installation from a library of ready-made images. This allowed me to successfully deploy virtual machines from downloaded images.
Next, I decided to check a more advanced function – preparing a virtual machine via cloud-init when installing from the library. For CentOS the preparation succeeded, but for Ubuntu the VM settings were not applied: neither the password nor the network settings changed.
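For context, a minimal cloud-init user-data payload of the kind the manager passes to the guest might look like this (the hostname, user name and password are purely illustrative):

```shell
# Write a minimal cloud-init user-data file (illustrative values)
cat > user-data <<'EOF'
#cloud-config
hostname: test-vm01
users:
  - name: admin
    lock_passwd: false
    plain_text_passwd: 'ChangeMe123'
    sudo: ALL=(ALL) NOPASSWD:ALL
chpasswd:
  expire: false
EOF
```

When cloud-init fails to apply settings, as in the Ubuntu case above, the guest-side log /var/log/cloud-init.log is usually the first place to look.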
Installation from a local virtual machine image is also available.
So the basics for installing virtual machines are there, but you may have to work with support to get everything fully functional.
Basic virtual machine operations
All basic operations on the VM are available and performed normally. Virtual machines are started and stopped, and the Suspend option is available.
A guest agent is required for graceful shutdown and restart. But this is a standard feature of virtualization systems – you cannot shut down the guest OS correctly without an agent.
Guest Agent installation options
ovirt-guest-agent is used; on RHEL 8-family and later operating systems it is the standard qemu-guest-agent instead. If the ISO or template does not include an agent, you will have to install it manually.
Guest agents enable graceful shutdown and reboot of virtual machines and report resource usage and IP addresses to the manager. They also provide additional features:
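Installing the agent manually inside the guest is a one-liner on modern RHEL-family systems (package names per the upstream distributions):

```shell
# Inside a RHEL 8+ family guest:
dnf install -y qemu-guest-agent
systemctl enable --now qemu-guest-agent

# Inside an older (RHEL 7-era) guest, the oVirt agent is used instead:
# yum install -y ovirt-guest-agent
# systemctl enable --now ovirt-guest-agent
```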
Cloning a virtual machine
Cloning is performed without shutting down the source VM; the clone starts normally.
I couldn’t find a function to create a linked clone from a virtual machine.
But, judging by the documentation, the option to create a template and then create a linked clone from template is available. VDI systems integrated with zVirt also offer support for linked clones of virtual machines.
A snapshot without memory is created in about 20 seconds without shutting down the VM. A snapshot with memory required shutting down the VM and took about 1 minute to create.
Live migration worked even with snapshots present. You cannot restore the current VM in place from a snapshot; you can only create a new VM or a clone from the snapshot.
For snapshots to work correctly, a guest agent must be installed. It seemed to me that the snapshot functionality and its usage scenarios in the current version will be unfamiliar to the “typical” VMware user.
Virtual machine image
An image can be created from a VM snapshot, and a VM can be deployed from a template using cloud-init. The VM starts normally and the cloud-init settings are applied correctly.
Grouping virtual machines
There is “similarity tag” functionality. These are not really groups but labels that can be assigned to a VM; however, you cannot group VMs by them in the interface.
Live virtual machine migration
It was carried out within one cluster. Migrating the VM and its disk separately works as expected. The virtual machine did not shut down, and all data was transferred “live”.
Hot resource changes
CPU resources can be increased while the VM is running. RAM can be changed within the configured maximum, in increments of 1024 MB. Hot-expanding an existing disk and adding a new one also worked.
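zVirt does all of this from the GUI, but for reference the same hot operations at the libvirt level look roughly like this (`myvm` and the device name are placeholders):

```shell
virsh setvcpus myvm 4 --live        # hot-add vCPUs up to the defined maximum
virsh setmem myvm 8G --live         # resize memory within the configured maximum
virsh blockresize myvm vda 40G      # grow a virtual disk online
```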
Ways to connect to a virtual machine
Access to the VM via VNC/SPICE is available, but to reach it via noVNC from the browser I had to install the CA certificate – it does not work without it.
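The CA certificate can be fetched from the standard oVirt PKI endpoint and then imported into the browser’s trust store (substitute your engine’s FQDN for the placeholder):

```shell
# Download the engine CA certificate (engine FQDN is a placeholder)
curl -k -o zvirt-ca.pem \
  'https://engine.example.local/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'
# Then import zvirt-ca.pem into the browser / OS trust store
```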
Various types of network interface emulation
When creating a network interface, e1000, rtl8139 and virtio emulation is available. Support for different emulation types makes it easier to migrate virtual machines from VMware, whose guest operating systems may lack virtio drivers.
VLAN on nodes in Standard vSwitch format
A virtual machine can be placed on or moved to the target VLAN. In effect, a full analogue of the VMware Standard vSwitch is available.
Line aggregation in Standard vSwitch format
The operation of network adapters in aggregation and balancing modes is supported. All standard modes are available:
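In zVirt, bonds are normally assembled from the host network setup dialog, but what the engine configures on the host corresponds to an ordinary Linux bond. A hedged nmcli sketch with placeholder interface names (doing this by hand outside the engine may conflict with its management, so treat it as illustration only):

```shell
# Create an LACP (802.3ad) bond and enslave two NICs (names are placeholders)
nmcli con add type bond ifname bond0 con-name bond0 \
  bond.options "mode=802.3ad,miimon=100"
nmcli con add type ethernet ifname ens1f0 master bond0
nmcli con add type ethernet ifname ens1f1 master bond0
```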
Network settings in distributed switch format (Distributed vSwitch)
It is possible to centrally configure VLANs on all virtualization hosts. In essence, this is a kind of analogue of Distributed vSwitch. In any case, it achieves the same goals of simplifying administration as Distributed vSwitch.
Another option is Open vSwitch (OVS). However, the function is experimental: it is not recommended for production use and technical support for it is not provided, so it was not tested.
Network fabric support
It is potentially possible to implement a fabric using OVS, but given the caveat above, this was not tested. For a production environment, I would rather build the fabric on network hardware than on virtualization hosts.
Support for various types of disk emulation for a virtual machine
When creating a new disk, Virtio/IDE/SATA is supported. Similar to network interfaces, the ability to select the emulation type is a useful option.
“Thin” and “thick” disks
Thin and thick disks are available on a variety of storage types; allocation is done from the graphical interface as standard.
I was able to create virtual machines with both “thin” and “thick” disks on the same storage – the function definitely works.
Working with SAN storages
There is support for SAN storage systems (scsi/iscsi/FC/FCoE). The connection is made from the graphical interface; settings at the operating system level of the virtualization hosts are not required.
Working in a hyperconverged environment
zVirt supports hyperconverged deployment with Gluster. Instead of connecting zVirt to external Gluster storage, you can combine zVirt and Gluster in one infrastructure. This scenario has not been tested since it requires a “reassembly” of the stand.
Resiliency and efficiency
The fault tolerance policy is configured at the cluster level and requires at least two hosts with access to the data domain hosting the virtual machine. Three modes are available:
Fault tolerance can optionally be enabled per virtual machine. I tested it with the “Minimum downtime” policy by powering off one of the hosts – the virtual machines were restarted on another host quite quickly, considering that NFS storage was used. HA definitely works.
Virtual machine replication
I was unable to find virtual machine replication functionality.
Automatic balancing of virtual machines
The manager provides five default scheduling policies:
cluster maintenance (Cluster_Maintenance);
even distribution (Evenly_Distributed);
not assigned (None);
energy saving (Power_Saving);
even distribution of VMs (VM_Evenly_Distributed).
These policies are selected in the cluster settings. It is also possible to create custom policies in the system settings:
The dynamic balancing function (analogous to VMware DRS) is implemented in the system. Flexible adjustment of dynamic balancing is also possible.
Ready-made roles are available, and you can also create your own roles and users. Granular permission configuration lets you give each user exactly the set of permissions necessary and sufficient for the tasks within his role.
Integration with external directories
Configuring an external directory is not available from the graphical interface; it is done from the CLI on the management server after installing the “ovirt-engine-extension-aaa-ldap” package. LDAP/LDAPS directory types supported out of the box:
389ds RFC-2307 Schema
IBM Security Directory Server
IBM Security Directory Server RFC-2307 Schema
Novell eDirectory RFC-2307 Schema
OpenLDAP RFC-2307 Schema
OpenLDAP Standard Schema
Oracle Unified Directory RFC-2307 Schema
RFC-2307 Schema (Generic)
RHDS RFC-2307 Schema
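The setup itself goes through the interactive wizard shipped alongside the package mentioned above (upstream oVirt tooling; the prompt flow may differ slightly in zVirt):

```shell
# On the management server: install the setup helper and run the wizard
yum install -y ovirt-engine-extension-aaa-ldap-setup
ovirt-engine-extension-aaa-ldap-setup   # pick the directory type when prompted (IPA, OpenLDAP, ...)
systemctl restart ovirt-engine          # the new login profile appears after restart
```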
I tested it with an IPA directory. After configuration a new login profile appeared, but setting up groups and roles for users was not easy, since the instructions linked from the documentation turned out to be unavailable:
Integration with mail systems
It is possible to configure email notifications to users about certain events in the virtualization environment. To do this, you configure the notification agent from the CLI, pointing it at an SMTP mail server.
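In upstream oVirt terms this means enabling the notifier service with an SMTP override; a sketch with placeholder hostnames (zVirt file paths should match, but verify against the vendor docs):

```shell
# Point the notifier at an SMTP relay (values are placeholders)
cat > /etc/ovirt-engine/notifier/notifier.conf.d/99-smtp.conf <<'EOF'
MAIL_SERVER=smtp.example.local
MAIL_PORT=25
MAIL_FROM=zvirt@example.local
EOF
systemctl enable --now ovirt-engine-notifier
```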
Management Server Backup
A backup of the management server configuration is available in the GUI: you can create a backup, set up a schedule, and attach SSH storage, all of which is described in great detail in the documentation:
However, for some reason the restore procedure is not covered in that section. Further searching through the documentation led to the following instructions:
Under the cut there is a lot of text on setting the parameters needed to restore the system, configuring the host, rebooting and disabling various services – in total, 16 steps to restore from a backup to the previous state.
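At the core of both the GUI backup and the 16-step restore is the standard engine-backup utility from oVirt; in rough outline:

```shell
# Create a full backup of the engine configuration and databases
engine-backup --mode=backup --scope=all \
  --file=engine-backup.tar.gz --log=engine-backup.log

# Restore on a freshly installed management server, then re-run setup
engine-backup --mode=restore --file=engine-backup.tar.gz \
  --log=engine-restore.log --provision-all-databases --restore-permissions
engine-setup
```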
In my opinion, the procedure is non-trivial and is not always successful, judging by the knowledge base of known errors. There is a possibility that recovery may take longer than planned.
Monitoring and ease of use of the system
Ease of use of the system
In general, working with virtualization through a graphical interface is always more convenient than without one. At first glance the zVirt interface looks simple, but while working in it I could not always quickly find sections I had visited before. Most likely this is due to my limited experience with zVirt. In such cases I resorted to the documentation, but even there it was often devilishly hard to find what I needed. It is unclear why logically related articles are scattered across different sections.
Managing virtual machines is quite convenient – there is search and sorting – but the lack of customizable filters and of a “select all” option will likely make working with a large list difficult.
I really liked the flexibility of setting up notifications for various events in the system, errors, tasks with the ability to send both by mail and to the notification block in the interface.
Let’s sum it up
So, let me try to summarize my experience with zVirt.
What I liked:
Installation from a single distribution kit, with no need to install anything separately. Works in an air-gapped environment.
High level of customization and a large number of settings.
A wide range of functions – the solution covers quite a lot of capabilities. I would agree with the vendor’s claim that “95% of the functionality of VMware vSphere Ent+ and vCenter is implemented.”
Self-service portal. It is simple and informative. A user (with the appropriate rights) can change VM parameters – CPU, RAM, disk size, network adapter settings and the cluster network – reinstall the OS from ISO, and change the OS boot order when there are several disks. Basic VM operations are there: power on, shutdown, reboot, stop, console access.
What I didn’t like:
Very voluminous documentation with inconvenient navigation, which you will have to study starting from the description and annotation. zVirt offers a lot of settings, so it is highly advisable to understand which parameters do what.
Inconvenient interface. It looks simple at first glance, but the menu has a lot of levels and items.
Quite a high entry barrier. To start using zVirt, you need solid Linux knowledge or a specialist with such skills. You might manage the installation and configuration without it in a few weeks, but only a strong Linux administrator can support zVirt day to day.
There are many non-obvious points – network settings, for example. For a long time I could not understand how they work: at first my VMs were created without an IP address.
What is my conclusion? zVirt is a solution for classic server virtualization, and it is definitely suitable for those replacing VMware. It is no accident that the vendor even translated the product roadmap into VMware terms.
My formula for success with zVirt: Linux knowledge at the RHCSA or LFCS level + zVirt training courses + a couple of weeks of lab work + technical support from the vendor. With that in place, the system can be operated successfully, including in VMware import-substitution scenarios.
If you have any questions, ask them in the comments, I will try to answer!