What's new in zVirt 4.2 release?

The Infosystems Jet team is on the line. We are constantly testing software and hardware in our Jet RuLab laboratory to offer customers only proven solutions and to help vendors improve their products.

We recently got our hands on a fresh version of the zVirt virtualization platform, 4.2, from the Russian developer Orion soft. The product team has just announced the new release, and we have already tested its main features. All the details are under the cut.

oVirt and the previous version of zVirt

As the vendor states, zVirt includes all the necessary functionality for efficient management of servers and virtual machines, catching up with the solutions of leading international companies.

The solution is based on the open-source platform oVirt. The developers took it as a foundation and built their own product, zVirt, which they are actively developing. The previous version, 4.1, stood out, for example, with:

  • SDN and micro-segmentation management from a graphical interface;

  • log collection (syslog) and integration with SIEM systems;

  • manual and automatic (scheduled) backup of the management server database;

  • a virtual infrastructure diagram;

  • virtual infrastructure status reports;

  • VM replication at the hypervisor level and automation of DR plans;

  • conversion of VMs from VMware to zVirt.

In the last release, Orion soft focused on three main improvements:

– Replication and disaster recovery

This is the first version of the product with support for replication and disaster recovery. Replication is implemented only at the level of virtual machines over an Ethernet network, similar to VMware vSphere Replication. Support for working with replication at the storage system level is not yet available – we hope that the developers will add it in future releases.

Software VM replication and disaster recovery are built on the following components: a controller, which is responsible for the VM replication process and recovery plans, and agents:

  • a sending agent that monitors the status of the primary site, the availability of the VM, and is responsible for the transfer of replicated data;

  • a receiving agent located on a backup site, responsible for creating a copy of the VM from the primary site.

The listed components are implemented as ready-made virtual machine images, and management of replication and recovery plans is available from a single graphical web interface.
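The vendor does not disclose the internals of these components, but the division of responsibilities can be pictured with a purely conceptual Python sketch. Every name in it is made up for illustration and has nothing to do with the actual zVirt code:

```python
# Conceptual model only: illustrates the controller/agent roles described above.
class SendingAgent:
    """Primary site: watches the source VM and ships replicated data."""
    def __init__(self, vm_name):
        self.vm_name = vm_name

    def vm_is_available(self):
        return True  # placeholder for a real availability check

    def collect_replica_data(self):
        return [b"changed disk blocks"]  # placeholder payload


class ReceivingAgent:
    """Backup site: builds and updates the copy of the VM."""
    def apply(self, payload):
        print(f"replica updated with {len(payload)} batch(es) of data")


class ReplicationController:
    """Drives the replication cycle and holds the recovery plans."""
    def __init__(self, sender, receiver):
        self.sender = sender
        self.receiver = receiver

    def run_cycle(self):
        if self.sender.vm_is_available():
            self.receiver.apply(self.sender.collect_replica_data())


if __name__ == "__main__":
    ReplicationController(SendingAgent("vm01"), ReceivingAgent()).run_cycle()
```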

– A converter that migrates VMs from the VMware vSphere virtualization system to zVirt

The migration tool is implemented architecturally in the same way as replication: the solution is headed by a controller and a pair of agents. The controller is deployed in the zVirt virtualization environment and ensures the migration process, as well as the preparation and launch of migrated virtual machines.

– SDN network management. The developers have made several changes to this functionality:

  • the ability to manage VM IP addressing;

  • fault tolerance of the virtual router;

  • saving SDN configuration as a separate file when creating a zVirt backup;

  • monitoring the status of the virtual router via the web interface.

zVirt 4.2: What kind of beast is this?

In the new version, the vendor released about 30 updates. In this article, we will go through the top ones, among which we highlighted:

  • Keycloak support;

  • Hosted Engine VM duplication mechanism;

  • live migration between hosts and clusters;

  • consolidated snapshots from all storage domains.

Keycloak Support

In the new release, the vendor focused on expanding user account management, covering authorization and authentication through the interface. Now, when installing or updating the system, in addition to the standard AAA JDBC account utility, the administrator can use Keycloak as an identity provider. Keycloak can act either as an external service that already exists in the infrastructure or as an integrated service.

In addition to the basic functionality, the developers have added:

  • Keycloak configuration backup;

  • account lockout options and login notifications via the interface;

  • notifications about the last successful/unsuccessful login when authorizing on the administrator portal;

  • automatic blocking of user accounts that have not been used for a certain period of time.

In our testing, we used the integrated Keycloak service. After installation and basic configuration of the service, the following section appeared on the main screen:

Let's prepare an internal Keycloak user and try to add it to the Hosted Engine (HE). At the first stage, we create a user group.

Then we create a test user user01 and set a password.

Set the “Temporary” parameter to “On”. This is necessary so that the user sets their own unique password the first time they log in to the system.

Once a user is created, it needs to be added on the Hosted Engine management server side.

We add the user, assign the necessary rights to it, and then use this account to connect to the HE management server. The connection succeeds; no problems were found.
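For reference, the same group and user can also be created programmatically through Keycloak's Admin REST API. Below is a minimal sketch using the python-keycloak library; the server URL, realm name and credentials are assumptions for illustration, so substitute the values from your own installation:

```python
from keycloak import KeycloakAdmin

keycloak_admin = KeycloakAdmin(
    server_url="https://engine.demo.local/auth/",  # assumed Keycloak URL
    username="admin",
    password="***",
    realm_name="zvirt",                            # assumed realm name
    verify=False,                                  # lab only; verify certificates in production
)

# Create the group and the test user with a temporary password,
# so the user is forced to set their own password at first login.
keycloak_admin.create_group({"name": "zvirt-users"}, skip_exists=True)
group = keycloak_admin.get_group_by_path("/zvirt-users")

user_id = keycloak_admin.create_user({
    "username": "user01",
    "enabled": True,
    "credentials": [{"type": "password", "value": "ChangeMe_123", "temporary": True}],
})
keycloak_admin.group_user_add(user_id, group["id"])
```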

Besides this, there are additional options. In the screenshot below we see a whole list:

These options are user-specific and can be set as default for all new users.

Keycloak also has password policies. Here are a few screenshots showing how the password policy can be managed:

Keycloak makes it possible to use a second authentication factor, which greatly strengthens the information security policy.
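For those who prefer automation over the web console, the same policies can be pushed through Keycloak's Admin REST API. A minimal sketch with python-keycloak, again with an assumed URL, realm name and credentials; the attribute names are standard Keycloak realm settings:

```python
from keycloak import KeycloakAdmin

keycloak_admin = KeycloakAdmin(server_url="https://engine.demo.local/auth/",
                               username="admin", password="***",
                               realm_name="zvirt", verify=False)  # same assumed values as above

# Tighten the password policy and require time-based one-time passwords (TOTP),
# which is what the authenticator apps listed below generate.
keycloak_admin.update_realm("zvirt", payload={
    "passwordPolicy": "length(12) and digits(1) and upperCase(1) and specialChars(1)",
    "otpPolicyType": "totp",
    "otpPolicyAlgorithm": "HmacSHA1",
    "otpPolicyDigits": 6,
    "otpPolicyPeriod": 30,
})
```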

In addition, the vendor has provided compatibility with such OTP generation applications as:

  • Google Authenticator;

  • Microsoft Authenticator;

  • Ya.Key;

  • FreeOTP;

  • Indeed;

  • Multifactor.

Now let's see how it looks in the product. The OTP policy settings look like this:

An interesting feature that immediately stands out is self-registration of users in the system. To enable it, activate the corresponding option in the settings:

As a result of enabling this option, a “Registration” link appears on the portal:

The registration form itself looks quite familiar:

The next important feature is integration with directory services.

The LDAP storage provider allows you to configure integration with directory services such as:

  • FreeIPA;

  • SambaDC;

  • Astra ALD Pro;

  • RedADM;

  • Active Directory.

List of providers that are available for connection via LDAP protocol:

To connect to the main directory services, we use connectors:

  • for FreeIPA – Red Hat Directory Server;

  • for SambaDC – Active Directory;

  • for Astra ALD Pro – Other;

  • for RedADM – Active Directory;

  • for Active Directory – Active Directory.

As part of our testing, we connected the Active Directory domain. We add it to the “User Federation”:
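The same federation entry can also be created without the web console, through Keycloak's Admin REST API. Here is a rough sketch with python-keycloak; the domain name, DNs and service account are made up for illustration:

```python
from keycloak import KeycloakAdmin

keycloak_admin = KeycloakAdmin(server_url="https://engine.demo.local/auth/",
                               username="admin", password="***",
                               realm_name="zvirt", verify=False)  # same assumed values as above

# LDAP user-storage component pointing at a hypothetical AD domain controller.
keycloak_admin.create_component({
    "name": "demo-ad",
    "providerId": "ldap",
    "providerType": "org.keycloak.storage.UserStorageProvider",
    "config": {
        "vendor": ["ad"],                                   # the "Active Directory" connector
        "connectionUrl": ["ldaps://dc01.demo.local"],
        "usersDn": ["CN=Users,DC=demo,DC=local"],
        "bindDn": ["CN=svc-keycloak,CN=Users,DC=demo,DC=local"],
        "bindCredential": ["***"],
        "usernameLDAPAttribute": ["sAMAccountName"],
        "rdnLDAPAttribute": ["cn"],
        "uuidLDAPAttribute": ["objectGUID"],
        "userObjectClasses": ["person, organizationalPerson, user"],
        "editMode": ["READ_ONLY"],
    },
})
```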

After that, we create a domain user ldapuser01 in Keycloak.

Next, we successfully migrate the user from Keycloak to the HE management server. It looks like this:

We check the obtained result on the main page of the server:

We are redirected to the welcome page with the authorized domain user ldapuser01:

That's probably all for Keycloak; let's move on to the next point.

Hosted Engine VM Duplication Mechanism

Orion soft has developed a mechanism that ensures fault tolerance of the Hosted Engine management server. It allows you to deploy a second VM of the Hosted Engine management server and transfer all settings from the original to it if the main management server fails. The two management servers operate in Active-Passive mode. This approach ensures continuity of operation and minimizes the downtime during which the configuration of the virtualization environment cannot be managed.

Author's note:

The root of the problem lies in the progenitor, the open-source product oVirt, where management of the virtualization platform is entirely tied to the Hosted Engine management server. If the management server is down, the ability to manage and monitor the entire virtualization system is lost. Graphical management of individual hypervisors, similar to VMware Host Client, is not provided either. And any changes made outside the management server while it is down lead, once the Hosted Engine is restored to working order, to problems with the consistency of its internal database.

What does it look like? We prepare the virtualization nodes for the HE HA (Hosted Engine High Availability) functionality by installing an additional package from the repository. Then a second virtual machine, Hosted Engine2 Backup, is installed with an empty configuration. An automated mechanism creates a backup copy of the main HE server's configuration and sends it to the Hosted Engine2 Backup VM. The process runs every 10 minutes, and the latest copy is stored.

If the primary management server is not working (network availability is checked using Keepalived), special mechanisms initiate recovery from a backup copy that was copied from the primary management server to the Hosted Engine2 Backup VM.

After restoring the VM configuration, Hosted Engine2 Backup becomes the primary management server.

When the primary Hosted Engine Master server instance is restored to health, the switch back to the primary management server is initiated. This happens automatically, but if desired, the administrator can perform it manually from the CLI.

It is important to keep in mind that during the process of moving back to the main management server, it is better to keep activity (making changes to platform configurations, for example) to a minimum.

Please remember: the backup VM of the management server is not intended for full-fledged operation, it only serves as a temporary solution while the main server is unavailable.
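To make the described logic easier to follow, here is a purely conceptual Python model of the failover loop. It is not the vendor's code, and every function in it is an invented placeholder:

```python
# Conceptual model of the HE HA failover/failback cycle described above.
import subprocess
import time

CHECK_INTERVAL_S = 60      # how often the primary's availability is probed
BACKUP_INTERVAL_S = 600    # the configuration copy is refreshed every 10 minutes


def primary_reachable(fqdn: str) -> bool:
    """Stand-in for the Keepalived availability check of the primary HE."""
    return subprocess.call(["ping", "-c", "1", "-W", "2", fqdn],
                           stdout=subprocess.DEVNULL) == 0


def restore_latest_backup():
    print("restoring HE configuration from the latest 10-minute backup copy")


def promote_backup_vm():
    print("Hosted Engine2 Backup is now the active management server")


def failback_to_primary():
    print("primary HE is healthy again, switching management back")


active_is_backup = False
while True:
    if primary_reachable("he.demo.local"):      # hypothetical FQDN
        if active_is_backup:
            failback_to_primary()
            active_is_backup = False
    else:
        if not active_is_backup:
            restore_latest_backup()
            promote_backup_vm()
            active_is_backup = True
    time.sleep(CHECK_INTERVAL_S)
```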

Now let's move on to illustration:)

The process of deploying the HA mechanism for the Hosted Engine management server is as follows:

Step 1. Launch the installation of the required HE-HA component:

Step 2. Prepare the HE image on the second zVirt node:

Step 3. Fill in the configuration file required to install the functionality:

Step 4. Launch the HE HA installation process:

After the installation is complete, we get the second Hosted Engine management server in Backup mode. This is what the finished VM looks like with the FQDN of the main server:

Switching to the backup copy of the management server takes from 5 to 10 minutes. Switching back works on the same principle: as soon as the main server is available again, the switchover process starts automatically.

For deeper tuning of this mechanism, corresponding configuration files are provided, but there are no detailed descriptions in the documentation yet; we are waiting for updates.

Live migration of VMs between clusters

In the new version, we can migrate VMs between nodes of different clusters within one logical data center via the web interface. The migration mechanism includes checking the node for available RAM, processor cores, processor compatibility, and network settings.

It looks like this. The initial location of the VM centos01 is Cluster01 and the node is zv4202.demo.local:

Create a migration task. The vendor has reworked the migration menu. It is now possible to select another cluster from the interface. The menu now looks like this:

The migration process is still monitored in the general menu “Tasks”. We follow the process of moving to the neighboring cluster:

As a result, we get a VM that has moved to a node in a neighboring cluster:

It is important to note that a common logical data center for clusters implies shared storage not only between the nodes of one cluster but also between the nodes of different clusters. Thus, all nodes have access to the same storage where the VM being migrated resides. Of course, we would like to see cross-cluster migration without shared storage, but this feature is already a step forward compared to previous versions of the product.
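For completeness, the same operation can be scripted against the REST API that zVirt inherits from oVirt. Below is a hedged sketch with the ovirtsdk4 Python SDK; the engine URL and credentials are assumptions, and whether the standard migrate action accepts a target cluster in your build should be checked against the zVirt documentation:

```python
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url="https://engine.demo.local/ovirt-engine/api",  # hypothetical engine URL
    username="admin@internal",
    password="***",
    insecure=True,  # skip certificate verification in a lab; don't do this in production
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search="name=centos01")[0]
vm_service = vms_service.vm_service(vm.id)

# Ask the engine to live-migrate the VM into another cluster of the same data center.
vm_service.migrate(cluster=types.Cluster(name="Cluster02"))

connection.close()
```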

Consolidated snapshots

The problem with large infrastructures is that with a large number of connected storage domains (storage systems in zVirt terminology), it becomes difficult to keep track of the VM snapshots that have been created. To solve this problem, a Snapshots subsection was added (Storage → Snapshots). When we open it, the interface displays a list of virtual machine snapshots with the ability to filter the results by the desired parameters.

How does this look in the GUI? Let's take two VMs with prepared snapshots.

First VM:

Second VM:

Fig. Updated main menu


When you go to the desired section, you can access information on all VM snapshots with the ability to filter the output by the desired parameters:

It would seem such a small feature, yet it brings so many benefits.
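A similar consolidated view can also be pulled through the API. A minimal sketch with the ovirtsdk4 Python SDK (the engine URL and credentials are assumptions): it walks all VMs and prints every snapshot, which is roughly the data the new GUI section shows:

```python
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url="https://engine.demo.local/ovirt-engine/api",  # hypothetical engine URL
    username="admin@internal",
    password="***",
    insecure=True,
)

vms_service = connection.system_service().vms_service()
for vm in vms_service.list():
    snaps_service = vms_service.vm_service(vm.id).snapshots_service()
    for snap in snaps_service.list():
        # snapshot_type distinguishes the "Active VM" pseudo-snapshot from real ones
        print(f"{vm.name}: {snap.description} "
              f"({snap.snapshot_type}, created {snap.date})")

connection.close()
```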

A little about other changes in the release

Among the interesting features of the release, we would like to note support for the Gluster software-defined storage system. Although the open-source oVirt already supports GlusterFS, in release 4.2 Orion soft worked on verifying the compatibility of its solution with this SDS.

That said, in our experience this setup already worked in zVirt version 4. We tested a small installation with a 4-node cluster, where Gluster storage was assembled on 3 nodes and the 4th host acted as a compute node.

A minimum of three nodes is required to deploy a fully functional SDS solution, and scaling to 6, 9, or 12 nodes is also supported.

To achieve the best performance of the disk subsystem, we recommend using SSDs and dedicated network interfaces with a speed of at least 10 Gbps. Hybrid configurations combining SSD and HDD disks are also supported. In the case of HDDs, it is better to configure a small SSD as an LVM caching volume.

This implementation of SDS includes support for Virtual Data Optimizer (VDO). This Linux module can compress and deduplicate data at the block device level. It can save disk space, which is especially useful in environments with large amounts of storage.

Of course, GlusterFS is not a cutting-edge technology: its performance is modest, and it is not intended for large, heavily loaded information systems. But if purchasing an external storage system is a problem and the IT infrastructure is not that large (up to 12 nodes), GlusterFS will suit you perfectly. In addition, the solution works in a hyperconverged configuration, combining the compute role and the storage node role on the same physical nodes. It is also worth noting that a ready-made cluster can be launched on just 3 physical servers.

To operate a heavily loaded system, distributed storage like Ceph is required, but there is no native support for it yet.

In conclusion

The current version has moved even further away from its open-source predecessor. Even though we had little time for testing, the new version leaves a pleasant impression. The developers have worked hard on improving the functionality and, in places, on thinking through the details. The product keeps evolving, which is certainly good to see. We wish the vendor good luck; keep up the pace!
