Slow virtualization on x86. A little attempt to figure it out. Part 3: Hyper-V

For the league of laziness: some rambling about things nobody really needs, because for normal people all applications have long since been running in the cloud on microservices anyway, and they work great.

Slow virtualization on x86. A little attempt to figure it out. Part 1: General overview
Slow virtualization on x86. A little attempt to figure it out. Part 2: ESXi by Broadcom

Part 3. What follows from all this, and how does the scheduler work in Hyper-V for a normal person? There will be nothing new here for anyone who has opened the documentation on the root partition.

While ESXi by Broadcom is a separate operating system with its own vsish and esxtop, which runs processes (the most important being vpxa and hostd) and worlds, MS Hyper-V at first glance looks like a "regular operating system". But MS wouldn't be MS if they hadn't gone their own way here too, creating a hypervisor that is neither quite the first type nor the second. Although AWS writes that it is a "type 1 hypervisor".
As MS themselves write in the Architecture section – Hyper-V architecture: "Hyper-V features a Type 1 hypervisor-based architecture."

MS Hyper-V architecture.
To begin with, somewhere at the very beginning the hypervisor itself – Hyper-V – is loaded. The hypervisor creates a root (or parent) partition where the host OS boots, and that OS manages the remaining virtual machines using:
Virtualization Service Provider (VSP) – runs in the root partition
Virtualization Service Consumer (VSC) – runs in the other (child) partitions
VMBus – the communication bus between partitions
Hypercall – the interface for interaction between a guest OS and the hypervisor.
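For illustration, a minimal sketch (PowerShell; the WMI property is documented and available everywhere, while the Hyper-V cmdlet only works on a host with the role installed) of checking from inside a partition that a hypervisor is present at all:

```powershell
# Documented WMI property: True when the OS runs under a hypervisor
# (in a child partition, or in the root partition with Hyper-V enabled).
(Get-CimInstance -ClassName Win32_ComputerSystem).HypervisorPresent

# On the root partition, the Hyper-V module is available as well:
Get-VMHost | Select-Object Name, LogicalProcessorCount
```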

Scheduler (CPU) in MS Hyper-V
The scheduler can be one of three, selectable (a sketch of how to check and switch the active type follows this list):
The classic scheduler. Used by default up to and including Windows Server 2016.
The core scheduler. Used by default starting with Windows Server 2019. Provides stronger isolation (among other things, an SMT core is never shared between different VMs), and more.
The root scheduler. Used by default on client Windows starting with Windows 10 version 1803.
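The hypervisor logs the active scheduler type once at boot, and it can be switched via BCD. A minimal sketch (provider, event ID, and type values are documented in Managing Hyper-V hypervisor scheduler types: 0x1 = classic without SMT, 0x2 = classic, 0x3 = core, 0x4 = root):

```powershell
# Read the active hypervisor scheduler type from the System event log.
Get-WinEvent -FilterHashtable @{
    ProviderName = 'Microsoft-Windows-Hyper-V-Hypervisor'
    Id           = 2
} -MaxEvents 1 | Format-List TimeCreated, Message

# Switch the scheduler type; takes effect after a reboot:
# bcdedit /set hypervisorschedulertype core
```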

Relationship between the scheduler and the host operating system.
Since the main (host) operating system lives in a separate partition, you can turn the knobs called Minroot, separating "what the OS itself does on the cores" from "what the virtual machines do there". The setup is described in the section Hyper-V Host CPU Resource Management.
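A rough sanity check of a Minroot setup, as a sketch (an assumption worth verifying on your own bench: whether Win32_ComputerSystem actually reflects the Minroot restriction on the root OS):

```powershell
# Compare the logical processors visible to the root OS with what
# Hyper-V reports for the physical host.
$os = Get-CimInstance -ClassName Win32_ComputerSystem
$hv = Get-VMHost
'Root OS sees     : {0} LPs' -f $os.NumberOfLogicalProcessors
'Hyper-V host has : {0} LPs' -f $hv.LogicalProcessorCount
```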

Relationship with the timer.
The system uses four different timers at once, of which the main, hard-wired one runs at 100 ns, as described in the section Hypervisor Specification\Timers.
This results in some pain for real-time communication (RTC) applications, mostly voice: you end up loosening Minroot and fighting (and losing) an unequal battle for a 10 ns timer instead of 100 ns, which is described nowhere. And hidden in the depths is the HPET – High Precision Event Timer – which makes things very, very painful, for example in a cloud PBX. Those who have had to eat this know.
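A quick way to see that 100 ns granularity from inside a guest, as a sketch (.NET's Stopwatch is backed by QueryPerformanceCounter, and under Hyper-V the reported frequency is typically 10 MHz, i.e. one tick per 100 ns, courtesy of the hypervisor reference time):

```powershell
# QueryPerformanceCounter frequency as seen through .NET's Stopwatch.
# 10,000,000 Hz means one tick = 100 ns.
[System.Diagnostics.Stopwatch]::Frequency
[System.Diagnostics.Stopwatch]::IsHighResolution
```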

Allocation of resources.
Done through the allocation of CPU groups – Virtual Machine Resource Controls. However, this feature can only be controlled through the Hyper-V Host Compute Service; the link to the Microsoft Virtualization team's blog in the Managing CPU Groups article leads nowhere instead of to a blog.
ESXi has a relatively similar feature, done differently – Resource Pools.

In addition, you can separately tweak per-VM settings – the VM cap. There is a separate section about this: Virtual Machine Resource Controls\Setting CPU Caps on Individual VMs.
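For everyday per-VM limits there are also the plain PowerShell knobs on Set-VMProcessor, as a sketch (the VM name here is hypothetical, and these are the classic hypervisor controls, not the HCS-based caps from the article above):

```powershell
# Reserve 10% of vCPU time, cap at 50%, and double the relative weight
# against neighbors (defaults: Reserve 0, Maximum 100, RelativeWeight 100).
Set-VMProcessor -VMName 'voice-pbx-01' -Reserve 10 -Maximum 50 -RelativeWeight 200

# Verify:
Get-VMProcessor -VMName 'voice-pbx-01' |
    Select-Object VMName, Count, Reserve, Maximum, RelativeWeight
```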
As for the rest, it must be said that the Classes of Service and per-VM settings described in the article Managing CPU resources for Hyper-V virtual machines were written by beings clearly not from our galaxy, whose brains respond to rays different from ours. Perhaps this is more convenient for Azure and its local variants, but it is very poorly described.

All of the above leads to the need to read everything about timers, such as:
Windows system timer granularity
Clocks, Timers and Virtualization
Timekeeping Virtualization for X86-Based Architectures
The remaining settings are scattered across different places in the documentation and, on the whole, are described almost nowhere.
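To at least see where you stand, you can query the current system timer resolution, as a sketch (NtQueryTimerResolution is a well-known but undocumented ntdll call; all values are in 100 ns units):

```powershell
# P/Invoke the (undocumented) ntdll call that reports system timer resolution.
Add-Type -TypeDefinition @"
using System;
using System.Runtime.InteropServices;
public static class TimerRes {
    [DllImport("ntdll.dll")]
    public static extern int NtQueryTimerResolution(
        out uint minimum, out uint maximum, out uint current);
}
"@
$min = [uint32]0; $max = [uint32]0; $cur = [uint32]0
[void][TimerRes]::NtQueryTimerResolution([ref]$min, [ref]$max, [ref]$cur)
'Timer resolution (100 ns units): min {0}, max {1}, current {2}' -f $min, $max, $cur
```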

Networks.
Within the scope of what was meant to be a short article, I no longer want to dive into the depths of Virtual Machine Multiple Queues (VMMQ), Dynamic VMMQ, and the Linux Network Performance problem.
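Just to leave a thread to pull on, a sketch of where those knobs live (the VM name is hypothetical; actual VMMQ support depends on the physical NIC and its driver, Windows Server 2019+):

```powershell
# Enable VMMQ on a VM's synthetic NIC.
Set-VMNetworkAdapter -VMName 'voice-pbx-01' -VmmqEnabled $true -VmmqQueuePairs 8

# Host-side view of VMQ across the physical adapters:
Get-NetAdapterVmq
```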

Total.
Hyper-V slows down just like ESXi, but it hurts differently, is managed differently, and a lot of articles and notes were lost along with the cleanup of the old TechNet. The documentation alone is not enough for a more detailed comparison and description; you have to deploy a test bench. A hands-on-lab analogue exists only for Azure – Azure CLX.
To understand whether Hyper-V is better or worse than ESXi in your conditions, you need a full pilot test with your own loads.

Bonuses.
Found by accident: the archives of the Virtualization Team Blog.
Interview with 'Mr Hyper V' Ben Armstrong
Ignite: In Chicago and online: November 18–22, 2024
