Forwarding a video card to a virtual machine

1. Introduction

Running two different systems (Windows + Linux) on the same hardware is nothing new or innovative at this point, but if maximum guest performance is required, passing real devices through to the virtual machine is indispensable. Forwarding network cards, USB controllers and the like brings nothing extraordinary, but an attempt to "share" the resources of the video card and CPU can cause a number of problems.

So why, strictly speaking, build such a setup with full use of GPU and CPU resources? The simplest and most obvious answer is games (it is widely known that many, if not most, are written for Windows). Other options are a full-fledged workstation capable of running demanding applications (for example, CAD software), quick backups (copying a VM file is much easier than imaging the entire HDD/SSD), and full control over the guest system's network traffic.

2. Hardware

CPU: Intel(R) Core(TM) i5-9400F CPU @ 2.90GHz

Motherboard: ASRock Z390 Phantom Gaming 4S

Video card 0 (for forwarding to VM): Lexa PRO [Radeon 540/540X/550/550X / RX 540X/550/550X]

Video card 1 (for host system): Park [Mobility Radeon HD 5430]

USB controller (for forwarding to the VM and subsequent connection of peripheral devices, such as keyboards): VIA Technologies, Inc. VL805 USB 3.0 Host Controller

3. OS settings

AlmaLinux 8 OS was chosen as the host system (installation option “Server with GUI”). I used CentOS 7/8 for a long time, so I think the choice is obvious here.

The first thing to do is to prevent the host system from using the video card intended for the VM. To do this, we use a number of commands and settings (the whole sequence is also condensed into a shell sketch after the list):
1) using the command "lspci -nn | grep RX", get the unique identifiers of the video card. Since it is an RX-series card, we grep the lspci output (the utility is installed with "dnf install pciutils") for those two characters. The output looks something like this (the bracketed substrings are the desired device identifiers):

"02:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Lexa PRO [Radeon 540/540X/550/550X / RX 540X/550/550X] [1002:699f] (rev c7)

02:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Baffin HDMI/DP Audio [Radeon RX 550 640SP / RX 560/560X] [1002:aae0]"

where 1002:699f is the ID of the VGA controller and 1002:aae0 is the ID of the built-in audio device. We also note the bus addresses "02:00.0" and "02:00.1";

2) adding the "k" flag to the "lspci -nn" command ("lspci -nnk"), find the device "1002:699f" and note the "Kernel driver in use" value. In my case it is "amdgpu";

3) in the file "/etc/default/grub", find the line starting with "GRUB_CMDLINE_LINUX" and add after "quiet" the values "intel_iommu=on iommu=on rd.driver.pre=pci-stub pci-stub.ids=1002:699f,1002:aae0", where "intel_iommu / iommu" are the parameters that enable IOMMU support (the technology that lets virtual machines work with real hardware), "rd.driver.pre=pci-stub" forces the dummy pci-stub driver to be loaded first, and "pci-stub.ids" lists the devices that the dummy driver must claim at kernel boot (i.e. the devices isolated for later use in virtual machines); the IDs must match your own lspci output, here 1002:699f and 1002:aae0. If the host machine uses an AMD CPU, change "intel_iommu" to "amd_iommu";

4) to the file "/etc/modprobe.d/local.conf", add the lines "blacklist amdgpu" and "options pci-stub ids=1002:699f,1002:aae0", where "blacklist amdgpu" explicitly disables the AMD graphics driver, and "options pci-stub ids=1002:699f,1002:aae0" explicitly assigns the dummy driver to the corresponding device identifiers;

5) execute the command "grub2-mkconfig -o /boot/efi/EFI/almalinux/grub.cfg" (i.e. recreate the GRUB bootloader configuration file; on CentOS the EFI directory is named "centos" instead of "almalinux"). If this is not an EFI boot, the command looks like this: "grub2-mkconfig -o /boot/grub2/grub.cfg";

6) execute the command "dracut --regenerate-all --force" to recreate the initramfs image (the initial RAM disk, a file system image loaded into RAM and used during Linux boot as the initial root file system);

7) reboot the virtualization host.
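
For convenience, here is the whole sequence (steps 1-7) condensed into a shell sketch, run as root. The device IDs 1002:699f,1002:aae0 and the "quiet" anchor for sed are taken from this particular setup and are assumptions for any other machine, so substitute your own values:

# 1-2) find the card, its IDs and the driver currently bound to it
lspci -nnk | grep -A 3 VGA

# 3) append the IOMMU / pci-stub parameters after "quiet" in GRUB_CMDLINE_LINUX
sed -i 's/quiet/quiet intel_iommu=on iommu=on rd.driver.pre=pci-stub pci-stub.ids=1002:699f,1002:aae0/' /etc/default/grub

# 4) blacklist the guest GPU driver and bind the dummy driver to its IDs
cat >> /etc/modprobe.d/local.conf << 'EOF'
blacklist amdgpu
options pci-stub ids=1002:699f,1002:aae0
EOF

# 5) regenerate the GRUB config (BIOS boot: -o /boot/grub2/grub.cfg)
grub2-mkconfig -o /boot/efi/EFI/almalinux/grub.cfg

# 6-7) rebuild the initramfs and reboot
dracut --regenerate-all --force
reboot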

The purpose of these settings is to keep the host from using the specified devices at boot. For example, before setting the parameters, the output of "lspci -v" for the VGA controller contains the substring "Kernel driver in use: amdgpu", and after the reboot it contains "Kernel driver in use: pci-stub". Once the Windows VM is started (with the devices forwarded), it becomes "Kernel driver in use: vfio-pci". An important point: the video card used for the host system must use a driver different from the one used by the forwarded card; in my case a "Radeon HD 5430" is used, whose driver is "radeon" (in the "lspci -v" output: "Kernel driver in use: radeon").
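
After the reboot (and before the VM exists), the new binding can be checked per function by the bus addresses noted earlier (02:00.0 and 02:00.1 here; yours may differ):

lspci -nnk -s 02:00.0   # expect "Kernel driver in use: pci-stub"
lspci -nnk -s 02:00.1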

4. Installing software for virtualization

1) "dnf install epel-release".

2) "dnf install qemu-kvm qemu-img libvirt virt-install libvirt-client virt-viewer virt-manager seabios numactl perf cockpit cockpit-machines xauth virt-top libguestfs-tools".

3) "dnf install @virt".

4) optional. "dnf install perl" (Perl – one love).
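
Once the packages are installed, enable the libvirt daemon and let libvirt itself check that the host is ready for virtualization (IOMMU, /dev/kvm, cgroups); both commands ship with the packages above:

systemctl enable --now libvirtd
virt-host-validate qemu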

5. QEMU-KVM VM settings via virt-manager

First, download the iso image of Windows 10 and the Virtio drivers from Red Hat (also in the form of an iso image).
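
The Virtio driver ISO is published by the Fedora project; the direct link below was current at the time of writing (an assumption worth re-checking if the download fails):

wget https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/latest-virtio/virtio-win.iso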

During the initial installation, always check the box "Customize configuration before install".

1) Specify the iso image of the operating system to be installed (for example, Windows 10). We also add an additional device of the "CD-ROM" type and mount the iso image with the Virtio drivers into it.

2) For the virtual HDD (where the OS is to be installed), set "Bus type = Virtio". Virtual disk format – qcow2 or raw.

3) For more efficient operation, we place the main virtual disk of the VM on an SSD.

4) Network card model – virtio.

5) Overview: chipset = “Q35”, firmware = “UEFI x86_64: /usr/share/OVMF/OVMF_CODE.secboot.fd“.

6) OS Information: Operating System = "Microsoft Windows 10".

7) CPU (with a similar hardware configuration, the corresponding blocks in the VM's XML should look like this):


<cpu mode="host-passthrough" check="none">
  <topology sockets="1" dies="1" cores="4" threads="1"/>
  <cache mode="passthrough"/>
  <feature policy="disable" name="hypervisor"/>
</cpu>

<features>
  <acpi/>
  <apic/>
  <hyperv>
    <relaxed state="off"/>
    <vapic state="off"/>
    <spinlocks state="off"/>
    <vendor_id state="on" value="1234567890"/>
  </hyperv>
  <kvm>
    <hidden state="on"/>
  </kvm>
  <smm state="on"/>
</features>
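
These blocks can be pasted in on the XML tab of virt-manager (XML editing must first be enabled in its preferences) or edited from the shell; "win10" below is a hypothetical VM name:

virsh edit win10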

8) Remove from the VM configuration: “Tablet”, “Display VNC”, “Channel qemu-ga”, “Video VGA”.

9) Add the desired devices (the VGA controller, the audio controller built into the video card and the separate USB controller) through "Add Hardware → PCI Host Device", going by the bus addresses noted earlier, e.g. "02:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Lexa PRO [Radeon 540/540X/550/550X / RX 540X/550/550X] (rev c7)" in the lspci output; the equivalent XML is sketched below.
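
For reference, the hostdev entry that virt-manager generates for the VGA function at 02:00.0 should look roughly like this (a sketch based on the bus addresses above; the audio function is the same except for function="0x1"):

<hostdev mode="subsystem" type="pci" managed="yes">
  <source>
    <address domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
  </source>
</hostdev>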

10) We connect the monitor to the forwarded video card, and the mouse and keyboard to the forwarded USB controller.

11) We start the installation process ("Start installation"). When the installer does not see the virtual HDD, we point it to the mounted Virtio image as the driver source.

12) After installation, we open Device Manager and, for the unknown devices, specify the Virtio disc as the driver source. We also install the video card drivers.

If everything is done correctly, the Windows Task Manager will show a real video card and 4 CPU cores with shared processor resources (L1 + L2 + L3 cache).
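
The same can be confirmed from the host side while the VM is running; the forwarded functions should now be claimed by vfio-pci (the bus address is again from this setup):

virsh list --state-running
lspci -nnk -s 02:00.0   # expect "Kernel driver in use: vfio-pci"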
