Minimal-downtime migration of KVM virtual machines between separate Proxmox VE clusters

Some time ago I needed to migrate KVM virtual machines from one Proxmox VE cluster to another with minimal downtime. PVE offers no such capability out of the box, but as it turned out, online migration of virtual machines between clusters can be performed with KVM's own tools. This guide describes the transfer procedure in detail.

Important Notes

  1. The procedure was tested on Proxmox VE 6.x

  2. The servers between which the migration is performed must be configured for passwordless (key-based) SSH login
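
    For example, key-based authentication can be set up roughly like this (a sketch assuming root SSH logins are allowed between the nodes; 192.168.0.3 is the destination server from the conventions below):

    # on pve-01; repeat in the opposite direction on pve-02
    ssh-keygen -t rsa        # skip if a key pair already exists
    ssh-copy-id root@192.168.0.3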

Conventions

  • pve-01 – the server from which we will migrate

  • pve-02 – the server to which we will migrate

  • 100 – ID of the virtual machine on the source server

  • 120 – ID of the virtual machine after migration

  • pc-i440fx-2.11 – the machine (chipset) type emulated for the virtual machine; yours may differ, and step 2 below shows how to determine it

  • 192.168.0.3 – IP address of the server to which we will migrate the virtual machine

Procedure

  1. SSH into both servers

  2. On server pve-01, determine which chipset is emulated for our virtual machine by querying its QMP socket ($SRCID is set in step 3; substitute the VM ID directly if the variables are not set yet). In our case it will be pc-i440fx-2.11

    cat << EOF | socat STDIO UNIX:/run/qemu-server/$SRCID.qmp | grep --color current
    { "execute": "qmp_capabilities" }
    { "execute": "query-commands" }
    { "execute": "query-machines" }
    EOF
  3. For convenience, set environment variables on both servers

    SRCID=100
    DSTID=120
    CHIPSET=pc-i440fx-2.11
    DSTIP=192.168.0.3
    DSTPORT=60000
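
    Optionally, check on pve-02 that the chosen port is free (ss ships with iproute2; empty output means nothing is listening on it):

    ss -tln | grep ":$DSTPORT"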
  4. On server pve-01, get the virtual machine's launch command

    ps ax | grep "/usr/bin/kvm -id $SRCID"
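
    Alternatively, qm showcmd prints the command PVE would use to start the VM, which is easier to read than the process list; note that it reflects the current configuration, not necessarily the command of the already-running process:

    qm showcmd $SRCID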
  5. Copy the virtual machine configuration file from pve-01 to pve-02. After this step, the PVE web interface will show a virtual machine with ID $DSTID on pve-02

    scp /etc/pve/local/qemu-server/$SRCID.conf $DSTIP:/etc/pve/local/qemu-server/$DSTID.conf
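
    To verify that the configuration arrived, it can be queried remotely (qm config is a standard PVE command; $DSTID expands locally, which is why we set the variables on both servers):

    ssh $DSTIP "qm config $DSTID"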
  6. In the PVE web interface of server pve-02, remove all hard disks from the configuration of virtual machine $DSTID and re-add the same number of hard disks of the same sizes.
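
    If you prefer the CLI, roughly the same can be done on pve-02 as follows (a sketch assuming a single scsi0 disk and a target storage named local-lvm; adjust the bus/slot names, storage, and sizes to match your VM):

    qm set $DSTID --delete scsi0        # drop the disk reference copied from pve-01
    qm set $DSTID --scsi0 local-lvm:32  # allocate a fresh 32 GiB disk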

  7. In the console of server pve-02, start virtual machine $DSTID so that it waits for the incoming migration. To do this, modify the launch command obtained in step 4:

    1. Replace $SRCID with $DSTID everywhere it occurs

    2. Remove ,x509 from the command if present

    3. Make sure the command contains -machine type=$CHIPSET with the value obtained in step 2

    4. Add -incoming tcp:$DSTIP:$DSTPORT -S (the -S flag keeps the CPU paused at startup)

    /usr/bin/kvm -id $DSTID <remaining parameters> -incoming tcp:$DSTIP:$DSTPORT -S
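
    At this point the destination VM sits idle waiting for the migration stream; this can be verified from qm monitor on pve-02, where the status should read something like paused (inmigrate):

    qm monitor $DSTID
    qm> info status

    VM status: paused (inmigrate)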
  8. On server pve-01, start the migration from qm monitor (the -b flag tells QEMU to copy the block devices, i.e. the disks, along with the RAM)

    qm monitor $SRCID
    # Optionally, the data transfer speed can be limited
    qm> migrate_set_speed 100M
    qm> migrate -b tcp:$DSTIP:$DSTPORT

    # Progress can be monitored with
    qm> info migrate
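
    Optionally, the maximum pause allowed at switchover can also be tuned; migrate_set_downtime is a standard QEMU monitor command that takes a value in seconds:

    qm> migrate_set_downtime 0.5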
  9. Run qm monitor on server pve-02 to track progress. When data copying is complete, the virtual machine will enter the state VM status: paused (postmigrate)

    qm monitor $DSTID
    qm> info status
    
    VM status: paused (postmigrate)
  10. In qm monitor on server pve-02, start the migrated virtual machine (c is short for cont, i.e. continue execution)

    qm> c
  11. On server pve-01, stop the original virtual machine

    qm stop $SRCID
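
    Before destroying the original, a quick sanity check of the migrated VM on pve-02 does not hurt; qm status is a standard PVE command and should report the VM as running:

    qm status $DSTID

    status: running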
  12. Check that everything works as expected after the migration, then delete the original virtual machine

    qm destroy $SRCID

I hope this guide saves time and nerves for engineers facing a similar task.
