When trying to hot migrate a VS on a cloud-boot compute resource, the migration fails with the error "error: internal error Attempt to migrate guest to the same host"
CloudBoot KVM compute resources
Before the migration starts, virsh checks the UUID of the system by executing virsh sysinfo on the source compute resource. It then executes the same command on the host that the VS will be migrated to. This prevents migrating a guest to the same server in a cluster of compute resources.
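Conceptually, the pre-migration check boils down to comparing the two reported system UUIDs; a minimal sketch (the UUID value is illustrative, and in reality each side reads something equivalent to dmidecode -s system-uuid):

```shell
# Sketch of the check libvirt performs before migrating: compare the SMBIOS
# UUID reported by the source host with the one reported by the destination.
# Both values here are illustrative placeholders.
src_uuid="4C4C4544-0042-3510-8054-B5C04F333358"
dst_uuid="4C4C4544-0042-3510-8054-B5C04F333358"

if [ "$src_uuid" = "$dst_uuid" ]; then
    # This is the condition that produces the error from this article.
    echo "error: internal error Attempt to migrate guest to the same host"
fi
```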
1. Check the system UUID that each compute resource reports (virsh sysinfo reads this value from dmidecode):
[email@example.com ~]# dmidecode -s system-uuid
[firstname.lastname@example.org ~]# dmidecode -s system-uuid
[email@example.com ~]# dmidecode -s system-uuid
[firstname.lastname@example.org ~]# dmidecode -s system-uuid
2. As we can see, they all return the same UUID. In this case, the hardware was a Dell all-in-one enclosure containing four servers.
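A quick way to confirm this situation is to collect the UUIDs from all compute resources and look for duplicates. A sketch with the collected values inlined (in practice you would gather them over ssh; the values are illustrative):

```shell
# UUIDs collected from the four blades. `sort | uniq -d` prints any value
# that appears more than once, i.e. a duplicated system UUID.
uuids="4C4C4544-0042-3510-8054-B5C04F333358
4C4C4544-0042-3510-8054-B5C04F333358
4C4C4544-0042-3510-8054-B5C04F333358
4C4C4544-0042-3510-8054-B5C04F333358"

dupes=$(printf '%s\n' "$uuids" | sort | uniq -d)
if [ -n "$dupes" ]; then
    echo "Duplicate system UUID detected: $dupes"
fi
```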
In a non-CloudBoot environment, we could simply specify the UUID in /etc/libvirt/libvirtd.conf, restart libvirtd, and it would work. Another option is to apply the wrapper only (see how to download the wrapper in step 8). However, as this is a CloudBoot environment, it gets more complicated: virsh looks for dmidecode in /usr/sbin/dmidecode, but /usr in our CloudBoot images is a cramfs filesystem and is mounted read-only.
The solution is to recreate the initrd with the wrapper and then specify the UUID in /etc/libvirt/libvirtd.conf.
1. Go to the images directory:
cd /tftpboot/images/centos6/ramdisk-kvm/
2. Backup the initrd:
cp initrd.img initrd.img.backup
3. Create the directory where we will unpack the original initrd:
mkdir -v /tftpboot/images/centos6/ramdisk-kvm/initrd.unpacked && cd /tftpboot/images/centos6/ramdisk-kvm/initrd.unpacked
4. Decompress the initrd:
zcat ../initrd.img | cpio -id
5. Mount the usr directory (it will be read-only):
mount -o loop usr.cramfs usr
6. Copy the contents of usr to another location so we can edit it and apply the changes (this assumes you are in the /tftpboot/images/centos6/ramdisk-kvm/initrd.unpacked directory):
rsync -av usr/ /tmp/cramfs-usr/
7. Copy the real dmidecode binary aside:
cp usr/sbin/dmidecode sbin/dmidecode.orig
8. Now download the wrapper, copy it to the temporary directory, and make it executable:
cp dmidecode /tmp/cramfs-usr/sbin/dmidecode    # "dmidecode" here is the downloaded wrapper
chmod +x /tmp/cramfs-usr/sbin/dmidecode
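If the wrapper itself is unavailable, its logic is roughly the following, sketched here as a shell function so it can be exercised without the real binary. The fallback path and the override file /etc/host-uuid are assumptions for illustration, not necessarily the actual wrapper's choices:

```shell
# Hypothetical sketch of the wrapper's behavior: for "-s system-uuid", return
# a per-host override if one is configured; otherwise fall through to the
# real dmidecode binary that was kept aside in step 7 (path is an assumption).
fake_dmidecode() {
    override_file="${OVERRIDE:-/etc/host-uuid}"   # assumed override location
    if [ "$1" = "-s" ] && [ "$2" = "system-uuid" ] && [ -r "$override_file" ]; then
        cat "$override_file"
        return 0
    fi
    /usr/sbin/dmidecode.orig "$@"   # fall back to the real binary
}
```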
9. Again, from within the /tftpboot/images/centos6/ramdisk-kvm/initrd.unpacked directory, unmount the usr.cramfs image and delete it:
umount usr
rm -f usr.cramfs
10. Create usr.cramfs with the wrapper applied; this creates a read-only cramfs image:
mkfs.cramfs /tmp/cramfs-usr/ usr.cramfs
11. Recreate the initrd and clean the dirs created:
find . | cpio -H newc --quiet -o | gzip -9 > ../initrd.img.new && cd .. && mv -fv initrd.img.new initrd.img
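To convince yourself that the pack/unpack pair used in steps 4 and 11 is lossless, here is a self-contained round trip in a scratch directory (nothing under /tftpboot is touched; the file name and contents are arbitrary):

```shell
# Pack a scratch tree with the same cpio/gzip invocation as above, unpack it
# with zcat | cpio -id, and confirm the file survives intact.
work=$(mktemp -d)
mkdir "$work/src" "$work/dst"
echo "hello" > "$work/src/file.txt"

# Pack (same form as step 11):
( cd "$work/src" && find . | cpio -H newc --quiet -o | gzip -9 > "$work/initrd.img" )

# Unpack (same form as step 4):
( cd "$work/dst" && zcat "$work/initrd.img" | cpio -id --quiet )

cat "$work/dst/file.txt"
```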
12. Now clean up behind you :)
rm -rf /tmp/cramfs-usr/
13. Add the following custom config to each compute resource. Generate a different UUID for every host with uuidgen (the value below is only an example):
##uuid for libvirtd.conf##
echo 'host_uuid = "BA7CC229-D375-410F-88E9-0A918DAA7120" ' >> /etc/libvirt/libvirtd.conf
service libvirtd restart
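The per-host part of step 13 can be scripted; a sketch that generates a fresh UUID and formats the config line (appending to /etc/libvirt/libvirtd.conf and restarting libvirtd are left out so it can be run anywhere):

```shell
# Generate a host-unique UUID with uuidgen and build the libvirtd.conf line.
# On a real compute resource you would append $line to
# /etc/libvirt/libvirtd.conf and restart libvirtd, as shown above.
uuid=$(uuidgen)
line="host_uuid = \"$uuid\""
echo "$line"
```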
14. Reboot the compute resources to load the new initrd and the custom config.
15. Check that the new UUID is being used by libvirtd.
As you can see below, the UUID in libvirtd.conf matches the one returned by virsh sysinfo:
[email@example.com ~]# tail /etc/libvirt/libvirtd.conf
# sending any keepalive messages.
#keepalive_interval = 5
#keepalive_count = 5
# If set to 1, libvirtd will refuse to talk to clients that do not
# support keepalive protocol. Defaults to 0.
#keepalive_required = 1
host_uuid = "BA7CC229-D375-410F-88E9-0A918DAA7120"
[firstname.lastname@example.org ~]# virsh sysinfo | grep uuid