Now that we at TERENA have a new and shiny setup of VMware VI3, I had to migrate several of our Debian 3.1 Sarge servers. Some of them had custom kernels, because of specific hardware.
This virtualisation process (P2V, or Physical to Virtual) is properly supported for the Windows platform using the VMware Converter software. This works very nicely, and supports hot cloning. However, when your Physical Machine (PM) is running Linux, hot cloning is not possible. The way to go is to use the VMware Converter Boot CD. This requires rebooting the PM from a bootable CDROM, and from the PE boot environment on that CDROM the dead corpse is cloned to a VM.
The downside is of course that the machine has to be brought down for a substantial amount of time. Also, if your PM uses specific I/O controllers and/or network cards, the boot CDROM will need to be customized to hold the right drivers. This has to be tested too, so you might even need several rounds of downtime.
By doing things manually, you can avoid almost all of the downtime. I P2V-ed 3 systems, all over 100 GB, each with less than 20 minutes of downtime.
Also, because you ‘warm’ clone a live system, you don’t need to worry about disk and network drivers. Another benefit compared to cold cloning is that you can test things first on a dummy VM without any downtime at all.
This article documents all the steps needed. It assumes your old PM is running Debian Sarge with one of the 2.6.8 kernels, uses GRUB as bootloader, has rsync installed, and can be reached by the root user via SSH.
The procedure basically comes down to cloning a live system to a dead VM, stopping all services, doing a final synchronisation, and reviving the dead VM.
Step-by-step guide:
- Create a VM with at least as much disk space as the PM.
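To see how much disk space the PM actually needs, you can first check its disk layout and usage over SSH:
ssh root@IP_of_PM 'fdisk -l; df -h'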
- Configure the VM to boot from an ISO image of Ubuntu 6.06 LTS Desktop Edition.
- Open up a shell, su to root, and partition the disk. If you stick to exactly the same partition scheme, you don’t have to change the fstab file. Changing the partition sizes is no problem either. If you decide to change the partition scheme, be sure not to split directories that contain hard links. For example, if your PM has just one big /-partition, and you decide that the new VM will have separate / and /usr partitions, this will not work, because hard links cannot be created across partitions.
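If you are keeping the exact same scheme, one possible shortcut (a sketch, assuming both disks show up as /dev/sda and the VM disk is at least as large as the PM’s) is to copy the PM’s partition table directly:
# dump the PM's partition table and replay it on the VM's empty disk
ssh root@IP_of_PM 'sfdisk -d /dev/sda' | sfdisk /dev/sda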
- Once all partitions are created, make filesystems on them (don’t forget swap), and mount them in the correct order under a temporary directory, let’s say /root/oldbox. Create root’s dir /root/oldbox/root, and in there create a file /root/oldbox/root/excluded that contains:
/proc/*
/sys/*
/dev/*
/mnt/*
/tmp/*
/root/excluded
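As a sketch, the filesystem creation and mounting described above might look like this (/dev/sda1 as root and /dev/sda2 as swap are illustrative; adjust to your own layout):
mkfs.ext3 /dev/sda1            # filesystem for /
mkswap /dev/sda2               # initialise the swap partition
mkdir -p /root/oldbox
mount /dev/sda1 /root/oldbox   # mount the new root filesystem
mkdir /root/oldbox/root        # root's homedir, which holds the exclude file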
If you changed the partition scheme, you should add /etc/fstab to this exclude list too, and manually put a corrected one in place.
- cd into /root/oldbox and rsync everything from the PM into it:
rsync -avH --numeric-ids --delete \
--exclude-from=root/excluded IP_of_PM:/ .
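If the two machines are connected over a slow link, rsync’s -z flag compresses the transfer; the command is otherwise identical:
rsync -avzH --numeric-ids --delete \
--exclude-from=root/excluded IP_of_PM:/ .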
This will take a while, depending on the amount of data your PM has.
- Once everything is copied over, the time has come to shut down all data-writing services on your PM (mail, databases, etc). Ideally only the SSH daemon should keep running. This means that most of your services will be offline from here on. The good thing is that this period can be kept really short.
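On a typical Sarge box, stopping the services might look something like this (the service names are just examples; check what is actually running on your PM):
/etc/init.d/apache stop
/etc/init.d/mysql stop
/etc/init.d/exim4 stop
netstat -tlnp   # verify that only sshd is left listening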
- Once you’ve made sure that nothing runs except SSH on your PM, rerun the rsync command. This time it will be quick, as only the diffs need to be transferred. These usually involve open files from databases, logfiles, etc.
- Now create the initial device nodes needed for the kernel:
cd /root/oldbox
mknod -m 660 /root/oldbox/dev/console c 5 1   # character device 5,1: the console
mknod -m 660 /root/oldbox/dev/null c 1 3      # character device 1,3: /dev/null
- Mount the proc and dev filesystems and chroot into the /root/oldbox dir:
mount -t proc none /root/oldbox/proc
mount -o bind /dev /root/oldbox/dev
chroot /root/oldbox
- If we were to reboot now, the old initrd image would not load the proper modules for the new virtual hardware (unless your PM happened to have an LSI controller already). We need to add the drivers for VMware’s virtual LSI SCSI controller. To do this, add these lines to the file /etc/mkinitrd/modules (assuming your PM runs one of the 2.6.8 Debian kernels):
mptscsih
mptbase
Then regenerate the initrd image (adjust the name to your specific kernel version):
mkinitrd -o /boot/initrd.img-2.6.8-4-686-smp 2.6.8-4-686-smp
(Since my PM had an older, custom kernel (2.6.8-3-686-smp), I installed a newer one, plus udev. During installation a new initrd image is generated automatically:
apt-get install udev kernel-image-2.6.8-4-686-smp)
- Now we need to regenerate the bootblock. Run the grub command to enter the GRUB shell and see where it finds the stage1 file:
find /boot/grub/stage1
It should come up with something like (hd0,1). Use this as the argument for the next command:
root (hd0,1)
Then use only the disk part (without the partition number) for the next command:
setup (hd0)
Then issue quit to leave the grub shell.
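If you prefer, the same GRUB session can be scripted (a sketch; replace (hd0,1) with whatever find reported on your system):
grub --batch <<EOF
root (hd0,1)
setup (hd0)
quit
EOF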
By now your system is ready to boot. Leave the chroot environment (exit), unmount the dev and proc filesystems, then unmount all filesystems under /root/oldbox, issue sync, and then halt.
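In commands, that final sequence might look like this (unmount any nested mounts, such as a separate /usr, before the root one):
exit                     # leave the chroot
umount /root/oldbox/dev
umount /root/oldbox/proc
umount /root/oldbox
sync
halt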
To avoid a network address clash, unplug the network cable of the PM, or shut it down.
Now you can power on your VM; it should boot a virtualized copy of your Debian system 🙂
TODO – Some things to do afterwards (vmtools, etc)
Hello, thank you for this tutorial. I have to do a similar P2V, but my physical machine is on software RAID 1. What tweaks will I need to do?
Thanks
Paolo
If you don’t want to use software RAID 1 anymore in the VM (which would seem logical), just create plain partitions on the destination VM, and make sure to change /etc/fstab accordingly (before installing the bootloader and/or kernels).
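For example (a sketch, with illustrative device names), if the PM mounts / from /dev/md0, the VM’s copy of fstab can be pointed at the plain partition instead, while still booted from the live CD:
# adjust the cloned fstab to use the plain partition instead of the RAID device
sed -i 's#/dev/md0#/dev/sda1#g' /root/oldbox/etc/fstab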
I’ve followed this guide twice, for two different machines (one Red Hat 7.3, one Red Hat 9). Both times, after fixing the kernel and GRUB and trying to boot the VM, I get a message that the superblock could not be read or does not describe a correct ext2 filesystem. If anyone knows about this and can add some info to this guide, that would be much appreciated!
Thanks a bunch for this manual.
However, you should cd into /root/oldbox instead of /root/oldbox/root and rsync everything from the PM, because otherwise your PM ends up in the directory of your root user.
When P2V’ing a 2.4 kernel Sarge system, please issue an apt-get update and apt-get install udev kernel-image-2.6-686. This will fix a lot of problems, like the almost empty /dev causing the /dev/sda* mounts to bork because of missing devices in /dev. So while still in the chroot, upgrade that kernel to a 2.6 with udev and you’re good to go.
Thanks again!
Excellent guide.
I skipped a couple of the first steps and used a bare-metal restore in the VM (mkCDrec, an excellent tool), and used a Knoppix CD rather than Ubuntu.
The target was a VM under Parallels rather than VMware, so all in all I was rather surprised it all worked.
Many thanks for your help.
Hello,
Try following http://virtualaleph.blogspot.com/2007/05/virtualize-linux-server-with-vmware.html .
A lot of people besides me have had success cloning Linux with that procedure.
I would like to exchange links between our procedures: I’ll link to yours if you like it!
Regards
\aleph0
http://virtualaleph.blogspot.com
In my case grub was not updated properly to reflect the new architecture. This bit me when the kernel on Debian was updated with a security update and the boot reverted to the old hardware type.
Additional things I had to do to get the clone to boot included:
* Changing #kdev in the menu.lst file to reflect the correct disk type.
* Recheck the ‘device map’: /usr/sbin/grub-install --recheck /dev/sda (or whatever)
* Run /usr/sbin/update-grub to check that device.map and menu.lst are properly regenerated
I know I’m a little late to the party here, but I’ve used this guide to successfully convert Debian 3.0 (sarge) and Red Hat 8.0 to an ESXi VM. I have a few notes:
* If you are not installing a 2.6 kernel with udev, you NEED to copy /dev too (i.e. do not exclude it from the rsync). I had to do this for 2.4 kernels.
* If you have a server you can’t upgrade (probably with a 2.4 kernel) that mounts its disks as hda instead of sda, then when you chroot into it you need to edit /etc/mtab to match the existing mounts, or you won’t be able to update your initrd or install GRUB (see the sketch after these notes).
* Some old distros cannot support the new features of e2fsprogs (this might be your problem, @John Oliver). I experienced this with Red Hat 8: I was doing everything from the newest sysresccd, and when Red Hat went to mount the filesystems it would say they had “unsupported features”. I had to actually use an old Red Hat 8.0 install CD in rescue mode to format the partitions, and do the rest from the sysresccd.
* For Red Hat servers, you may need to change your modules similarly to Debian; see this article: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1002402
* If you are copying from an old system with software RAID, the system will freak out when you boot, complaining that it can’t find the RAID devices. I experienced this on my Red Hat 8.0 machine. mdadm was not installed as a service (maybe a custom kernel or something), so I finally figured out that the old RAID information was stored in /etc/raidtab. Removed that and all was well.
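Regarding the hda-to-sda note above, a sketch of the /etc/mtab fix (run inside the chroot; very old systems may lack sed -i, so write to a copy first):
# make the mount table match the new device names so mkinitrd and grub-install work
cp /etc/mtab /etc/mtab.orig
sed 's/hda/sda/g' /etc/mtab.orig > /etc/mtab
# the same substitution applies to /etc/fstab if it still references hda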
Thanks for this guide and I hope my notes can help someone, even if it’s 4 years after the guide was published!