So not only have I managed to unofficially create a 'linux group' with the Wintel guy here at work (and for a unix shop, that's big news), but I'm now also (apparently) the Integrity VM dude - that is, the guy responsible for HP/UX's proprietary virtual machine solution. Something we've never used, let alone deployed, before. This came about rather abruptly when one of the projects discovered a flaw in their design and found themselves a host short. They've asked us to completely engineer a VM environment while maintaining our current project dates. Yeah, that second part - not gonna happen. Anyway, after an entire day of setbacks...
Historically, the filesystem on HP/UX which houses the kernel configuration and binary files was required to be HFS - HP's legacy High Performance FileSystem. That changed in 11iv3 (11.31), but by default the system is still created with an HFS /stand partition. Unfortunately, Integrity VM won't install on any box which has even a single HFS filesystem.
So this week I'm deploying an Integrity VM host and two guests on an IA64 Blade chassis.
Firstly, there are three ways to convert a filesystem from HFS to VxFS, with varying degrees of downtime, duration, and risk. Here's the FAST AND DANGEROUS way, which worked like a chiz-amp!
# grep -i hfs /etc/fstab                                # confirm /stand is the only HFS filesystem
# fbackup -f /opt/stand_fbackup.090915.egh -i /stand    # back up /stand while it's still mounted
# umount /stand
# fsck -F hfs /dev/vg00/lvol1                           # make sure the filesystem is clean first
# vxfsconvert -y /dev/vg00/lvol1                        # convert HFS to VxFS in place
# vi /etc/fstab                                         # change /stand's entry from hfs to vxfs
# fsck -F vxfs -y -o full /dev/vg00/lvol1
# mount -o rw,suid,delaylog -F vxfs /dev/vg00/lvol1 /stand
# fsadm -ed /stand                                      # reorganize extents and directories
# frecover -r -y -f /opt/stand_fbackup.090915.egh       # restore the backed-up contents
** IMPORTANT **
Don't forget to run the restore before you reboot, else you've lost your kernel - and there's no easy way to boot from media and mount up your volumes like there is in Solaris. Yeah, I had to re-Ignite.
Secondly, 11iv3 "unlocked" hyperthreading on the Itanium chips, but Integrity VM won't run with it enabled. Another kernel parameter change, another reboot. And for those of you not in the know - HP/UX servers do not reboot quickly.
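The post doesn't show the actual commands; on 11iv3 the logical-CPU (hyperthreading) setting is normally the lcpu_attr kernel tunable, and it can also be toggled at the firmware level with setboot. A sketch, assuming default tunable names:

```shell
# kctune lcpu_attr      # check whether logical CPUs (hyperthreading) are enabled
# kctune lcpu_attr=0    # disable hyperthreading; takes effect at the next boot
# setboot -m off        # alternatively, turn Hyper-Threading off in firmware
# shutdown -r -y 0      # reboot for the change to take effect
```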
Now for the nuts & bolts. We have one of those cutesy point-and-click SAS array SANs; I previously created two mirrored volumes - one for each of the two VMs (raw SAN devices can be presented to guests). First I create a virtual switch, start it (-b), and create the first VM:
# hpvmnet -c -S switch1 -n 2
# hpvmnet -b -S switch1
# hpvmnet -v
Version B.04.10.00
Name     Number State   Mode      NamePPA  MAC Address    IPv4 Address
======== ====== ======= ========= ======== ============== ===============
localnet 1      Up      Shared    N/A      N/A
switch1  2      Up      Shared    lan2                    [IP ADDRESS]
# hpvmcreate -P hostname -a disk:scsi::disk:/dev/rdisk/disk9 -a network:lan::vswitch:switch1 -B auto -O HPUX:11.31 -c 4 -r 10G
# hpvmstatus -v
Version B.04.10.00
Virtual Machine Name VM #  OS Type State     #VCPUs #Devs #Nets Memory  Runsysid
==================== ===== ======= ========= ====== ===== ===== ======= ========
hostname             1     HPUX    Off       4      1     0     10 GB   0
Now all we have to do is fire it up and gain access to it through the virtual management processor (hpvmconsole -P hostname) in order to tell it where to boot from to install an operating system. In this example, we're going to boot it from the same Ignite image from which we installed the host O/S. But first we need to create a dbprofile, followed by the boot command. Sadly, in the virtual console you can't turn off xchar :(
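Firing up the guest and attaching to its console looks like this ("hostname" being the VM name given to hpvmcreate above); the dbprofile and lanboot commands that follow are entered from within that console:

```shell
# hpvmstart -P hostname      # power on the virtual machine
# hpvmconsole -P hostname    # attach to its virtual management processor
```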
dbprofile -dn ignite -sip ignite_server_ip -m netmask -gip gateway -cip client_ip -b /opt/ignite/boot/nbp.efi
lanboot select -dn ignite
Let me try to explain just how freaking fast this performs compared to a standard Ignite. I was downright giddy - I've never seen anything install this quickly. My Itanium box at home? THREE HOURS from DVD. A standard Ignite? ONE HOUR. This took half that.
So it took me 12 hours to grasp the technology and deploy it, but I can now clone this one as many times as I need, anywhere, configuring CPU/memory on the fly (hpvmclone). I simply boot it, run a set_parms against it, and it's its own machine.
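The cloning step itself is short; a sketch, where hostname2 and the new CPU/memory values are purely illustrative:

```shell
# hpvmclone -P hostname -N hostname2    # clone the existing VM definition
# hpvmmodify -P hostname2 -c 2 -r 8G    # adjust vCPU count and memory for the clone
# hpvmstart -P hostname2                # boot it, then run set_parms to personalize it
```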
I don't need a GUI which only runs on Windows.