Sun ZFS Storage Appliance Simulator on OVM or KVM
For those familiar with the excellent ZFS file system and the Sun (now Oracle) storage products built on ZFS, the ZFS storage appliance interface is very easy to use, and it is definitely worth considering when shopping for a SAN.
Oracle has a simulator virtual machine for trying out the interface. Unfortunately it only runs on VirtualBox, which is fine for those running VirtualBox on a desktop. If you would like to run it on something more accessible to multiple users (KVM or OVM), the Solaris-based image has some issues running.
I recently got the VirtualBox image to run on OVM and subsequently also got it to work on KVM. This is a quick guide on how to get the VirtualBox image running as a qed image on a KVM hypervisor.
Update: Changed to qed format. If you don't have qed, qcow2 worked for me also.
As I understand it, there was also a VMware image, but it disappeared from the Oracle website. I am not sure why Oracle does not publish at least OVM images or make an effort to get the simulator running on OVM. Maybe there is a good reason; it's possible that Oracle wants to discourage running it on anything other than VirtualBox. Really not sure.
Stage the Image:
Download the simulator (link should be on this page somewhere): http://www.oracle.com/us/products/servers-storage/storage/nas/zfs-appliance-software/overview/index.html
From the vbox-2011.1.0.0.1.1.8 folder, copy the Sun ZFS Storage 7000-disk1.vmdk file to the KVM host and convert it:
** Note: on my first attempt I used the qcow format rather than qcow2 and had issues starting the image, so make sure you convert to qcow2 (or qed), not qcow.
# qemu-img convert "Sun ZFS Storage 7000-disk1.vmdk" -O qed SunZFSStorage7000-d1.qed
# qemu-img info SunZFSStorage7000-d1.qed
image: SunZFSStorage7000-d1.qed
file format: qed
virtual size: 20G (21474836480 bytes)
disk size: 1.9G
cluster_size: 65536
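If your qemu-img build does not support qed, the qcow2 conversion mentioned in the update above is the same command with a different output format:

# qemu-img convert "Sun ZFS Storage 7000-disk1.vmdk" -O qcow2 SunZFSStorage7000-d1.qcow2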
Create Guest:
Create the KVM guest. Use an IDE disk for SunZFSStorage7000-d1.qed and specify the qed format.
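If you use virt-manager this is straightforward; from the command line, something like this virt-install invocation should do it (memory size, paths, and the ZfsApp name are my own choices):

# virt-install --name ZfsApp --ram 2048 --vcpus 1 \
    --disk path=/var/lib/libvirt/images/SunZFSStorage7000-d1.qed,format=qed,bus=ide \
    --cdrom /var/lib/libvirt/images/sol-11_1-text-x86.iso \
    --graphics vnc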
# virsh dumpxml ZfsApp
...
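The part of the XML that matters here is the disk stanza. Roughly, it should look like this (the image path is whatever you used; this is a sketch, not my exact dump):

<disk type='file' device='disk'>
  <driver name='qemu' type='qed'/>
  <source file='/var/lib/libvirt/images/SunZFSStorage7000-d1.qed'/>
  <target dev='hda' bus='ide'/>
</disk>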
Boot the new virtual machine from sol-11_1-text-x86.iso (the Solaris 11.1 text installer). Choose a language etc., and select shell when the menu appears.
Update ZFS Image:
Now import and mount the ZFS file system. Find the correct device name and update bootenv.rc:
In my case the device name for the boot disk is c7d0. I use format to see the disk device name and then find the correct slice for the root partition. You can use the "par" (partition) and "pr" (print) commands in format to see the partition table. In my case we are after /dev/dsk/c7d0s0, and we need to find the corresponding entry in the /devices tree.
# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c7d0
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
Specify disk (enter its number): ^C
# ls -l /dev/dsk/c7d0s0
lrwxrwxrwx 1 root root 50 May 26 10:35 /dev/dsk/c7d0s0 -> ../../devices/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0:a
From above I found the exact device name: /devices/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0:a
Let's go update bootenv.rc now.
# zpool import -f system
# zpool list
NAME     SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
system  19.9G  2.03G  17.8G  10%  1.00x  ONLINE  -
# zfs list | grep root
system/ak-nas-2011.04.24.1.0_1-1.8/root  1.26G  14.4G  1.25G  legacy
# mkdir /a
# mount -F zfs system/ak-nas-2011.04.24.1.0_1-1.8/root /a
# zfs set readonly=off system/ak-nas-2011.04.24.1.0_1-1.8/root
# cp /etc/path_to_inst /a/etc
# vi /a/boot/solaris/bootenv.rc
...
setprop boot /devices/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0:a
# tail -1 /a/boot/solaris/bootenv.rc
setprop boot /devices/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0:a
# bootadm update-archive -R /a
updating /a/platform/i86pc/boot_archive
updating /a/platform/i86pc/amd64/boot_archive
# cd /
root@solaris:~/# umount /a
root@solaris:~/# zpool export system
# init 0
On the next boot, let's make sure Solaris detects the hardware correctly. When you see GRUB, edit the kernel boot line and add "-arvs", then continue booting.
You probably only need "-r" (reconfiguration boot), but I used "-arvs" to also get interactive prompts (-a), verbose output (-v), and single-user mode (-s) in case I needed to do more.
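The edited kernel line ends up looking roughly like this (the exact kernel path and any existing arguments on the appliance image may differ; just append the flags):

kernel$ /platform/i86pc/kernel/$ISADIR/unix -arvs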
Once you are at the single-user-mode prompt, just reboot.
At this point the image booted into the ZFS appliance setup for me and I could configure it. Adding SATA disks on KVM also worked, and the ZFS appliance interface could use them for pools.
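As a sketch, creating and attaching a data disk goes along these lines (path, size, and target name are illustrative):

# qemu-img create -f qcow2 /var/lib/libvirt/images/zfsapp-data1.qcow2 50G
# virsh attach-disk ZfsApp /var/lib/libvirt/images/zfsapp-data1.qcow2 sdb \
    --targetbus sata --driver qemu --subdriver qcow2 --persistent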
Update 04.23.14
I recently tried to update a simulator running on KVM, but the update did not want to complete. I'm not sure if too many versions had elapsed or if there is something it does not like about running under KVM. Anyhow, I tried moving an up-to-date (ak-2013.06.05.1.1) image from VirtualBox to KVM, and that did work.