Category: ZFS

Dec 20

ZFSSA List Snapshots Script

Quick script to illustrate interacting with the ZFS Storage Appliance. In this example I am listing ZFSSA snapshots containing a search string. Note that I edited this for the article without re-testing it, so verify it still works. A sample invocation follows the script.

#!/bin/sh

Usage() {
 echo "$1 -u <Appliance user> -h <appliance> -j <project> -p <pool> -s <containsString>"
 exit 1
}

PROG=$0
while getopts u:h:s:j:p: flag
do
  case "$flag" in
  p) pool="$OPTARG";;
  j) project="$OPTARG";;
  s) string="$OPTARG";;
  u) user="$OPTARG";;
  h) appliance="$OPTARG";;
  \?) Usage $PROG ;;
  esac
done

[ -z "$pool" -o -z "$project" -o -z "$appliance" -o -z "$user" ] && Usage $PROG

ssh -T $user@$appliance << EOF
script
var MyArguments = {
  pool: '$pool',
  project: '$project',
  string: '$string'
}

function ListSnapshotsbyS (Arg) {
  run('cd /');                          // Make sure we are at root child context level
  run('shares');
  try {
      run('set pool=' + Arg.pool);
  } catch (err) {
      printf ('ERROR: %s\n',err);
      return (err);
  }

  var allSnaps=[];
  try {
      run('select ' + Arg.project + ' snapshots');
      snapshots=list();
      for(i=0; i < snapshots.length; i++) {
          allSnaps.push(snapshots[i]);
      }
      run('done');
  } catch (err) {
      printf ('ERROR: %s\n',err);
      return(err);
  }

  for(i=0; i < allSnaps.length; i++) {
    if (Arg.string != "") {
      var idx = allSnaps[i].indexOf(Arg.string);
      if (idx >= 0) {   // indexOf returns -1 when not found; 0 is a valid hit at the start of the name
        printf('#%i: %s contained search string %s \n', i, allSnaps[i], Arg.string);
      }
    } else {
      printf('#%i: %s \n', i, allSnaps[i]);
    }
  }
  return(0);
}
ListSnapshotsbyS(MyArguments);
.
EOF
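A sample invocation might look like this (the script name, user, appliance host, pool and project are placeholders; substitute your own):

$ sh zfssa_list_snaps.sh -u root -h zfssa01 -p pool-0 -j myproject -s backup

Omitting -s lists every snapshot under the project.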


Jun 05

Ubuntu On a ZFS Root File System for Ubuntu 14.04

This is an update post on making an Ubuntu 14.04 (Trusty Tahr) OS work with a ZFS root volume. Mostly the instructions remain the same as a previous post, so this is a shortened version:

Booting Ubuntu on a ZFS Root File System

A small warning: I did this four times. It worked the first time, but of course I did not document it well the first time, and when I tried again I had grub issues.

Step 1:

$ sudo -i
# apt-add-repository --yes ppa:zfs-native/stable

** Don't need grub ppa as per github instructions???

# apt-get update
# apt-get install debootstrap ubuntu-zfs

** This will take quite a while since the kernel modules compile!

# modprobe zfs
# dmesg | grep ZFS:
[ 1327.346821] ZFS: Loaded module v0.6.2-2~trusty, ZFS pool version 5000, ZFS filesystem version 5

Step 2:

# ls /dev/disk/by-id
ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419
ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part1
ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part2

# fdisk /dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419

** Make partitions as follows

# fdisk -l /dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419
                                                 Device Boot      Start         End      Blocks   Id  System
/dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part1   *        2048      206847      102400   be  Solaris boot
/dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part2          206848    16777215     8285184   bf  Solaris

Step 3:

# mke2fs -m 0 -L /boot/grub -j /dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part1
# zpool create -o ashift=9 rpool /dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part2

# zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
rpool  7.88G   117K  7.87G     0%  1.00x  ONLINE  -

# zfs create rpool/ROOT
# zfs create rpool/ROOT/ubuntu-1
# zfs umount -a
# zfs set mountpoint=/ rpool/ROOT/ubuntu-1
# zpool export rpool

Step 4:

# zpool import -d /dev/disk/by-id -R /mnt rpool
# mkdir -p /mnt/boot/grub
# mount /dev/disk/by-id/scsi-SATA_disk1-part1 /mnt/boot/grub
# debootstrap trusty /mnt

** Note: the system seemed hung here. On a different terminal I saw a "system restart required" message. Weird. If you get this after debootstrap you have to redo Step 1 and Step 4.1. Is this because of only 2G RAM?

Step 5:

# cp /etc/hostname /mnt/etc/
# cp /etc/hosts /mnt/etc/
# tail -1 /mnt/etc/fstab
/dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part1  /boot/grub  auto  defaults  0  1

# mount --bind /dev  /mnt/dev
# mount --bind /proc /mnt/proc
# mount --bind /sys  /mnt/sys
# chroot /mnt /bin/bash --login

# locale-gen en_US.UTF-8
# apt-get update
# apt-get install ubuntu-minimal software-properties-common

# apt-add-repository --yes ppa:zfs-native/stable
# apt-add-repository --yes ppa:zfs-native/grub    <- see the note below on this command
# apt-get update
# apt-get install --no-install-recommends linux-image-generic linux-headers-generic
# apt-get install ubuntu-zfs
# apt-get install grub2-common grub-pc

Quick note on grub issues. During the install I had to create soft links since I could not figure out the grub-probe failures. From memory, I think I created soft links as follows, then purged grub2-common and grub-pc and re-installed:

/dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419 >>>> /dev/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419
/dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part1 >>>> /dev/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part1
/dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part2 >>>> /dev/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part2
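For reference, a sketch of how I read those links (assuming the arrows mean the by-id path gets linked into /dev; adjust the device names to your disk):

# ln -s /dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419 /dev/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419
# ln -s /dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part1 /dev/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part1
# ln -s /dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part2 /dev/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part2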

Update 5.6.14: After I had time to look at it more closely, I see my grub issues all came from the fact that there is no trusty grub PPA, and the apt-add-repository command above sets up a trusty repo. The quickest way to fix this is, after the apt-add-repository --yes ppa:zfs-native/grub command, to edit the file manually to use raring, as follows:

# more /etc/apt/sources.list.d/zfs-native-grub-trusty.list
deb http://ppa.launchpad.net/zfs-native/grub/ubuntu raring main
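If you prefer a one-liner, something like this should make the same change (verify the filename on your system first):

# sed -i 's/trusty/raring/' /etc/apt/sources.list.d/zfs-native-grub-trusty.list
# apt-get update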

Now ready to continue on.

# apt-get install zfs-initramfs
# apt-get dist-upgrade
# passwd root

Step 6:

# grub-probe /
zfs
# ls /boot/grub/i386-pc/zfs*
/boot/grub/i386-pc/zfs.mod  /boot/grub/i386-pc/zfsinfo.mod

# update-initramfs -c -k all
update-initramfs: Generating /boot/initrd.img-3.13.0-24-generic

# grep "boot=zfs" /boot/grub/grub.cfg
	linux	/ROOT/ubuntu-1@/boot/vmlinuz-3.13.0-24-generic root=ZFS=rpool/ROOT/ubuntu-1 ro  boot=zfs
		linux	/ROOT/ubuntu-1@/boot/vmlinuz-3.13.0-24-generic root=ZFS=rpool/ROOT/ubuntu-1 ro  boot=zfs

# grep zfs /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="boot=zfs"

# update-grub

# grub-install $(readlink -f /dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419)
Installation finished. No error reported.

# exit

Step 7:

# umount /mnt/boot/grub
# umount /mnt/dev
# umount /mnt/proc
# umount /mnt/sys
# zfs umount -a
# zpool export rpool
# reboot

Post First Reboot:
- Made a VB snapshot of course
- apt-get install ubuntu-desktop
** grub issues again, so I remade the links again. Later fixed properly by pointing the grub PPA repo to raring instead.
- create a user
- install VB Guest Additions

TODO:
- Check into grub issue and having to create soft links. Something to do with grub not following soft links.


Nov 06

SUN Oracle ZFS Storage Simulator

Previously I wrote an article on getting the ZFS simulator to run on OVM.

Until recently I did not realize that I could upgrade the ZFS simulator on Virtualbox. I kind of assumed the appliance was checking in the background and showing possible upgrades on the Available Updates page. None of my simulators or real ZFS appliances were showing new available updates either. So here is what I did to update the simulator. I assume it will work with the OVM-ported version also.

If you go to this page https://wikis.oracle.com/display/fishworks/Software+Updates you can see what updates are available for your hardware or simulator. Updating is then easy: download the zip file you need, read the release notes, and uncompress the file in your staging area. In the Maintenance > System screen, click the plus sign next to Available Updates, find the .gz file in the folder structure, and upload the image. Then follow the prompts.

My simulator running under Virtualbox is now on the updated version. Note that the simulator was too far behind to skip straight to the latest version, so I had to apply an intermediate 2011.04.24 update first.


Jun 11

Ubuntu root on ZFS upgrading kernels

Update 1:

On a subsequent kernel upgrade to 3.8.0.25 I realized that the zfs modules get compiled just fine when the kernel is upgraded, on Ubuntu 13.04 anyhow. So all you need to do is fix the grub.cfg file because of the bug mentioned below, where "/ROOT/ubuntu-1@" is inserted twice. As a quick fix you could use sed, but be careful to verify your temporary file before copying it into place:

# sed 's/\/ROOT\/ubuntu-1\/@//g' /boot/grub/grub.cfg > /tmp/grub.cfg
# cp /tmp/grub.cfg /boot/grub/grub.cfg
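To verify the temporary file before the copy, eyeballing just the kernel and initrd lines is usually enough; for example:

# grep -E 'linux|initrd' /tmp/grub.cfg
# diff /boot/grub/grub.cfg /tmp/grub.cfg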

I left the rest of the initial post below in case I ever need to boot a live CD and redo the kernel modules from scratch. That approach would probably also work for upgrading the zfs modules from git, I suppose.

Original post below:

This is a follow-on to my running Ubuntu on a ZFS root file system http://blog.ls-al.com/booting-ubuntu-on-a-zfs-root-file-system/ article. I decided to document how to do a kernel upgrade in this configuration. It serves two scenarios: A) a deliberate kernel upgrade, or B) an accidental kernel upgrade that leaves the system unbootable. I wish I could say I wrote this article in response to scenario A, but no, it was in response to scenario B.

In this case the kernel was upgraded from 3.8.0.19 to 3.8.0.23.

Boot a live cd to start.

Install zfs module:

$ sudo -i
# /etc/init.d/lightdm stop
# apt-add-repository --yes ppa:zfs-native/stable
# apt-get update
# apt-get install debootstrap ubuntu-zfs

# modprobe zfs
# dmesg | grep ZFS:
ZFS: Loaded module v0.6.1-rc14, ZFS pool version 5000, ZFS filesystem version 5

Import pool, mount and chroot:

# zpool import -d /dev/disk/by-id -R /mnt rpool
# mount /dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252-part1 /mnt/boot/grub

# mount --bind /dev /mnt/dev
# mount --bind /proc /mnt/proc
# mount --bind /sys /mnt/sys
# chroot /mnt /bin/bash --login

Distribution Upgrade:

# locale-gen en_US.UTF-8
# apt-get update
# apt-get dist-upgrade

** At this point remove any old kernels manually if you want.

Fix grub:

# grub-probe /
 zfs

# ls /boot/grub/i386-pc/zfs*
 /boot/grub/i386-pc/zfs.mod /boot/grub/i386-pc/zfsinfo.mod

# update-initramfs -c -k all

# update-grub

# grep boot=zfs /boot/grub/grub.cfg

** There is currently a bug in the Ubuntu 13.04 grub scripts. Look at the boot lines: there is a duplicated string /ROOT/ubuntu-1/@/ROOT/ubuntu-1@/ in there.

According to https://github.com/zfsonlinux/zfs/issues/1441 it can be fixed with grub-install below. That did not work for me, though, so I fixed it manually.

# grub-install $(readlink -f /dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252)
 Installation finished. No error reported.

Unmount and reboot:

# umount /mnt/boot/grub
# umount /mnt/dev
# umount /mnt/proc
# umount /mnt/sys
# zfs umount -a
# zpool export rpool


May 25

Sun ZFS Storage Appliance Simulator on OVM or KVM

For those familiar with the excellent ZFS file system and the Sun (now Oracle) storage products built on ZFS, the ZFS storage appliance interface is very easy to use and definitely worth considering when purchasing a SAN.

Oracle has a simulator virtual machine to try out the interface. Unfortunately it only runs on Virtualbox, which is fine for those running Virtualbox on a desktop. If you would like to run it on something more accessible to multiple users (KVM or OVM), the Solaris-based image has some issues running.

I recently got the Virtualbox image to run on OVM and subsequently also got it to work on KVM. This is a quick guide on how to get the Virtualbox image to run as a qcow2 image on a KVM hypervisor.
Update: Changed to qed format. If you don't have qed, qcow2 worked for me also.

As I understand it, there was also a vmware image, but it disappeared from the Oracle website. I am not sure why Oracle does not publish at least OVM images or make an effort to run the simulator on OVM. Maybe there is a good reason; it's possible that Oracle wants to discourage it being used on anything other than Virtualbox. Really not sure.

Stage the Image:
Download the simulator (link should be on this page somewhere): http://www.oracle.com/us/products/servers-storage/storage/nas/zfs-appliance-software/overview/index.html

From the vbox-2011.1.0.0.1.1.8 folder, copy the Sun ZFS Storage 7000-disk1.vmdk file to the KVM host and convert it to qed (or qcow2, per the update above).

** Note: on my first attempt I used the qcow format, not qcow2, and had issues starting the image, so make sure to convert to qcow2 or qed.

# qemu-img convert "Sun ZFS Storage 7000-disk1.vmdk" -O qed SunZFSStorage7000-d1.qed
# qemu-img info SunZFSStorage7000-d1.qed
image: SunZFSStorage7000-d1.qed
file format: qed
virtual size: 20G (21474836480 bytes)
disk size: 1.9G
cluster_size: 65536
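Per the update note above, qcow2 also worked for me, so if your qemu-img build lacks qed support the equivalent conversion would be:

# qemu-img convert "Sun ZFS Storage 7000-disk1.vmdk" -O qcow2 SunZFSStorage7000-d1.qcow2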

Create Guest:
Create the KVM guest. Use an IDE disk for SunZFSStorage7000-d1.qed and specify the qed format; a hedged virt-install sketch is below if you prefer scripting it over virt-manager.
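Something along these lines should work (the memory size, image path, ISO path and network name are placeholders, not what I actually used):

# virt-install --name ZfsApp --ram 2048 --vcpus 1 \
    --disk path=/var/lib/libvirt/images/SunZFSStorage7000-d1.qed,format=qed,bus=ide \
    --cdrom /iso/sol-11_1-text-x86.iso \
    --network network=default \
    --graphics vnc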

# virsh dumpxml ZfsApp
(guest XML trimmed; the relevant part is the IDE disk entry pointing at the qed image created above)
Boot new Virtual Machine from the sol-11_1-text-x86.iso. Choose language etc. Select shell when menu appears.

Update ZFS Image:

Now import and mount the ZFS file system.  Find the correct device name and update bootenv.rc:

In my case the disk device name for the boot disk is c7d0. I used format to see the disk device name and then found the correct slice for the root partition. You can use the "par" and "pr" commands within format to see partitions. In my case we are after /dev/dsk/c7d0s0, and we need to find the corresponding entry in the /devices tree.

# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
0. c7d0
/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
Specify disk (enter its number): ^C

# ls -l /dev/dsk/c7d0s0
lrwxrwxrwx 1 root root 50 May 26 10:35 /dev/dsk/c7d0s0 -> ../../devices/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0:a

From above I found the exact device name: /devices/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0:a

Let's go update bootenv.rc now.

# zpool import -f system

# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
system 19.9G 2.03G 17.8G 10% 1.00x ONLINE -

# zfs list | grep root
system/ak-nas-2011.04.24.1.0_1-1.8/root 1.26G 14.4G 1.25G legacy

# mkdir /a
# mount -F zfs system/ak-nas-2011.04.24.1.0_1-1.8/root /a

# zfs set readonly=off system/ak-nas-2011.04.24.1.0_1-1.8/root

# cp /etc/path_to_inst /a/etc
# vi /a/boot/solaris/bootenv.rc
...
setprop boot /devices/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0:a

# tail -1 /a/boot/solaris/bootenv.rc
setprop boot /devices/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0:a

# bootadm update-archive -R /a
updating /a/platform/i86pc/boot_archive
updating /a/platform/i86pc/amd64/boot_archive

# cd /
root@solaris:~/# umount /a
root@solaris:~/# zpool export system

# init 0

On the next boot let's make sure Solaris detects the hardware correctly. When you see grub booting, edit the kernel boot line, add "-arvs" and continue booting.

You probably only need to add "-r", but I used "-arvs" to see more output and to drop into single-user mode in case I needed to do more.

Once you get a prompt in single-user mode, just reboot.

At this point the image booted into the ZFS appliance setup for me and I could configure it. On KVM, adding SATA disks also worked and the ZFS appliance interface could use them for pools.

Update 04.23.14
I recently tried to update a simulator running on KVM but the update did not want to complete. Not sure if too many versions had elapsed or if there is something it does not like about being under KVM. Anyhow, I tried moving an up-to-date (ak-2013.06.05.1.1) image from Virtualbox to KVM and that did work.


May 14

ZFS on Linux resize rpool

In a previous article I set up Ubuntu 13.04 to run off a ZFS root pool. During that setup I used only 4G of an 8G disk for rpool and mentioned that we could just resize later. It turns out ZFS on Linux has a bug, and to get autoexpand to work you need an extra step.

Note: Since I could not find a tool (including parted) that could resize a ZFS physical partition, and my partition layout on the main disk was simple enough, I just ended up booting a livecd, deleting sda2 and recreating it bigger; a rough sketch of that fdisk session follows.
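Roughly, the livecd fdisk dialog would go something like this (a sketch, not a transcript; the important part is recreating sda2 with the SAME start sector so the data is untouched):

# fdisk /dev/sda
#   d -> 2                          delete the old sda2 entry (the data itself is not touched)
#   n -> p -> 2 -> 96390 -> <end>   recreate it at the original start sector with a larger end
#   t -> 2 -> bf                    set the partition type back to Solaris
#   w                               write the new table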

Partitions before I started:

# fdisk -l
...
Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *          63       96389       48163+  be  Solaris boot
/dev/sda2           96390     7903979     3903795   bf  Solaris

Partitions after I deleted and recreated sda2:

# fdisk -l
Disk /dev/sda: 8589 MB, 8589934592 bytes
...
Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *          63       96389       48163+  be  Solaris boot
/dev/sda2           96390    16777215     8340413   bf  Solaris

First boot after partition change:

# df -h
Filesystem           Size  Used Avail Use% Mounted on
rootfs               3.7G  3.1G  580M  85% /

# zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
rpool  3.69G  3.06G   638M    83%  1.00x  ONLINE  -

# zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
rpool                3.06G   580M    31K  /rpool
rpool/ROOT           3.06G   580M    31K  /rpool/ROOT
rpool/ROOT/ubuntu-1  3.06G   580M  3.06G  /

Try a normal autoexpand:

# zpool get autoexpand rpool
NAME   PROPERTY    VALUE   SOURCE
rpool  autoexpand  off     default

# zpool set autoexpand=on rpool

# zpool get autoexpand rpool
NAME   PROPERTY    VALUE   SOURCE
rpool  autoexpand  on      local

Then I rebooted to see if ZFS would pick up the autoexpand change. That did not work, and rebooting is most likely not necessary at all.

I found a bug report on the ZFS on Linux list:
http://rainemu.swishparty.co.uk/cgi-bin/gitweb.cgi?p=zfs;a=commitdiff;h=3b2e400c94eb488cff53cf701554c26d5ebe52e4

Then I tried onlining the rpool device and it worked:

# zpool online -e rpool /dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252-part2

# zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
rpool  7.94G  3.07G  4.87G    38%  1.00x  ONLINE  -

# df -h | grep rpool
rpool/ROOT/ubuntu-1  7.9G  3.1G  4.8G  40% /
rpool                4.8G     0  4.8G   0% /rpool
rpool/ROOT           4.8G     0  4.8G   0% /rpool/ROOT

# df -h | grep rootfs
rootfs               7.9G  3.1G  4.8G  40% /


May 14

Booting Ubuntu on a ZFS Root File System

I recently tried booting Ubuntu 13.04 with a ZFS root file system after the ZFS 0.6.1 kernel module was released. I set this up as a guest on Virtualbox. Please read the instructions here for Ubuntu:

https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-to-a-Native-ZFS-Root-Filesystem

The above link is very accurate for Ubuntu, but I had just a couple of small differences for Ubuntu 13.04 64-bit.

System Requirements

  • 64-bit Ubuntu Live CD. (Not the alternate installer, and not the 32-bit installer!)
  • AMD64 or EM64T compatible computer. (ie: x86-64)
  • 8GB disk storage available.
  • 2GB memory minimum.
  • Virtualbox 4.2.12

Step 1: Prepare The Install Environment

1.1 Start the Ubuntu LiveCD and open a terminal at the desktop.

1.2 Switch to a text terminal using Ctrl-Alt-F2 (Host+F2 in a Virtualbox guest). Shut down X and Unity. Input these commands at the terminal prompt:

$ sudo -i
# /etc/init.d/lightdm stop
# apt-add-repository --yes ppa:zfs-native/stable
# apt-get update
# apt-get install debootstrap ubuntu-zfs

1.3 Check that the ZFS filesystem is installed and available:

# modprobe zfs
# dmesg | grep ZFS:
ZFS: Loaded module v0.6.1-rc14, ZFS pool version 5000, ZFS filesystem version 5

Step 2: Disk Partitioning

2.1 Use fdisk to create two partitions (100MB boot and 4G root) on the primary storage device.  You can expand the root volume later.  Device was /dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252 in my case.  Your disk names will vary.

The partition table should look like this:

root@ubuntu:~# fdisk -l /dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252
Disk /dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252: 8589 MB, 8589934592 bytes
                                                              Device Boot   Start       End   Blocks   Id  System
/dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252-part1   *          63     96389    48163+  be  Solaris boot
/dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252-part2            96390   7903979   3903795  bf  Solaris
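For reference, a rough sketch of the fdisk dialog that produces a layout like the one above (keystrokes shown as comments; sizes are examples):

# fdisk /dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252
#   n -> p -> 1 -> <default start> -> +100M   small boot partition
#   n -> p -> 2 -> <default start> -> +4G     root partition (can be expanded later)
#   t -> 1 -> be                              type Solaris boot
#   t -> 2 -> bf                              type Solaris
#   a -> 1                                    mark partition 1 bootable
#   w                                         write the table and exit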

Step 3: Disk Formatting

3.1 Format the small boot partition created in Step 2 as a filesystem that has stage1 GRUB support, like this:

# mke2fs -m 0 -L /boot/grub -j /dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252-part1

3.2 Create the root pool on the larger partition:

# zpool create -o ashift=9 rpool /dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252-part2

3.3 Create a "ROOT" filesystem in the root pool:

# zfs create rpool/ROOT

3.4 Create a descendant filesystem for the Ubuntu system:

# zfs create rpool/ROOT/ubuntu-1

3.5 Dismount all ZFS filesystems.

# zfs umount -a

3.6 Set the mountpoint property on the root filesystem:

# zfs set mountpoint=/ rpool/ROOT/ubuntu-1

3.7 Set the bootfs property on the root pool.

# zpool set bootfs=rpool/ROOT/ubuntu-1 rpool

3.8 Export the pool:

# zpool export rpool

Don't skip this step. The system is put into an inconsistent state if this command fails or if you reboot at this point.

Step 4: System Installation

4.1 Import the pool:

# zpool import -d /dev/disk/by-id -R /mnt rpool

4.2 Mount the small boot filesystem for GRUB that was created in step 3.1:

# mkdir -p /mnt/boot/grub
# mount /dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252-part1 /mnt/boot/grub

4.3 Install the minimal system:

# debootstrap raring /mnt

Step 5: System Configuration

5.1 Copy these files from the LiveCD environment to the new system:

# cp /etc/hostname /mnt/etc/
# cp /etc/hosts /mnt/etc/

5.2 The /mnt/etc/fstab file should be empty except for a comment. Add this line to the /mnt/etc/fstab file:

/dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252-part1  /boot/grub  auto  defaults  0  1

5.3 Edit the /mnt/etc/network/interfaces file so that it contains something like this:

# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

Customize this file if the new system is not a DHCP client on the LAN. After you have rebooted, if you want network-manager to manage your network connection, remove the eth0 stanza again.
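If the new system needs a static address instead of DHCP, the eth0 stanza would look something like this (the addresses are placeholders):

auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1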

5.4 Make virtual filesystems in the LiveCD environment visible to the new system and chroot into it:

# mount --bind /dev  /mnt/dev
# mount --bind /proc /mnt/proc
# mount --bind /sys  /mnt/sys
# chroot /mnt /bin/bash --login

5.5 Install PPA support in the chroot environment like this:

# locale-gen en_US.UTF-8
# apt-get update
# apt-get install ubuntu-minimal software-properties-common

5.6 Install ZFS in the chroot environment for the new system:

# apt-add-repository --yes ppa:zfs-native/stable
# apt-add-repository --yes ppa:zfs-native/grub
# apt-get update
# apt-get install --no-install-recommends linux-image-generic linux-headers-generic
# apt-get install ubuntu-zfs
# apt-get install grub2-common grub-pc
# apt-get install zfs-initramfs
# apt-get dist-upgrade

Choose /dev/sda if prompted to install the MBR loader.

5.7 Set a root password on the new system:

# passwd root

Hint: If you want the ubuntu-desktop package, then install it after the first reboot. If you install it now, it will start several processes that must be manually stopped before dismounting.

Step 6: GRUB Installation

Remember: All of Step 6 depends on Step 5.4 and must happen inside the chroot environment.

6.1 Verify that the ZFS root filesystem is recognized by GRUB and that the ZFS modules for GRUB are installed:

# grub-probe /
zfs

# ls /boot/grub/i386-pc/zfs*
/boot/grub/i386-pc/zfs.mod  /boot/grub/i386-pc/zfsinfo.mod

6.2 Refresh the initrd files:

# update-initramfs -c -k all

6.3 Update the boot configuration file:

# update-grub

Verify that boot=zfs appears in the boot configuration file:

# grep boot=zfs /boot/grub/grub.cfg
linux /ROOT/ubuntu-1/@/boot/vmlinuz-3.8.0-19-generic root=/dev/sda2 ro boot=zfs $bootfs quiet splash $vt_handoff
linux /ROOT/ubuntu-1/@/ROOT/ubuntu-1@//boot/vmlinuz-3.8.0-19-generic root=ZFS=rpool/ROOT/ubuntu-1/@/ROOT/ubuntu-1 ro boot=zfs $bootfs quiet splash $vt_handoff
linux /ROOT/ubuntu-1/@/ROOT/ubuntu-1@//boot/vmlinuz-3.8.0-19-generic root=ZFS=rpool/ROOT/ubuntu-1/@/ROOT/ubuntu-1 ro single nomodeset boot=zfs $bootfs

Update 1:

If you have issues booting on Ubuntu 13.04: I temporarily fixed two lines manually for the first boot, and then once logged in I fixed /boot/grub/grub.cfg as follows for subsequent boots:

#linux /ROOT/ubuntu-1/@/ROOT/ubuntu-1@//boot/vmlinuz-3.8.0-19-generic root=ZFS=rpool/ROOT/ubuntu-1/@/ROOT/ubuntu-1 ro boot=zfs $bootfs quiet splash $vt_handoff
linux /ROOT/ubuntu-1/@/boot/vmlinuz-3.8.0-19-generic root=/dev/sda2 ro boot=zfs $bootfs quiet splash $vt_handoff

#initrd /ROOT/ubuntu-1/@/ROOT/ubuntu-1@//boot/initrd.img-3.8.0-19-generic
initrd /ROOT/ubuntu-1/@/boot/initrd.img-3.8.0-19-generic

Update 2:

Grub issue on Ubuntu 13.04 is being worked: https://github.com/zfsonlinux/zfs/issues/1441

6.4 Now install the boot loader to the MBR like this:

# grub-install $(readlink -f /dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252)
Installation finished. No error reported.

Do not reboot the computer until you get exactly that result message. Note that you are installing the loader to the whole disk, not a partition.

Note: The readlink is required because recent GRUB releases do not dereference symlinks.
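For example, on this Virtualbox guest the by-id name should resolve to the plain disk node (most likely /dev/sda), making the command above equivalent to grub-install /dev/sda:

# readlink -f /dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252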

Step 7: Cleanup and First Reboot

7.1 Exit from the chroot environment back to the LiveCD environment:

# exit

7.2 Run these commands in the LiveCD environment to dismount all filesystems:

# umount /mnt/boot/grub
# umount /mnt/dev
# umount /mnt/proc
# umount /mnt/sys
# zfs umount -a
# zpool export rpool

The zpool export command must succeed without being forced or the new system will fail to start.  If you have problems exporting rpool make sure you really did unmount all the file systems above.
