Category: ZFS

Dec 22

ZFS Send To Encrypted Volume

Replication from an unencrypted to an encrypted dataset

This is a POC testing ZFS send from one server (unencrypted zvols) to another server (encrypted zvols). I am using an old laptop with the encrypted zvols as the target.

On the target I first replicated existing large datasets I already had from a test into an encrypted zpool to seed the data.

WARNING:

  • saving the encryption key on the file system is not safe
  • losing your encryption key means losing your data permanently

Create encrypted zvol on target

# zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt TANK/ENCRYPTED
Enter passphrase: 
Re-enter passphrase: 

Seed one snapshot of the source DATA zvol as a test

This uses only 4.57G.

# zfs send -v TANK/DATA@2020-12-19_06.45.01--2w | zfs recv -x encryption TANK/ENCRYPTED/DATA
full send of TANK/DATA@2020-12-19_06.45.01--2w  estimated size is 4.52G
total estimated size is 4.52G
TIME        SENT   SNAPSHOT     TANK/DATA@2020-12-19_06.45.01--2w
08:39:06   34.4M   TANK/DATA@2020-12-19_06.45.01--2w
08:39:07    115M   TANK/DATA@2020-12-19_06.45.01--2w
08:39:08    279M   TANK/DATA@2020-12-19_06.45.01--2w
...
08:40:49   4.52G   TANK/DATA@2020-12-19_06.45.01--2w
08:40:50   4.54G   TANK/DATA@2020-12-19_06.45.01--2w

# zfs list TANK/ENCRYPTED/DATA
NAME                  USED  AVAIL     REFER  MOUNTPOINT
TANK/ENCRYPTED/DATA  4.59G  1017G     4.57G     /TANK/ENCRYPTED/DATA

# zfs list -t snapshot TANK/ENCRYPTED/DATA
NAME                                          USED  AVAIL     REFER     MOUNTPOINT
TANK/ENCRYPTED/DATA@2020-12-19_06.45.01--2w  17.4M      -     4.57G  -

Seed all snapshots of the source DATA zvol

This ends up using 22G.

# zfs destroy TANK/ENCRYPTED/DATA
cannot destroy 'TANK/ENCRYPTED/DATA': filesystem has children
use '-r' to destroy the following datasets:
TANK/ENCRYPTED/DATA@2020-12-19_06.45.01--2w

# zfs destroy -r TANK/ENCRYPTED/DATA

# zfs send -R TANK/DATA@2020-12-19_06.45.01--2w | zfs recv -x encryption TANK/ENCRYPTED/DATA

# zfs list TANK/ENCRYPTED/DATA
NAME                  USED  AVAIL     REFER  MOUNTPOINT
TANK/ENCRYPTED/DATA  22.9G   999G     4.57G  /TANK/ENCRYPTED/DATA

# zfs list -t snapshot TANK/ENCRYPTED/DATA | tail -2
TANK/ENCRYPTED/DATA@2020-12-17_06.45.01--2w  11.2M      -     4.57G  -
TANK/ENCRYPTED/DATA@2020-12-19_06.45.01--2w  11.3M      -     4.57G  -

Create ARCHIVE zvol

# zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt TANK/ENCRYPTED/ARCHIVE
Enter passphrase: 
Re-enter passphrase: 

Seed ARCHIVE/MyDocuments

# zfs send -R TANK/ARCHIVE/MyDocuments@2020-12-18_20.15.01--2w | zfs recv -x encryption TANK/ENCRYPTED/ARCHIVE/MyDocuments

Test sending src zvol from source to target (via ssh)

NOTE: Loading the key manually. Will try automatically later.

on target:
# zfs destroy TANK/ENCRYPTED/ARCHIVE/src@2020-12-19_20.15.01--2w

on source:
# zfs send -i TANK/ARCHIVE/src@2020-12-18_20.15.01--2w TANK/ARCHIVE/src@2020-12-19_20.15.01--2w | ssh rrosso@192.168.1.79 sudo zfs recv -x encryption TANK/ENCRYPTED/ARCHIVE/src
cannot receive incremental stream: inherited key must be loaded

on target:
# zfs load-key -r TANK/ENCRYPTED
Enter passphrase for 'TANK/ENCRYPTED': 
Enter passphrase for 'TANK/ENCRYPTED/ARCHIVE': 
2 / 2 key(s) successfully loaded

# zfs rollback TANK/ENCRYPTED/ARCHIVE/src@2020-12-18_20.15.01--2w

on source:
# zfs send -i TANK/ARCHIVE/src@2020-12-18_20.15.01--2w TANK/ARCHIVE/src@2020-12-19_20.15.01--2w | ssh rrosso@192.168.1.79 sudo zfs recv -x encryption TANK/ENCRYPTED/ARCHIVE/src

on target:
# zfs list -t snapshot TANK/ENCRYPTED/ARCHIVE/src | tail -2
TANK/ENCRYPTED/ARCHIVE/src@2020-12-18_20.15.01--2w  1.87M      -      238M  -
TANK/ENCRYPTED/ARCHIVE/src@2020-12-19_20.15.01--2w     0B      -      238M  -

Test using key from a file

NOTE: Do this at your own risk. Key loading should probably be done from a remote KMS or something safer.

on target:
# ls -l .zfs-key 
-rw-r--r-- 1 root root 9 Dec 21 12:49 .zfs-key
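
The key file is nothing more than the passphrase in a plain file, e.g. (hypothetical example; at minimum tighten the permissions if you do this at all):

# echo -n 'mypassphrase' > /root/.zfs-key
# chmod 600 /root/.zfs-key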

on source:
# ssh rrosso@192.168.1.79 sudo zfs load-key -L file:///root/.zfs-key TANK/ENCRYPTED
# ssh rrosso@192.168.1.79 sudo zfs load-key -L file:///root/.zfs-key TANK/ENCRYPTED/ARCHIVE

on target:
# zfs get all TANK/ENCRYPTED | egrep "encryption|keylocation|keyformat|encryptionroot|keystatus"
TANK/ENCRYPTED  encryption            aes-256-gcm            -
TANK/ENCRYPTED  keylocation           prompt                 local
TANK/ENCRYPTED  keyformat             passphrase             -
TANK/ENCRYPTED  encryptionroot        TANK/ENCRYPTED         -
TANK/ENCRYPTED  keystatus             available              -

# zfs get all TANK/ENCRYPTED/ARCHIVE | egrep "encryption|keylocation|keyformat|encryptionroot|keystatus"
TANK/ENCRYPTED/ARCHIVE  encryption            aes-256-gcm              -
TANK/ENCRYPTED/ARCHIVE  keylocation           prompt                   local
TANK/ENCRYPTED/ARCHIVE  keyformat             passphrase               -
TANK/ENCRYPTED/ARCHIVE  encryptionroot        TANK/ENCRYPTED/ARCHIVE   -
TANK/ENCRYPTED/ARCHIVE  keystatus             available                -
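
Note keylocation still shows prompt; the -L URI only overrides it for that one load-key. If you accept keeping the passphrase on disk (see the WARNING at the top), the file could be made the permanent key source instead:

# zfs set keylocation=file:///root/.zfs-key TANK/ENCRYPTED
# zfs set keylocation=file:///root/.zfs-key TANK/ENCRYPTED/ARCHIVE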

** now test with my replication (send/recv) script
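
The script itself is not shown here; a minimal sketch of what such a send/recv wrapper does looks roughly like this (the file name, dataset names, target host and "newest snapshot" selection are all assumptions for illustration):

# cat zfs_repl_sketch.sh
#!/bin/bash
# sketch only -- not the actual replication script referenced above
SRC=TANK/ARCHIVE/src
DST=TANK/ENCRYPTED/ARCHIVE/src
TARGET=rrosso@192.168.1.79

# newest snapshot on the source
NEW=$(zfs list -H -t snapshot -o name -s creation "$SRC" | tail -1)
# newest snapshot already on the target, used as the incremental base
BASE=$(ssh "$TARGET" sudo zfs list -H -t snapshot -o name -s creation "$DST" 2>/dev/null | tail -1 | cut -d@ -f2)

if [ -z "$BASE" ]; then
  # first run: full send
  zfs send "$NEW" | ssh "$TARGET" sudo zfs recv -x encryption "$DST"
else
  # subsequent runs: incremental from the last snapshot the target already has
  zfs send -i "@$BASE" "$NEW" | ssh "$TARGET" sudo zfs recv -x encryption "$DST"
fi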


May 20

Systemctl With Docker and ZFS

I previously wrote about Ubuntu 20.04 as an rpool (boot volume) on OCI (Oracle Cloud Infrastructure). If you are using a ZFS rpool you probably won't have the silly race condition I am writing about here.

So for this POC I was using Docker with an iSCSI-mounted disk for the Docker root folder. Unfortunately there are a couple of issues. The first is not related to Docker: at boot the zpool is simply not imported. Fix A is for that. The second issue is that Docker may not wait for the zpool to be ready before it starts and will just lay down the docker folder you specified in daemon.json; and of course ZFS will then refuse to mount over it, even if the pool was imported with fix A.

Fix A

If you don't already, please create your zpool with the by-id device name, not for example /dev/sdb. If the zpool was already created the wrong way you can fix this after the fact by exporting and importing it and updating the cache, as shown below.
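
A minimal version of that fix (pool name taken from this post's example; no data is touched):

# zpool export tank01
# zpool import -d /dev/disk/by-id tank01
# zpool set cachefile=/etc/zfs/zpool.cache tank01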

You can look at systemctl status zfs-import-cache.service to see what happened with this zpool at boot. There are many opinions on how to fix this; suffice it to say this is what I used, and it has worked reliably for me so far.

Create service

# cat /etc/systemd/system/tank01-pool.service
[Unit]
Description=Zpool start service
After=dev-disk-by\x2did-wwn\x2d0x6081a22b818449d287b13b59a47bc407.device

[Service]
Type=simple
ExecStart=/usr/sbin/zpool import tank01
ExecStartPost=/usr/bin/logger "started ZFS pool tank01"

[Install]
WantedBy=dev-disk-by\x2did-wwn\x2d0x6081a22b818449d287b13b59a47bc407.device

# systemctl daemon-reload
# systemctl enable tank01-pool.service

# systemctl status tank01-pool.service
● tank01-pool.service - Zpool start service
     Loaded: loaded (/etc/systemd/system/tank01-pool.service; enabled; vendor preset: enabled)
     Active: inactive (dead) since Tue 2020-05-19 02:18:05 UTC; 5min ago
   Main PID: 1018 (code=exited, status=0/SUCCESS)

May 19 02:18:01 usph-vmli-do01 systemd[1]: Starting Zpool start service...
May 19 02:18:01 usph-vmli-do01 root[1019]: started ZFS pool tank01
May 19 02:18:01 usph-vmli-do01 systemd[1]: Started Zpool start service.
May 19 02:18:05 usph-vmli-do01 systemd[1]: tank01-pool.service: Succeeded.

To find your exact device

# systemctl list-units --all --full | grep disk | grep tank01
      dev-disk-by\x2did-scsi\x2d36081a22b818449d287b13b59a47bc407\x2dpart1.device                                                                                       loaded    active   plugged   BlockVolume tank01                                                           
      dev-disk-by\x2did-wwn\x2d0x6081a22b818449d287b13b59a47bc407\x2dpart1.device                                                                                       loaded    active   plugged   BlockVolume tank01                                                           
      dev-disk-by\x2dlabel-tank01.device                                                                                                                                loaded    active   plugged   BlockVolume tank01                                                           
      dev-disk-by\x2dpartlabel-zfs\x2d9eb05ecca4da97f6.device                                                                                                           loaded    active   plugged   BlockVolume tank01                                                           
      dev-disk-by\x2dpartuuid-d7d69ee0\x2d4e45\x2d3148\x2daa7a\x2d7cf375782813.device                                                                                   loaded    active   plugged   BlockVolume tank01                                                           
      dev-disk-by\x2dpath-ip\x2d169.254.2.2:3260\x2discsi\x2diqn.2015\x2d12.com.oracleiaas:16bca793\x2dc861\x2d49e8\x2da903\x2dd6b3809fe694\x2dlun\x2d1\x2dpart1.device loaded    active   plugged   BlockVolume tank01                                                           
      dev-disk-by\x2duuid-9554707573611221628.device                                                                                                                    loaded    active   plugged   BlockVolume tank01                                                           

# ls -l /dev/disk/by-id/ | grep sdb
    lrwxrwxrwx 1 root root  9 May 18 22:32 scsi-36081a22b818449d287b13b59a47bc407 -> ../../sdb
    lrwxrwxrwx 1 root root 10 May 18 22:32 scsi-36081a22b818449d287b13b59a47bc407-part1 -> ../../sdb1
    lrwxrwxrwx 1 root root 10 May 18 22:33 scsi-36081a22b818449d287b13b59a47bc407-part9 -> ../../sdb9
    lrwxrwxrwx 1 root root  9 May 18 22:32 wwn-0x6081a22b818449d287b13b59a47bc407 -> ../../sdb
    lrwxrwxrwx 1 root root 10 May 18 22:32 wwn-0x6081a22b818449d287b13b59a47bc407-part1 -> ../../sdb1
    lrwxrwxrwx 1 root root 10 May 18 22:33 wwn-0x6081a22b818449d287b13b59a47bc407-part9 -> ../../sdb9

Fix B

This was done previously; I am just showing for reference how to enable the Docker ZFS storage driver.

# cat /etc/docker/daemon.json
{ 
  "storage-driver": "zfs",
  "data-root": "/tank01/docker"
}
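
The data-root itself needs to be a ZFS dataset on that pool, i.e. something along the lines of (dataset name assumed):

# zfs create tank01/docker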

For the timing issue you have many options in systemd, probably better ones than this. For me, simply delaying Docker a little until iSCSI and the zpool import/mount are done works OK.

# grep sleep /etc/systemd/system/multi-user.target.wants/docker.service 
ExecStartPre=/bin/sleep 60

# systemctl daemon-reload
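
A cleaner option than a fixed sleep (untested here; the unit names are assumptions) would be a drop-in that makes docker.service wait for the pool service and the ZFS mounts:

# cat /etc/systemd/system/docker.service.d/wait-for-zpool.conf
[Unit]
After=tank01-pool.service zfs-mount.service
Requires=tank01-pool.service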


Feb 21

Ubuntu server 20.04 zfs root and OCI

My experiment to:

  • create an Ubuntu 20.04 (not final release as of Feb 20) server in Virtualbox
  • setup server with a ZFS root disk
  • enable serial console
  • Virtualbox export to OCI

As you probably know, newer desktop versions of Ubuntu offer ZFS for the root volume during installation. I am not sure if that is true for Ubuntu 20.04 server installs, and when I looked at the 19.10 ZFS installation I did not necessarily want to use the ZFS layout it chose. My experiment is my own custom case and was also tested on Ubuntu 16.04 and 18.04.

Note this is an experiment; the ZFS layout, boot partition type, LUKS, EFI, multiple (mirrored) boot disks and netplan are all debatable configurations. Mine may not be ideal but it works for my use case.

Also my goal here was to export a bootable/usable OCI (Oracle Cloud Infrastructure) compute instance.

Start by booting a recent desktop live CD. Since I am testing 20.04 (focal) I used that. In the live CD environment open a terminal, sudo to root, and apt install ssh. Start the ssh service and set the ubuntu user's password.
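
For reference, in the live session that amounts to roughly:

$ sudo -i
# apt update
# apt install --yes ssh
# systemctl start ssh
# passwd ubuntu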

$ ssh ubuntu@192.168.1.142
$ sudo -i

##
apt-add-repository universe
apt update
apt install --yes debootstrap gdisk zfs-initramfs

## find correct device name for below
DISK=/dev/disk/by-id/ata-VBOX_HARDDISK_VB26c080f2-2bd16227
USER=ubuntu
HOST=server
POOL=ubuntu

##
sgdisk --zap-all $DISK
sgdisk --zap-all $DISK
sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK
sgdisk     -n2:1M:+512M   -t2:EF00 $DISK
sgdisk     -n3:0:+1G      -t3:BF01 $DISK
sgdisk     -n4:0:0        -t4:BF01 $DISK
sgdisk --print $DISK

##
zpool create -o ashift=12 -d \
    -o feature@async_destroy=enabled \
    -o feature@bookmarks=enabled \
    -o feature@embedded_data=enabled \
    -o feature@empty_bpobj=enabled \
    -o feature@enabled_txg=enabled \
    -o feature@extensible_dataset=enabled \
    -o feature@filesystem_limits=enabled \
    -o feature@hole_birth=enabled \
    -o feature@large_blocks=enabled \
    -o feature@lz4_compress=enabled \
    -o feature@spacemap_histogram=enabled \
    -o feature@userobj_accounting=enabled \
    -O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \
    -O normalization=formD -O relatime=on -O xattr=sa \
    -O mountpoint=/ -R /mnt bpool ${DISK}-part3

zpool create -o ashift=12 \
    -O acltype=posixacl -O canmount=off -O compression=lz4 \
    -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
    -O mountpoint=/ -R /mnt rpool ${DISK}-part4

zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=off -o mountpoint=none bpool/BOOT

zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu

zfs create -o canmount=noauto -o mountpoint=/boot bpool/BOOT/ubuntu
zfs mount bpool/BOOT/ubuntu

## Note: I skipped creating datasets for home, root, var/lib/ /var/log etc etc
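## (for reference, those extra datasets would look something like the following,
##  per the Root-on-ZFS HOWTO referenced at the end -- optional, I skipped them)
## zfs create rpool/home
## zfs create -o mountpoint=/root rpool/home/root
## zfs create -o canmount=off rpool/var
## zfs create rpool/var/log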

##
debootstrap focal /mnt
zfs set devices=off rpool

## 
cat > /mnt/etc/netplan/01-netcfg.yaml<< EOF
network:
  version: 2
  ethernets:
    enp0s3:
      dhcp4: true
EOF

##
cat > /mnt/etc/apt/sources.list<< EOF
deb http://archive.ubuntu.com/ubuntu focal main universe
EOF

##
mount --rbind /dev  /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys  /mnt/sys
chroot /mnt /usr/bin/env DISK=$DISK bash --login

##
locale-gen --purge en_US.UTF-8
update-locale LANG=en_US.UTF-8 LANGUAGE=en_US
dpkg-reconfigure --frontend noninteractive locales
echo US/Central > /etc/timezone    
dpkg-reconfigure -f noninteractive tzdata

##
passwd

##
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes grub-pc
grub-probe /boot

##update-initramfs -u -k all  <- this does not work. try below 
KERNEL=`ls /usr/lib/modules/ | cut -d/ -f1 | sed 's/linux-image-//'`
update-initramfs -u -k $KERNEL

# edit /etc/default/grub
GRUB_DEFAULT=0
#GRUB_TIMEOUT_STYLE=hidden
GRUB_TIMEOUT=5
GRUB_CMDLINE_LINUX_DEFAULT=""
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/ubuntu console=tty1 console=ttyS0,115200"
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"

##
update-grub
grub-install $DISK

##
cat > /etc/systemd/system/zfs-import-bpool.service<< EOF
[Unit]
  DefaultDependencies=no
  Before=zfs-import-scan.service
  Before=zfs-import-cache.service

[Service]
  Type=oneshot
  RemainAfterExit=yes
  ExecStart=/sbin/zpool import -N -o cachefile=none bpool

[Install]
  WantedBy=zfs-import.target
EOF

systemctl enable zfs-import-bpool.service

##
zfs set mountpoint=legacy bpool/BOOT/ubuntu
echo bpool/BOOT/ubuntu /boot zfs \
    nodev,relatime,x-systemd.requires=zfs-import-bpool.service 0 0 >> /etc/fstab
zfs snapshot bpool/BOOT/ubuntu@install
zfs snapshot rpool/ROOT/ubuntu@install

##
systemctl enable serial-getty@ttyS0
apt install ssh
systemctl enable ssh

##
exit
##
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export -a

** reboot

** detach live cd

NOTE: grub had a prompt on reboot but no menu entries. Try the below:

KERNEL=`ls /usr/lib/modules/ | cut -d/ -f1 | sed 's/linux-image-//'`
update-initramfs -u -k $KERNEL
update-grub

REF: https://github.com/zfsonlinux/zfs/wiki/Ubuntu-18.04-Root-on-ZFS


Apr 30

ZFSSA List Replication Actions Status

Using the ZFS appliance REST API to take a quick look at all replication actions and check on progress of long running jobs.

# python zfssa_status_replication_v1.0.py

List Replicated Project Snapshots -- PST Run Date 2017-04-30 06:42:59.386738
date                       target project    pool      bytes_sent      estimated_size  estimated_time_left average_throughput
2017-04-30 07:20:04.232975 zfs2   EBSPRD     POOL1     6.78G           21.3G           01:00:35            4MB/s
2017-04-30 06:42:59.386738 zfs3   EBSPRD     POOL2     0               0               00:00:00            0B/s           
<snip>
# cat zfssa_status_replication_v1.0.py 
#!/usr/bin/env python

# Version 1.0
import sys
import requests, json, os
import datetime

requests.packages.urllib3.disable_warnings()
dt = datetime.datetime.now()

# ZFSSA API URL
url = "https://zfs1:215"

# ZFSSA authentication credentials, it reads username and password from environment variables ZFSUSER and ZFSPASSWORD
#zfsauth = (os.getenv('ZFSUSER'), os.getenv('ZFSPASSWORD'))
zfsauth = ('ROuser','password')

jsonheader={'Content-Type': 'application/json'}

def list_replication_actions_status():
  r = requests.get("%s/api/storage/v1/replication/actions" % (url), auth=zfsauth, verify=False, headers=jsonheader)
  if r.status_code != 200:
    print("Error getting actions %s %s" % (r.status_code, r.text))
  else:
   j = json.loads(r.text)
   #print j
   for action in j["actions"]:
     #print action
     print("{} {:15} {:10} {:15} ".format(dt, action["target"], action["project"], action["pool"])),
     show_one_replication_action(action["id"])

def show_one_replication_action(id):
  r = requests.get("%s/api/storage/v1/replication/actions/%s" % (url,id), auth=zfsauth, verify=False, headers=jsonheader)
  if r.status_code != 200:
    print("Error getting status %s %s" % (r.status_code, r.text))
  else:
   j = json.loads(r.text)
   #print j
   print("{:15} {:15} {:19} {:15}".format(j["action"]["bytes_sent"], j["action"]["estimated_size"], j["action"]["estimated_time_left"], j["action"]["average_throughput"]))

print ("\nList Replicated Project Snapshots -- PST Run Date %s" % dt)
print('{:26} {:15} {:10} {:16} {:15} {:15} {:16} {}'.format('date','target','project','pool','bytes_sent','estimated_size','estimated_time_left','average_throughput'))
list_replication_actions_status()


May 09

Migrating Ubuntu On a ZFS Root File System

I have written a couple articles about this here http://blog.ls-al.com/ubuntu-on-a-zfs-root-file-system-for-ubuntu-15-04/ and here http://blog.ls-al.com/ubuntu-on-a-zfs-root-file-system-for-ubuntu-14-04/

This is a quick update. After using VirtualBox to export and import on a new machine, my guest did not boot all the way up. I suspect I simply did not see the message about a manual/skip file system check, and that the fstab entry for sda1 had changed. Here is what I did. On bootup, try "S" to skip if you are stuck; in my case I was stuck after a message about enabling encryption devices, or something to that effect.

Check fstab and note disk device name.

root@ubuntu:~# cat /etc/fstab
/dev/disk/by-id/ata-VBOX_HARDDISK_VB7e932a52-ef3c41b0-part1 /boot/grub auto defaults 0 1 

Check if this device exists.

root@ubuntu:~# ls -l /dev/disk/by-id/ata-VBOX_HARDDISK_VB7e932a52-ef3c41b0*
ls: cannot access /dev/disk/by-id/ata-VBOX_HARDDISK_VB7e932a52-ef3c41b0*: No such file or directory

Find the correct device name.

root@ubuntu:~# ls -l /dev/disk/by-id/ata-VBOX_HARDDISK*                    
lrwxrwxrwx 1 root root  9 May  9 15:38 /dev/disk/by-id/ata-VBOX_HARDDISK_VBb0249023-5afef528 -> ../../sda
lrwxrwxrwx 1 root root 10 May  9 15:38 /dev/disk/by-id/ata-VBOX_HARDDISK_VBb0249023-5afef528-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 May  9 15:38 /dev/disk/by-id/ata-VBOX_HARDDISK_VBb0249023-5afef528-part2 -> ../../sda2

Keep the old fstab and update it with the correct name.

root@ubuntu:~# cp /etc/fstab /root

root@ubuntu:~# vi /etc/fstab
root@ubuntu:~# sync
root@ubuntu:~# diff /etc/fstab /root/fstab 
1c1
< /dev/disk/by-id/ata-VBOX_HARDDISK_VBb0249023-5afef528-part1 /boot/grub auto defaults 0 1 
---
> /dev/disk/by-id/ata-VBOX_HARDDISK_VB7e932a52-ef3c41b0-part1 /boot/grub auto defaults 0 1 

Try rebooting now.
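
To avoid this the next time the VM moves, the grub partition could instead be mounted by label (hypothetical; this assumes the partition is ext2/3/4 as in my earlier posts):

root@ubuntu:~# e2label /dev/sda1 bootgrub

and then use LABEL=bootgrub in /etc/fstab in place of the by-id device path.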


Apr 20

Ubuntu ZFS replication

Most of you will know that Ubuntu 16.04 will ship with ZFS support in the kernel. Despite the licensing arguments I see this as a positive move. I recently tested btrfs replication (http://blog.ls-al.com/btrfs-replication/), but being a long-time Solaris admin and knowing how easy ZFS makes things, I welcome this development. Here is a quick test of ZFS replication between two Ubuntu 16.04 hosts.

Install zfs utils on both hosts.

# apt-get install zfsutils-linux

Quick and dirty: create zpools backed by an image file, just for this test.

root@u1604b1-m1:~# dd if=/dev/zero of=/tank1.img bs=1G count=1 &> /dev/null
root@u1604b1-m1:~# zpool create tank1 /tank1.img 
root@u1604b1-m1:~# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank1  1008M    50K  1008M         -     0%     0%  1.00x  ONLINE  -

root@u1604b1-m2:~# dd if=/dev/zero of=/tank1.img bs=1G count=1 &> /dev/null
root@u1604b1-m2:~# zpool create tank1 /tank1.img
root@u1604b1-m2:~# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank1  1008M    64K  1008M         -     0%     0%  1.00x  ONLINE  -
root@u1604b1-m2:~# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
tank1    55K   976M    19K  /tank1

Copy a file into the source file system.

root@u1604b1-m1:~# cp /media/sf_E_DRIVE/W.pdf /tank1/
root@u1604b1-m1:~# ls -lh /tank1
total 12M
-rwxr-x--- 1 root root 12M Apr 20 19:22 W.pdf

Take a snapshot.

root@u1604b1-m1:~# zfs snapshot tank1@snapshot1
root@u1604b1-m1:~# zfs list -t snapshot
NAME              USED  AVAIL  REFER  MOUNTPOINT
tank1@snapshot1      0      -  11.2M  -

Verify empty target

root@u1604b1-m2:~# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
tank1    55K   976M    19K  /tank1

root@u1604b1-m2:~# zfs list -t snapshot
no datasets available

Send initial

root@u1604b1-m1:~# zfs send tank1@snapshot1 | ssh root@192.168.2.29 zfs recv tank1
root@192.168.2.29's password: 
cannot receive new filesystem stream: destination 'tank1' exists
must specify -F to overwrite it
warning: cannot send 'tank1@snapshot1': Broken pipe

root@u1604b1-m1:~# zfs send tank1@snapshot1 | ssh root@192.168.2.29 zfs recv -F tank1
root@192.168.2.29's password: 

Check target.

root@u1604b1-m2:~# zfs list -t snapshot
NAME              USED  AVAIL  REFER  MOUNTPOINT
tank1@snapshot1      0      -  11.2M  -
root@u1604b1-m2:~# ls -lh /tank1
total 12M
-rwxr-x--- 1 root root 12M Apr 20 19:22 W.pdf

Let's add one more file and take a new snapshot.

root@u1604b1-m1:~# cp /media/sf_E_DRIVE/S.pdf /tank1
root@u1604b1-m1:~# zfs snapshot tank1@snapshot2

Incremental send

root@u1604b1-m1:~# zfs send -i tank1@snapshot1 tank1@snapshot2 | ssh root@192.168.2.29 zfs recv tank1
root@192.168.2.29's password: 

Check target

root@u1604b1-m2:~# ls -lh /tank1
total 12M
-rwxr-x--- 1 root root 375K Apr 20 19:27 S.pdf
-rwxr-x--- 1 root root  12M Apr 20 19:22 W.pdf

root@u1604b1-m2:~# zfs list -t snapshot
NAME              USED  AVAIL  REFER  MOUNTPOINT
tank1@snapshot1     9K      -  11.2M  -
tank1@snapshot2      0      -  11.5M  -


Mar 12

Btrfs Replication

Since btrfs has send and receive capabilities I took a look at it. The title says replication, but if you are interested in sophisticated enterprise-level storage replication for disaster recovery, or better yet mature data set cloning for non-production instances, you will need to look further. For example the Oracle ZFS appliance has a mature replication engine built on send and receive, but it handles all the replication magic for you. I am not aware of commercial solutions built on btrfs that offer the mature functionality the ZFS appliance can, yet. Note we are not just talking replication but also snapshot cloning, sharing and protection of snapshots on the target end. So for now, here is what I have tested with pure btrfs send and receive.

Some details on machine 1:

root@u1604b1-m1:~# more /etc/issue
Ubuntu Xenial Xerus (development branch) \n \l

root@u1604b1-m1:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
[..]
/dev/sda1       7.5G  3.9G  3.1G  56% /
/dev/sda1       7.5G  3.9G  3.1G  56% /home
[..]

root@u1604b1-m1:~# mount
[..]
/dev/sda1 on / type btrfs (rw,relatime,space_cache,subvolid=257,subvol=/@)
/dev/sda1 on /home type btrfs (rw,relatime,space_cache,subvolid=258,subvol=/@home)
[..]

root@u1604b1-m1:~# btrfs --version
btrfs-progs v4.4

root@u1604b1-m1:~# btrfs subvolume list /
ID 257 gen 47 top level 5 path @
ID 258 gen 47 top level 5 path @home

Test ssh to machine 2:

root@u1604b1-m1:~# ssh root@192.168.2.29 uptime
root@192.168.2.29's password: 
 10:33:23 up 5 min,  1 user,  load average: 0.22, 0.37, 0.19

Machine 2 subvolumes before we receive:

root@u1604b1-m2:~# btrfs subvolume list /
ID 257 gen 40 top level 5 path @
ID 258 gen 40 top level 5 path @home

Create a subvolume, add a file and take a snapshot:

root@u1604b1-m1:~# btrfs subvolume create /tank1
Create subvolume '//tank1'

root@u1604b1-m1:~# btrfs subvolume list /
ID 257 gen 53 top level 5 path @
ID 258 gen 50 top level 5 path @home
ID 264 gen 53 top level 257 path tank1

root@u1604b1-m1:~# ls /tank1

root@u1604b1-m1:~# touch /tank1/rr_test1
root@u1604b1-m1:~# ls -l /tank1/
total 0
-rw-r--r-- 1 root root 0 Mar 10 10:38 rr_test1

root@u1604b1-m1:~# btrfs subvolume snapshot /tank1 /tank1_snapshot
Create a snapshot of '/tank1' in '//tank1_snapshot'

root@u1604b1-m1:~# ls -l /tank1_snapshot/
total 0
-rw-r--r-- 1 root root 0 Mar 10 10:38 rr_test1

root@u1604b1-m1:~# btrfs subvolume list /
ID 257 gen 63 top level 5 path @
ID 258 gen 58 top level 5 path @home
ID 264 gen 59 top level 257 path tank1
ID 265 gen 59 top level 257 path tank1_snapshot

Delete a snapshot:

root@u1604b1-m1:~# btrfs subvolume delete /tank1_snapshot
Delete subvolume (no-commit): '//tank1_snapshot'

root@u1604b1-m1:~# btrfs subvolume list /
ID 257 gen 64 top level 5 path @
ID 258 gen 58 top level 5 path @home
ID 264 gen 59 top level 257 path tank1

Take a read-only snapshot and send to machine 2:

root@u1604b1-m1:~# btrfs subvolume snapshot -r /tank1 /tank1_snapshot

Create a readonly snapshot of '/tank1' in '//tank1_snapshot'

root@u1604b1-m1:~# btrfs send /tank1_snapshot | ssh root@192.168.2.29 "btrfs receive /" 
At subvol /tank1_snapshot
root@192.168.2.29's password: 
At subvol tank1_snapshot

Machine 2 after receiving snapshot1:

root@u1604b1-m2:~# btrfs subvolume list /
ID 257 gen 61 top level 5 path @
ID 258 gen 60 top level 5 path @home
ID 264 gen 62 top level 257 path tank1_snapshot

root@u1604b1-m2:~# ls -l /tank1_snapshot/
total 0
-rw-r--r-- 1 root root 0 Mar 10 10:38 rr_test1

Create one more file:

root@u1604b1-m1:~# touch /tank1/rr_test2

root@u1604b1-m1:~# btrfs subvolume snapshot -r /tank1 /tank1_snapshot2
Create a readonly snapshot of '/tank1' in '//tank1_snapshot2'

root@u1604b1-m1:~# btrfs send /tank1_snapshot2 | ssh root@192.168.2.29 "btrfs receive /" 
At subvol /tank1_snapshot2
root@192.168.2.29's password: 
At subvol tank1_snapshot2

Machine 2 after receiving snapshot 2:

root@u1604b1-m2:~# btrfs subvolume list /
ID 257 gen 65 top level 5 path @
ID 258 gen 60 top level 5 path @home
ID 264 gen 62 top level 257 path tank1_snapshot
ID 265 gen 66 top level 257 path tank1_snapshot2

root@u1604b1-m2:~# ls -l /tank1_snapshot2/
total 0
-rw-r--r-- 1 root root 0 Mar 10 10:38 rr_test1
-rw-r--r-- 1 root root 0 Mar 10 10:53 rr_test2

Incremental send(adding -v for now to see more detail):

root@u1604b1-m1:~# btrfs subvolume snapshot -r /tank1 /tank1_snapshot3
Create a readonly snapshot of '/tank1' in '//tank1_snapshot3'

root@u1604b1-m1:~# btrfs send -vp /tank1_snapshot2 /tank1_snapshot3 | ssh root@192.168.2.29 "btrfs receive /" 
At subvol /tank1_snapshot3
BTRFS_IOC_SEND returned 0
joining genl thread
root@192.168.2.29's password: 
At snapshot tank1_snapshot3

Using larger files to see the effect of incremental sends better:

root@u1604b1-m1:~# cp /media/sf_E_DRIVE/ISO/ubuntu-gnome-15.10-desktop-amd64.iso /tank1/
root@u1604b1-m1:~# du -sh /tank1
1.1G	/tank1

root@u1604b1-m1:~# btrfs subvolume snapshot -r /tank1 /tank1_snapshot6
Create a readonly snapshot of '/tank1' in '//tank1_snapshot6'

root@u1604b1-m1:~# time btrfs send -vp /tank1_snapshot5 /tank1_snapshot6 | ssh root@192.168.2.29 "btrfs receive -v /"  
At subvol /tank1_snapshot6
root@192.168.2.29's password: 
receiving snapshot tank1_snapshot6 uuid=d38490b3-e6ee-3f41-b63d-460d11f8e757, ctransid=272 parent_uuid=ec3f1fb5-9bed-3e4c-9c5b-a6c586b10531, parent_ctransid=201
BTRFS_IOC_SEND returned 0
joining genl thread
BTRFS_IOC_SET_RECEIVED_SUBVOL uuid=d38490b3-e6ee-3f41-b63d-460d11f8e757, stransid=272
At snapshot tank1_snapshot6

real	1m10.578s
user	0m0.696s
sys	0m16.064s

Machine 2 after snapshot6:

total 1.1G
-rw-r--r-- 1 root root    0 Mar 10 10:38 rr_test1
-rw-r--r-- 1 root root   22 Mar 10 12:19 rr_test2
-rwxr-x--- 1 root root 1.1G Mar 10 13:04 ubuntu-gnome-15.10-desktop-amd64.iso

Copy another large file and repeat:

root@u1604b1-m1:~# cp /media/sf_E_DRIVE/ISO/ubuntu-gnome-15.10-desktop-i386.iso /tank1/

root@u1604b1-m1:~# btrfs subvolume snapshot -r /tank1 /tank1_snapshot7
Create a readonly snapshot of '/tank1' in '//tank1_snapshot7'

root@u1604b1-m1:~# time btrfs send -vp /tank1_snapshot6 /tank1_snapshot7 | ssh root@192.168.2.29 "btrfs receive -v /" 
At subvol /tank1_snapshot7
root@192.168.2.29's password: 
receiving snapshot tank1_snapshot7 uuid=5c255311-0f60-4149-91f7-99d9d5acf64c, ctransid=276 parent_uuid=d38490b3-e6ee-3f41-b63d-460d11f8e757, parent_ctransid=272

BTRFS_IOC_SEND returned 0
joining genl thread
BTRFS_IOC_SET_RECEIVED_SUBVOL uuid=5c255311-0f60-4149-91f7-99d9d5acf64c, stransid=276
At snapshot tank1_snapshot7

real	1m17.393s
user	0m0.640s
sys	0m16.716s

Machine 2 after snapshot7:

root@u1604b1-m2:~# ls -lh /tank1_snapshot7
total 2.0G
-rw-r--r-- 1 root root    0 Mar 10 10:38 rr_test1
-rw-r--r-- 1 root root   22 Mar 10 12:19 rr_test2
-rwxr-x--- 1 root root 1.1G Mar 10 13:04 ubuntu-gnome-15.10-desktop-amd64.iso
-rwxr-x--- 1 root root 1.1G Mar 10 13:07 ubuntu-gnome-15.10-desktop-i386.iso

Experiment: on the target I sent a snapshot into a new btrfs subvolume, so it in effect becomes independent. This does not really help us with cloning, since with large datasets it takes too long and it duplicates the space, which nullifies why we like COW.

root@u1604b1-m2:~# btrfs subvolume create /tank1_clone
Create subvolume '//tank1_clone'

root@u1604b1-m2:~# btrfs send /tank1_snapshot3 | btrfs receive  /tank1_clone
At subvol /tank1_snapshot3
At subvol tank1_snapshot3

This was just my initial look-see at what btrfs is capable of and how similar it is to ZFS and the ZFS appliance functionality.

So far at least it seems promising that send and receive are being addressed in btrfs, but I don't think you can easily roll your own solution for A) replication and B) writable snapshots (clones) with btrfs yet. There is too much work involved in building the replication discipline and framework around them.
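
For completeness, the bare loop such a framework would have to wrap is roughly the following (a sketch only; the file name and timestamp-based snapshot naming are my assumptions):

root@u1604b1-m1:~# cat btrfs_repl_sketch.sh
#!/bin/bash
# sketch: take a new read-only snapshot of /tank1 and send the increment to machine 2
PREV=$(ls -d /tank1_snapshot* | sort | tail -1)     # last snapshot already sent
NEW=/tank1_snapshot_$(date +%Y%m%d%H%M%S)
btrfs subvolume snapshot -r /tank1 "$NEW"
btrfs send -p "$PREV" "$NEW" | ssh root@192.168.2.29 "btrfs receive /"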

A few links that I came across that are useful while looking at btrfs and the topics of replication and database cloning.

1. http://rockstor.com/blog/snapshots/data-replication-with-rockstor/
2. http://blog.contractoracle.com/2013/02/oracle-database-on-btrfs-reduce-costs.html
3. http://www.cybertec.at/2015/01/forking-databases-the-art-of-copying-without-copying/
4. https://bdrouvot.wordpress.com/2014/04/25/reduce-resource-consumption-and-clone-in-seconds-your-oracle-virtual-environment-on-your-laptop-using-linux-containers-and-btrfs/
5. https://docs.opensvc.com/storage.btrfs.html
6. https://ilmarkerm.eu/blog/2014/08/cloning-pluggable-database-with-custom-snapshot/
7. http://blog.ronnyegner-consulting.de/2010/02/17/creating-database-clones-with-zfs-really-fast/
8. http://www.seedsofgenius.net/solaris/zfs-vs-btrfs-a-reference


Aug 27

ZFS Storage Appliance RESTful API

Until now I have used ssh and JavaScript to do some of the more advanced automation tasks like snapshots, cloning and replication. I am starting to look at porting to REST, and here is a quick example of two functions.

** I needed the fabric and python-requests Linux packages installed for Python.

#!/usr/bin/env fab
 
from fabric.api import task,hosts,settings,env
from fabric.utils import abort
import requests, json, os
from datetime import date

#requests.packages.urllib3.disable_warnings()  
today = date.today()
 
# ZFSSA API URL
url = "https://192.168.2.200:215"
 
# ZFSSA authentication credentials, it reads username and password from environment variables ZFSUSER and ZFSPASSWORD
zfsauth = (os.getenv('ZFSUSER'), os.getenv('ZFSPASSWORD'))
 
jsonheader={'Content-Type': 'application/json'}

# This gets the pool list
def list_pools():
  r = requests.get("%s/api/storage/v1/pools" % (url), auth=zfsauth, verify=False, headers=jsonheader)
  if r.status_code != 200:
    abort("Error getting pools %s %s" % (r.status_code, r.text))
  j = json.loads(r.text) 
  #print j

  for pool in j["pools"]:
    #print pool
    #{u'status': u'online', u'profile': u'stripe', u'name': u'tank1', u'owner': u'zfsapp1', u'usage': {}, u'href': u'/api/storage/v1/pools/tank1', u'peer': u'00000000-0000-0000-0000-000000000000', u'asn': u'91bdcaef-fea5-e796-8793-f2eefa46200a'}ation
    print "pool: %s and status: %s" % (pool["name"], pool["status"])


# Create project
def create_project(pool, projname):
  # First check if the target project name already exists
  r = requests.get("%s/api/storage/v1/pools/%s/projects/%s" % (url, pool, projname), auth=zfsauth, verify=False, headers=jsonheader)
  if r.status_code != 404:
    abort("ZFS project %s already exists (or other error): %s" % (projname, r.status_code))

  payload = { 'name': projname, 'sharenfs': 'ro' }
  r = requests.post("%s/api/storage/v1/pools/%s/projects" % (url, pool), auth=zfsauth, verify=False, data=json.dumps(payload), headers=jsonheader)
  if r.status_code == 201:
    print "project created"
  else:
    abort("Error creating project %s %s" % (r.status_code, r.text))

print "\n\nTest list pools and create a project\n"
list_pools()
create_project('tank1','proj-01')
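
** To run it, export the credentials first (the script file name here is hypothetical):

$ export ZFSUSER=restuser
$ export ZFSPASSWORD=secret
$ python zfssa_rest_test.py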

References:
http://www.oracle.com/technetwork/articles/servers-storage-admin/zfs-appliance-scripting-1508184.html
http://docs.oracle.com/cd/E51475_01/html/E52433/
http://ilmarkerm.blogspot.com/2014/12/sample-code-using-oracle-zfs-storage.html


Feb 18

Expanding a Solaris RPOOL

For reference I have a couple older articles on this topic here:

Growing a Solaris LDOM rpool

ZFS Grow rpool disk

Specifically, this article covers what I did recently on a SPARC LDOM to expand the RPOOL. The RPOOL OS disk is a SAN-shared LUN in this case.

After growing the LUN to 50G on the back-end I did the following. You may have to try more than once; for me it did not work at first, and I don't know the exact sequence, but a combination of reboot, zpool status, label and verify eventually worked. And yes, I did say zpool status. I have had issues with upgrades in the past where beadm did not activate a new boot environment and zpool status resolved it.

Also, you will notice my boot partition already had an EFI label. I don't recall where, but somewhere along the line in Solaris 11.1 EFI labels became possible for boot disks. If you have an SMI label you may have to try a different approach. And as always, tinkering with partitions and disk labels is dangerous, so you have been warned.
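
Before touching anything it does not hurt to confirm the current state first:

# zpool get autoexpand rpool
# zpool status rpool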

# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
app    49.8G  12.9G  36.9G  25%  1.00x  ONLINE  -
rpool  29.8G  27.6G  2.13G  92%  1.00x  ONLINE  -

# format -e
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1d0 <SUN-ZFS Storage 7330-1.0-50.00GB>
          /virtual-devices@100/channel-devices@200/disk@0
       1. c1d1 <Unknown-Unknown-0001-50.00GB>
          /virtual-devices@100/channel-devices@200/disk@1
Specify disk (enter its number): 0
selecting c1d0
[disk formatted]
/dev/dsk/c1d0s0 is part of active ZFS pool rpool. Please see zpool(1M).

[..]

format> verify

Volume name = <        >
ascii name  = <SUN-ZFS Storage 7330-1.0-50.00GB>
bytes/sector    =  512
sectors = 104857599
accessible sectors = 104857566
Part      Tag    Flag     First Sector         Size         Last Sector
  0        usr    wm               256       49.99GB          104841182
  1 unassigned    wm                 0           0               0
  2 unassigned    wm                 0           0               0
  3 unassigned    wm                 0           0               0
  4 unassigned    wm                 0           0               0
  5 unassigned    wm                 0           0               0
  6 unassigned    wm                 0           0               0
  7 unassigned    wm                 0           0               0
  8   reserved    wm         104841183        8.00MB          104857566

[..]

format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]:
Ready to label disk, continue? y

format> q

# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
app    49.8G  12.9G  36.9G  25%  1.00x  ONLINE  -
rpool  29.8G  27.6G  2.13G  92%  1.00x  ONLINE  -

# zpool set autoexpand=on rpool

# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
app    49.8G  12.9G  36.9G  25%  1.00x  ONLINE  -
rpool  49.8G  27.6G  22.1G  55%  1.00x  ONLINE  -


Jan 01

Ubuntu On a ZFS Root File System for Ubuntu 15.04

Start Update 03.18.15:
This is untested, but I suspect that if you are upgrading the kernel to 3.19.0 and you have issues, you may need to change to the daily Vivid PPA. In my initial post I used stable and Utopic since Vivid was very new.
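(Switching would just be, untested:

# apt-add-repository --yes ppa:zfs-native/daily

and then adjusting the release name in the resulting sources list as in step 1.)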
End Update 03.18.15:

This is what I did to make an Ubuntu 15.04 VirtualBox guest (works for Ubuntu 14.10 also) boot with ZFS as the root file system.

Previous articles:

Ubuntu On a ZFS Root File System for Ubuntu 14.04

Booting Ubuntu on a ZFS Root File System

SYSTEM REQUIREMENTS

  • 64-bit Ubuntu Live CD (not the alternate or 32-bit installer)
  • AMD64 or EM64T compatible computer (i.e. x86-64)
  • 15GB disk, 2GB memory minimum, Virtualbox

Create a new VM. I use bridged networking in case I want to use ssh during the setup. Start the Ubuntu LiveCD and open a terminal at the desktop (I used the 15.04 64-bit alpha CD), then Control-F1 to the first text terminal.

1. Setup repo.

$ sudo -i
# /etc/init.d/lightdm stop
# apt-add-repository --yes ppa:zfs-native/stable

** Change /etc/apt/sources.list.d/... down to trusty. Or utopic possibly. I did not test.

2. Install zfs.

# apt-get update
# apt-get install debootstrap ubuntu-zfs
# dmesg | grep ZFS:
[ 3900.114234] ZFS: Loaded module v0.6.3-4~trusty, ZFS pool version 5000, ZFS filesystem version 5

** takes a long time to compile initial module for 3.16.0-28-generic

3. Install ssh.

Using an ssh terminal makes it easier to copy and paste, both for command execution and for documentation. However, with this bare-bones environment openssh might not install very cleanly at this point. I played with it a little to get at least sshd running.

# apt-get install ssh
# /etc/init.d/ssh start
# /usr/sbin/sshd

** check with ps if ssh process is running
** edit sshd_config and allow root login
** set root passwd

4. Setup disk partitions.

# fdisk -l
Disk /dev/loop0: 1 GiB, 1103351808 bytes, 2154984 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sda: 15 GiB, 16106127360 bytes, 31457280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xc2c7def9

Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 411647 409600 200M be Solaris boot
/dev/sda2 411648 31457279 31045632 14.8G bf Solaris

5. Format partitions.

# mke2fs -m 0 -L /boot/grub -j /dev/disk/by-id/ata-VBOX_HARDDISK_VBf3c0d5ba-e6881c52-part1
# zpool create -o ashift=9 rpool /dev/disk/by-id/ata-VBOX_HARDDISK_VBf3c0d5ba-e6881c52-part2

6. ZFS Setup and Mountpoints.

# zfs create rpool/ROOT
# zfs create rpool/ROOT/ubuntu-1
# zfs umount -a
# zfs set mountpoint=/ rpool/ROOT/ubuntu-1
# zpool set bootfs=rpool/ROOT/ubuntu-1 rpool
# zpool export rpool
# zpool import -d /dev/disk/by-id -R /mnt rpool
# mkdir -p /mnt/boot/grub
# mount /dev/disk/by-id/ata-VBOX_HARDDISK_VBf3c0d5ba-e6881c52-part1 /mnt/boot/grub

7. Install Ubuntu 15.04 on /mnt

# debootstrap vivid /mnt
I: Retrieving Release
...
I: Base system installed successfully.

# cp /etc/hostname /mnt/etc/
# cp /etc/hosts /mnt/etc/
# vi /mnt/etc/fstab
# cat /mnt/etc/fstab
/dev/disk/by-id/ata-VBOX_HARDDISK_VBf3c0d5ba-e6881c52-part1 /boot/grub auto defaults 0 1

# cat /mnt/etc/network/interfaces
# interfaces(5) file used by ifup(8) and ifdown(8)
# Include files from /etc/network/interfaces.d:
source-directory /etc/network/interfaces.d
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

8. Setup chroot to update and install ubuntu-minimal

# mount --bind /dev /mnt/dev
# mount --bind /proc /mnt/proc
# mount --bind /sys /mnt/sys
# chroot /mnt /bin/bash --login
# locale-gen en_US.UTF-8

# apt-get update
# apt-get install ubuntu-minimal software-properties-common

9. Setup ZOL repo

# apt-add-repository --yes ppa:zfs-native/stable

** leave grub repo off for now.

# cat /etc/apt/sources.list.d/zfs-native-ubuntu-stable-vivid.list
deb http://ppa.launchpad.net/zfs-native/stable/ubuntu trusty main
# deb-src http://ppa.launchpad.net/zfs-native/stable/ubuntu vivid main
# apt-get update
# apt-get install --no-install-recommends linux-image-generic linux-headers-generic

# apt-get install ubuntu-zfs

** skipped grub stuff for this pass

# apt-get install zfs-initramfs
# apt-get dist-upgrade

10. Make sure root has access

# passwd root

11. Test grub

# grub-probe /
bash: grub-probe: command not found

12. Use older patched grub from ZOL project

# apt-add-repository --yes ppa:zfs-native/grub
gpg: keyring `/tmp/tmp5urr4u7g/secring.gpg' created
gpg: keyring `/tmp/tmp5urr4u7g/pubring.gpg' created
gpg: requesting key F6B0FC61 from hkp server keyserver.ubuntu.com
gpg: /tmp/tmp5urr4u7g/trustdb.gpg: trustdb created
gpg: key F6B0FC61: public key "Launchpad PPA for Native ZFS for Linux" imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
OK

# cat /etc/apt/sources.list.d/zfs-native-ubuntu-grub-vivid.list
deb http://ppa.launchpad.net/zfs-native/grub/ubuntu raring main

# apt-get install grub2-common grub-pc
Installation finished. No error reported.
/usr/sbin/grub-probe: error: failed to get canonical path of `/dev/ata-VBOX_HARDDISK_VBf3c0d5ba-e6881c52-part2'.

** As you can see grub has issues with the /dev path.

# ln -s /dev/disk/by-id/ata-VBOX_HARDDISK_VBf3c0d5ba-e6881c52-part2 /dev/ata-VBOX_HARDDISK_VBf3c0d5ba-e6881c52-part2
# apt-get install grub2-common grub-pc

# grub-probe /
zfs
# ls /boot/grub/i386-pc/zfs*
/boot/grub/i386-pc/zfscrypt.mod /boot/grub/i386-pc/zfsinfo.mod /boot/grub/i386-pc/zfs.mod

** Note at the end I show a udev rule that can help work around this path issue.

# update-initramfs -c -k all

# grep "boot=zfs" /boot/grub/grub.cfg
linux /ROOT/ubuntu-1@/boot/vmlinuz-3.16.0-28-generic root=ZFS=rpool/ROOT/ubuntu-1 ro boot=zfs quiet splash $vt_handoff

# grep "boot=zfs" /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash boot=zfs"

# update-grub
Generating grub configuration file ...
Warning: Setting GRUB_TIMEOUT to a non-zero value when GRUB_HIDDEN_TIMEOUT is set is no longer supported.
Found linux image: /boot/vmlinuz-3.16.0-28-generic
Found initrd image: /boot/initrd.img-3.16.0-28-generic
done

# grub-install $(readlink -f /dev/disk/by-id/ata-VBOX_HARDDISK_VBf3c0d5ba-e6881c52)
Installing for i386-pc platform.
Installation finished. No error reported.

# exit
logout

13. unmount chroot and shutdown

# umount /mnt/boot/grub
# umount /mnt/dev
# umount /mnt/proc
# umount /mnt/sys
# zfs umount -a
# zpool export rpool
# init 0

14. Cleanup and finish
** Create snapshot
** bootup
** install ssh and configure root to login in sshd_config, restart ssh

15. udev rule for grub bug

# cat /etc/udev/rules.d/70-zfs-grub-fix.rules
ENV{DEVTYPE}=="partition", IMPORT{parent}="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL} $env{ID_BUS}-$env{ID_SERIAL}-part%n"

# /etc/init.d/udev restart

16. install desktop software

# apt-get install ubuntu-desktop
