ZFSSA List Replication Actions Status

This is a quick script that uses the ZFS appliance REST API to take a look at all replication actions and check on the progress of long-running jobs.

# python zfssa_status_replication_v1.0.py

List Replicated Project Snapshots -- PST Run Date 2017-04-30 06:42:59.386738
date                       target project    pool      bytes_sent      estimated_size  estimated_time_left average_throughput
2017-04-30 07:20:04.232975 zfs2   EBSPRD     POOL1     6.78G           21.3G           01:00:35            4MB/s
2017-04-30 06:42:59.386738 zfs3   EBSPRD     POOL2     0               0               00:00:00            0B/s           
<snip>
# cat zfssa_status_replication_v1.0.py 
#!/usr/bin/env python

# Version 1.0
import sys
import requests, json, os
import datetime

requests.packages.urllib3.disable_warnings()
dt = datetime.datetime.now()

# ZFSSA API URL
url = "https://zfs1:215"

# ZFSSA authentication credentials; either read them from the environment variables ZFSUSER and ZFSPASSWORD or hard-code them (not recommended)
#zfsauth = (os.getenv('ZFSUSER'), os.getenv('ZFSPASSWORD'))
zfsauth = ('ROuser','password')

jsonheader={'Content-Type': 'application/json'}

def list_replication_actions_status():
  r = requests.get("%s/api/storage/v1/replication/actions" % (url), auth=zfsauth, verify=False, headers=jsonheader)
  if r.status_code != 200:
    print("Error getting actions %s %s" % (r.status_code, r.text))
  else:
   j = json.loads(r.text)
   #print j
   for action in j["actions"]:
     #print action
     print("{} {:15} {:10} {:15} ".format(dt, action["target"], action["project"], action["pool"])),  # trailing comma keeps the detail columns on the same line (Python 2 print statement)
     show_one_replication_action(action["id"])

def show_one_replication_action(id):
  r = requests.get("%s/api/storage/v1/replication/actions/%s" % (url,id), auth=zfsauth, verify=False, headers=jsonheader)
  if r.status_code != 200:
    print("Error getting status %s %s" % (r.status_code, r.text))
  else:
   j = json.loads(r.text)
   #print j
   print("{:15} {:15} {:19} {:15}".format(j["action"]["bytes_sent"], j["action"]["estimated_size"], j["action"]["estimated_time_left"], j["action"]["average_throughput"]))

print ("\nList Replicated Project Snapshots -- PST Run Date %s" % dt)
print('{:26} {:15} {:10} {:16} {:15} {:15} {:16} {}'.format('date','target','project','pool','bytes_sent','estimated_size','estimated_time_left','average_throughput'))
list_replication_actions_status()
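
If you just want a quick peek without the script, the same endpoint can be queried with curl. This is only an illustration using the same read-only credentials and hostname as the script above.

# curl -sk -u ROuser:password -H "Content-Type: application/json" https://zfs1:215/api/storage/v1/replication/actions | python -m json.tool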

Migrating Ubuntu On a ZFS Root File System

I have written a couple of articles about this, here http://blog.ls-al.com/ubuntu-on-a-zfs-root-file-system-for-ubuntu-15-04/ and here http://blog.ls-al.com/ubuntu-on-a-zfs-root-file-system-for-ubuntu-14-04/

This is a quick update. After using VirtualBox to export and import the guest on a new machine, it did not boot up all the way. I suspect I was just not seeing the message about a manual/skip check of a file system, and the fstab entry for sda1 had changed. Here is what I did. On bootup, try “S” for skip if you are stuck. In my case I was stuck after a message about enabling encryption devices, or something to that effect.

Check fstab and note the disk device name.

root@ubuntu:~# cat /etc/fstab
/dev/disk/by-id/ata-VBOX_HARDDISK_VB7e932a52-ef3c41b0-part1 /boot/grub auto defaults 0 1 

Check if this device exists.

root@ubuntu:~# ls -l /dev/disk/by-id/ata-VBOX_HARDDISK_VB7e932a52-ef3c41b0*
ls: cannot access /dev/disk/by-id/ata-VBOX_HARDDISK_VB7e932a52-ef3c41b0*: No such file or directory

Find the correct device name.

root@ubuntu:~# ls -l /dev/disk/by-id/ata-VBOX_HARDDISK*                    
lrwxrwxrwx 1 root root  9 May  9 15:38 /dev/disk/by-id/ata-VBOX_HARDDISK_VBb0249023-5afef528 -> ../../sda
lrwxrwxrwx 1 root root 10 May  9 15:38 /dev/disk/by-id/ata-VBOX_HARDDISK_VBb0249023-5afef528-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 May  9 15:38 /dev/disk/by-id/ata-VBOX_HARDDISK_VBb0249023-5afef528-part2 -> ../../sda2
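
If you prefer not to guess at the VBOX prefix, a reverse lookup on the whole by-id directory works too (just an alternative way to get the same information):

root@ubuntu:~# ls -l /dev/disk/by-id/ | grep -w sda1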

Keep a copy of the old fstab and update it with the correct name.

root@ubuntu:~# cp /etc/fstab /root

root@ubuntu:~# vi /etc/fstab
root@ubuntu:~# sync
root@ubuntu:~# diff /etc/fstab /root/fstab 
1c1
< /dev/disk/by-id/ata-VBOX_HARDDISK_VBb0249023-5afef528-part1 /boot/grub auto defaults 0 1 
---
> /dev/disk/by-id/ata-VBOX_HARDDISK_VB7e932a52-ef3c41b0-part1 /boot/grub auto defaults 0 1 

Try rebooting now.

Ubuntu ZFS Replication

Most of you will know that Ubuntu 16.04 will have ZFS merged into the kernel. Despite the licensing arguments, I see this as a positive move. I recently tested btrfs replication (http://blog.ls-al.com/btrfs-replication/), but being a long-time Solaris admin and knowing how easy ZFS makes things, I welcome this development. Here is a quick test of ZFS replication between two Ubuntu 16.04 hosts.

Install zfs utils on both hosts.

# apt-get install zfsutils-linux

Quick and dirty: create zpools backed by an image file, just for the test.

root@u1604b1-m1:~# dd if=/dev/zero of=/tank1.img bs=1G count=1 &> /dev/null
root@u1604b1-m1:~# zpool create tank1 /tank1.img 
root@u1604b1-m1:~# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank1  1008M    50K  1008M         -     0%     0%  1.00x  ONLINE  -

root@u1604b1-m2:~# dd if=/dev/zero of=/tank1.img bs=1G count=1 &> /dev/null
root@u1604b1-m2:~# zpool create tank1 /tank1.img
root@u1604b1-m2:~# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank1  1008M    64K  1008M         -     0%     0%  1.00x  ONLINE  -
root@u1604b1-m2:~# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
tank1    55K   976M    19K  /tank1

Copy a file into the source file system.

root@u1604b1-m1:~# cp /media/sf_E_DRIVE/W.pdf /tank1/
root@u1604b1-m1:~# ls -lh /tank1
total 12M
-rwxr-x--- 1 root root 12M Apr 20 19:22 W.pdf

Take a snapshot.

root@u1604b1-m1:~# zfs snapshot tank1@snapshot1
root@u1604b1-m1:~# zfs list -t snapshot
NAME              USED  AVAIL  REFER  MOUNTPOINT
tank1@snapshot1      0      -  11.2M  -

Verify empty target

root@u1604b1-m2:~# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
tank1    55K   976M    19K  /tank1

root@u1604b1-m2:~# zfs list -t snapshot
no datasets available

Send initial

root@u1604b1-m1:~# zfs send tank1@snapshot1 | ssh root@192.168.2.29 zfs recv tank1
root@192.168.2.29's password: 
cannot receive new filesystem stream: destination 'tank1' exists
must specify -F to overwrite it
warning: cannot send 'tank1@snapshot1': Broken pipe

root@u1604b1-m1:~# zfs send tank1@snapshot1 | ssh root@192.168.2.29 zfs recv -F tank1
root@192.168.2.29's password: 

Check target.

root@u1604b1-m2:~# zfs list -t snapshot
NAME              USED  AVAIL  REFER  MOUNTPOINT
tank1@snapshot1      0      -  11.2M  -
root@u1604b1-m2:~# ls -lh /tank1
total 12M
-rwxr-x--- 1 root root 12M Apr 20 19:22 W.pdf

Let's copy one more file and take a new snapshot.

root@u1604b1-m1:~# cp /media/sf_E_DRIVE/S.pdf /tank1
root@u1604b1-m1:~# zfs snapshot tank1@snapshot2

Incremental send

root@u1604b1-m1:~# zfs send -i tank1@snapshot1 tank1@snapshot2 | ssh root@192.168.2.29 zfs recv tank1
root@192.168.2.29's password: 

Check target

root@u1604b1-m2:~# ls -lh /tank1
total 12M
-rwxr-x--- 1 root root 375K Apr 20 19:27 S.pdf
-rwxr-x--- 1 root root  12M Apr 20 19:22 W.pdf

root@u1604b1-m2:~# zfs list -t snapshot
NAME              USED  AVAIL  REFER  MOUNTPOINT
tank1@snapshot1     9K      -  11.2M  -
tank1@snapshot2      0      -  11.5M  -
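
To make this repeatable, the snapshot plus send/receive pair is easy to wrap in a small script. Below is a minimal sketch, not a tested production script: the target host, file system name, and snapshot naming are example values, it assumes any previous snapshot still exists on both sides, and it does no error handling or retention.

#!/bin/sh
# Minimal ZFS replication sketch -- example values, no error handling
TARGET=root@192.168.2.29
FS=tank1

# Most recent existing snapshot (empty on the first run)
PREV=$(zfs list -H -t snapshot -o name -s creation | grep "^${FS}@" | tail -1 | cut -d@ -f2)
NEW=snap-$(date +%Y%m%d%H%M%S)

zfs snapshot ${FS}@${NEW}
if [ -n "$PREV" ]; then
  # Incremental send based on the newest snapshot already on both sides
  zfs send -i ${FS}@${PREV} ${FS}@${NEW} | ssh ${TARGET} zfs recv ${FS}
else
  # First run: full send, -F overwrites the empty pool on the target
  zfs send ${FS}@${NEW} | ssh ${TARGET} zfs recv -F ${FS}
fi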

Btrfs Replication

Since btrfs has send and receive capabilities I took a look at it. The title says replication, but if you are interested in sophisticated, enterprise-level storage replication for disaster recovery, or better yet mature data set cloning for non-production instances, you will need to look further. For example, the Oracle ZFS appliance has a mature replication engine built on send and receive, but it handles all the replication magic for you. I am not aware of commercial solutions built on btrfs that have the mature functionality the ZFS appliance can offer yet. Note we are not just talking replication but also snapshot cloning, sharing, and protection of snapshots on the target end. So for now here is what I have tested for pure btrfs send and receive.

Some details on machine 1:

root@u1604b1-m1:~# more /etc/issue
Ubuntu Xenial Xerus (development branch) \n \l

root@u1604b1-m1:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
[..]
/dev/sda1       7.5G  3.9G  3.1G  56% /
/dev/sda1       7.5G  3.9G  3.1G  56% /home
[..]

root@u1604b1-m1:~# mount
[..]
/dev/sda1 on / type btrfs (rw,relatime,space_cache,subvolid=257,subvol=/@)
/dev/sda1 on /home type btrfs (rw,relatime,space_cache,subvolid=258,subvol=/@home)
[..]

root@u1604b1-m1:~# btrfs --version
btrfs-progs v4.4

root@u1604b1-m1:~# btrfs subvolume list /
ID 257 gen 47 top level 5 path @
ID 258 gen 47 top level 5 path @home

Test ssh to machine 2:

root@u1604b1-m1:~# ssh root@192.168.2.29 uptime
root@192.168.2.29's password: 
 10:33:23 up 5 min,  1 user,  load average: 0.22, 0.37, 0.19

Machine 2 subvolumes before we receive:

root@u1604b1-m2:~# btrfs subvolume list /
ID 257 gen 40 top level 5 path @
ID 258 gen 40 top level 5 path @home

Create a subvolume, add a file and take a snapshot:

root@u1604b1-m1:~# btrfs subvolume create /tank1
Create subvolume '//tank1'

root@u1604b1-m1:~# btrfs subvolume list /
ID 257 gen 53 top level 5 path @
ID 258 gen 50 top level 5 path @home
ID 264 gen 53 top level 257 path tank1

root@u1604b1-m1:~# ls /tank1

root@u1604b1-m1:~# touch /tank1/rr_test1
root@u1604b1-m1:~# ls -l /tank1/
total 0
-rw-r--r-- 1 root root 0 Mar 10 10:38 rr_test1

root@u1604b1-m1:~# btrfs subvolume snapshot /tank1 /tank1_snapshot
Create a snapshot of '/tank1' in '//tank1_snapshot'

root@u1604b1-m1:~# ls -l /tank1_snapshot/
total 0
-rw-r--r-- 1 root root 0 Mar 10 10:38 rr_test1

root@u1604b1-m1:~# btrfs subvolume list /
ID 257 gen 63 top level 5 path @
ID 258 gen 58 top level 5 path @home
ID 264 gen 59 top level 257 path tank1
ID 265 gen 59 top level 257 path tank1_snapshot

Delete a snapshot:

root@u1604b1-m1:~# btrfs subvolume delete /tank1_snapshot
Delete subvolume (no-commit): '//tank1_snapshot'

root@u1604b1-m1:~# btrfs subvolume list /
ID 257 gen 64 top level 5 path @
ID 258 gen 58 top level 5 path @home
ID 264 gen 59 top level 257 path tank1

Take a read-only snapshot and send to machine 2:

root@u1604b1-m1:~# btrfs subvolume snapshot -r /tank1 /tank1_snapshot

Create a readonly snapshot of '/tank1' in '//tank1_snapshot'

root@u1604b1-m1:~# btrfs send /tank1_snapshot | ssh root@192.168.2.29 "btrfs receive /" 
At subvol /tank1_snapshot
root@192.168.2.29's password: 
At subvol tank1_snapshot

Machine 2 after receiving snapshot1:


root@u1604b1-m2:~# btrfs subvolume list /
ID 257 gen 61 top level 5 path @
ID 258 gen 60 top level 5 path @home
ID 264 gen 62 top level 257 path tank1_snapshot

root@u1604b1-m2:~# ls -l /tank1_snapshot/
total 0
-rw-r--r-- 1 root root 0 Mar 10 10:38 rr_test1

Create one more file:

root@u1604b1-m1:~# touch /tank1/rr_test2

root@u1604b1-m1:~# btrfs subvolume snapshot -r /tank1 /tank1_snapshot2
Create a readonly snapshot of '/tank1' in '//tank1_snapshot2'

root@u1604b1-m1:~# btrfs send /tank1_snapshot2 | ssh root@192.168.2.29 "btrfs receive /" 
At subvol /tank1_snapshot2
root@192.168.2.29's password: 
At subvol tank1_snapshot2

Machine 2 after receiving snapshot 2:

root@u1604b1-m2:~# btrfs subvolume list /
ID 257 gen 65 top level 5 path @
ID 258 gen 60 top level 5 path @home
ID 264 gen 62 top level 257 path tank1_snapshot
ID 265 gen 66 top level 257 path tank1_snapshot2

root@u1604b1-m2:~# ls -l /tank1_snapshot2/
total 0
-rw-r--r-- 1 root root 0 Mar 10 10:38 rr_test1
-rw-r--r-- 1 root root 0 Mar 10 10:53 rr_test2

Incremental send (adding -v for now to see more detail):

root@u1604b1-m1:~# btrfs subvolume snapshot -r /tank1 /tank1_snapshot3
Create a readonly snapshot of '/tank1' in '//tank1_snapshot3'

root@u1604b1-m1:~# btrfs send -vp /tank1_snapshot2 /tank1_snapshot3 | ssh root@192.168.2.29 "btrfs receive /" 
At subvol /tank1_snapshot3
BTRFS_IOC_SEND returned 0
joining genl thread
root@192.168.2.29's password: 
At snapshot tank1_snapshot3

Using larger files to see the effect of the incremental send better:

root@u1604b1-m1:~# cp /media/sf_E_DRIVE/ISO/ubuntu-gnome-15.10-desktop-amd64.iso /tank1/
root@u1604b1-m1:~# du -sh /tank1
1.1G	/tank1

root@u1604b1-m1:~# btrfs subvolume snapshot -r /tank1 /tank1_snapshot6
Create a readonly snapshot of '/tank1' in '//tank1_snapshot6'

root@u1604b1-m1:~# time btrfs send -vp /tank1_snapshot5 /tank1_snapshot6 | ssh root@192.168.2.29 "btrfs receive -v /"  
At subvol /tank1_snapshot6
root@192.168.2.29's password: 
receiving snapshot tank1_snapshot6 uuid=d38490b3-e6ee-3f41-b63d-460d11f8e757, ctransid=272 parent_uuid=ec3f1fb5-9bed-3e4c-9c5b-a6c586b10531, parent_ctransid=201
BTRFS_IOC_SEND returned 0
joining genl thread
BTRFS_IOC_SET_RECEIVED_SUBVOL uuid=d38490b3-e6ee-3f41-b63d-460d11f8e757, stransid=272
At snapshot tank1_snapshot6

real	1m10.578s
user	0m0.696s
sys	0m16.064s

Machine 2 after snapshot6:

root@u1604b1-m2:~# ls -lh /tank1_snapshot6
total 1.1G
-rw-r--r-- 1 root root    0 Mar 10 10:38 rr_test1
-rw-r--r-- 1 root root   22 Mar 10 12:19 rr_test2
-rwxr-x--- 1 root root 1.1G Mar 10 13:04 ubuntu-gnome-15.10-desktop-amd64.iso
root@u1604b1-m1:~# cp /media/sf_E_DRIVE/ISO/ubuntu-gnome-15.10-desktop-i386.iso /tank1/

root@u1604b1-m1:~# btrfs subvolume snapshot -r /tank1 /tank1_snapshot7
Create a readonly snapshot of '/tank1' in '//tank1_snapshot7'

root@u1604b1-m1:~# time btrfs send -vp /tank1_snapshot6 /tank1_snapshot7 | ssh root@192.168.2.29 "btrfs receive -v /" 
At subvol /tank1_snapshot7
root@192.168.2.29's password: 
receiving snapshot tank1_snapshot7 uuid=5c255311-0f60-4149-91f7-99d9d5acf64c, ctransid=276 parent_uuid=d38490b3-e6ee-3f41-b63d-460d11f8e757, parent_ctransid=272

BTRFS_IOC_SEND returned 0
joining genl thread
BTRFS_IOC_SET_RECEIVED_SUBVOL uuid=5c255311-0f60-4149-91f7-99d9d5acf64c, stransid=276
At snapshot tank1_snapshot7

real	1m17.393s
user	0m0.640s
sys	0m16.716s

Machine 2 after snapshot7:


root@u1604b1-m2:~# ls -lh /tank1_snapshot7
total 2.0G
-rw-r--r-- 1 root root    0 Mar 10 10:38 rr_test1
-rw-r--r-- 1 root root   22 Mar 10 12:19 rr_test2
-rwxr-x--- 1 root root 1.1G Mar 10 13:04 ubuntu-gnome-15.10-desktop-amd64.iso
-rwxr-x--- 1 root root 1.1G Mar 10 13:07 ubuntu-gnome-15.10-desktop-i386.iso

Experiment: on the target I sent a snapshot into a new btrfs subvolume, so it in effect becomes independent. This does not really help us with cloning, since with large datasets it takes too long and duplicates the space, which nullifies why we like COW.

root@u1604b1-m2:~# btrfs subvolume create /tank1_clone
Create subvolume '//tank1_clone'

root@u1604b1-m2:~# btrfs send /tank1_snapshot3 | btrfs receive  /tank1_clone
At subvol /tank1_snapshot3
At subvol tank1_snapshot3

This was just my initial look at what btrfs is capable of and how similar it is to ZFS and the ZFS appliance functionality.

So far at least it seems promising that send and receive are being addressed in btrfs, but I don't think you can easily roll your own solution for A) replication and B) writable snapshots (clones) with btrfs yet. There would be too much work in building the replication discipline and framework.
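
For what it is worth, the plumbing of a roll-your-own btrfs version would look roughly like the sketch below. This is only an illustration with example paths and host: it assumes the previous read-only snapshot still exists on both ends, and it leaves out exactly the scheduling, retention, and failure handling that make real replication hard.

#!/bin/sh
# Minimal btrfs replication sketch -- example values, no error handling
TARGET=root@192.168.2.29
SRC=/tank1

# Newest existing read-only snapshot (empty on the first run)
PREV=$(ls -d ${SRC}_snapshot* 2>/dev/null | sort | tail -1)
NEW=${SRC}_snapshot$(date +%Y%m%d%H%M%S)

# Snapshots must be read-only (-r) to be sent
btrfs subvolume snapshot -r ${SRC} ${NEW}
if [ -n "$PREV" ]; then
  btrfs send -p ${PREV} ${NEW} | ssh ${TARGET} "btrfs receive /"
else
  btrfs send ${NEW} | ssh ${TARGET} "btrfs receive /"
fi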

A few links I came across that are useful when looking at btrfs and the topics of replication and database cloning:

1. http://rockstor.com/blog/snapshots/data-replication-with-rockstor/
2. http://blog.contractoracle.com/2013/02/oracle-database-on-btrfs-reduce-costs.html
3. http://www.cybertec.at/2015/01/forking-databases-the-art-of-copying-without-copying/
4. https://bdrouvot.wordpress.com/2014/04/25/reduce-resource-consumption-and-clone-in-seconds-your-oracle-virtual-environment-on-your-laptop-using-linux-containers-and-btrfs/
5. https://docs.opensvc.com/storage.btrfs.html
6. https://ilmarkerm.eu/blog/2014/08/cloning-pluggable-database-with-custom-snapshot/
7. http://blog.ronnyegner-consulting.de/2010/02/17/creating-database-clones-with-zfs-really-fast/
8. http://www.seedsofgenius.net/solaris/zfs-vs-btrfs-a-reference

ZFS Storage Appliance RESTful API

Until now I have used ssh and JavaScript to do some of the more advanced automation tasks like snapshots, cloning, and replication. I am starting to look at porting to REST, and here is a quick example of two functions.

** I needed the fabric and python-requests Linux packages installed for Python.

#!/usr/bin/env fab
 
from fabric.api import task,hosts,settings,env
from fabric.utils import abort
import requests, json, os
from datetime import date

#requests.packages.urllib3.disable_warnings()  
today = date.today()
 
# ZFSSA API URL
url = "https://192.168.2.200:215"
 
# ZFSSA authentication credentials, it reads username and password from environment variables ZFSUSER and ZFSPASSWORD
zfsauth = (os.getenv('ZFSUSER'), os.getenv('ZFSPASSWORD'))
 
jsonheader={'Content-Type': 'application/json'}

# This gets the pool list
def list_pools():
  r = requests.get("%s/api/storage/v1/pools" % (url), auth=zfsauth, verify=False, headers=jsonheader)
  if r.status_code != 200:
    abort("Error getting pools %s %s" % (r.status_code, r.text))
  j = json.loads(r.text) 
  #print j

  for pool in j["pools"]:
    #print pool
    #{u'status': u'online', u'profile': u'stripe', u'name': u'tank1', u'owner': u'zfsapp1', u'usage': {}, u'href': u'/api/storage/v1/pools/tank1', u'peer': u'00000000-0000-0000-0000-000000000000', u'asn': u'91bdcaef-fea5-e796-8793-f2eefa46200a'}
    print "pool: %s and status: %s" % (pool["name"], pool["status"])


# Create project
def create_project(pool, projname):
  # First check if the target project name already exists
  r = requests.get("%s/api/storage/v1/pools/%s/projects/%s" % (url, pool, projname), auth=zfsauth, verify=False, headers=jsonheader)
  if r.status_code != 404:
    abort("ZFS project %s already exists (or other error): %s" % (projname, r.status_code))

  payload = { 'name': projname, 'sharenfs': 'ro' }
  r = requests.post("%s/api/storage/v1/pools/%s/projects" % (url, pool), auth=zfsauth, verify=False, data=json.dumps(payload), headers=jsonheader)
  if r.status_code == 201:
    print "project created"
  else:
    abort("Error creating project %s %s" % (r.status_code, r.text))

print "\n\nTest list pools and create a project\n"
list_pools()
create_project('tank1','proj-01')
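
While figuring out payloads it can be handy to hit the same endpoints straight from the shell. This curl call is the equivalent of create_project above (same URL, payload, and environment-variable credentials; shown only as an illustration):

# curl -sk -u $ZFSUSER:$ZFSPASSWORD -H "Content-Type: application/json" -X POST -d '{"name": "proj-01", "sharenfs": "ro"}' https://192.168.2.200:215/api/storage/v1/pools/tank1/projects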

References:
http://www.oracle.com/technetwork/articles/servers-storage-admin/zfs-appliance-scripting-1508184.html
http://docs.oracle.com/cd/E51475_01/html/E52433/
http://ilmarkerm.blogspot.com/2014/12/sample-code-using-oracle-zfs-storage.html

Expanding a Solaris RPOOL

For reference, I have a couple of older articles on this topic here:

Growing a Solaris LDOM rpool

ZFS Grow rpool disk

Specifically, this article describes what I did recently on a SPARC LDOM to expand the RPOOL. The RPOOL OS disk is a SAN-shared LUN in this case.

After growing the LUN to 50G on the back-end I did the following. You may have to try more than once; for me it did not work at first, and I don't know the exact sequence, but a combination of reboot, zpool status, label, and verify made it work. And yes, I did say zpool status: I have had issues with upgrades in the past where beadm did not activate a new environment and zpool status resolved it.

Also, you will notice my boot partition already had an EFI label. I don't recall where, but somewhere along the line in Solaris 11.1 EFI labels became possible. If you have an SMI label you may have to try a different approach. And as always, tinkering with partitions and disk labels is dangerous, so you are warned.

# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
app    49.8G  12.9G  36.9G  25%  1.00x  ONLINE  -
rpool  29.8G  27.6G  2.13G  92%  1.00x  ONLINE  -

# format -e
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1d0 <SUN-ZFS Storage 7330-1.0-50.00GB>
          /virtual-devices@100/channel-devices@200/disk@0
       1. c1d1 <Unknown-Unknown-0001-50.00GB>
          /virtual-devices@100/channel-devices@200/disk@1
Specify disk (enter its number): 0
selecting c1d0
[disk formatted]
/dev/dsk/c1d0s0 is part of active ZFS pool rpool. Please see zpool(1M).

[..]

format> verify

Volume name = <        >
ascii name  = <SUN-ZFS Storage 7330-1.0-50.00GB>
bytes/sector    =  512
sectors = 104857599
accessible sectors = 104857566
Part      Tag    Flag     First Sector         Size         Last Sector
  0        usr    wm               256       49.99GB          104841182
  1 unassigned    wm                 0           0               0
  2 unassigned    wm                 0           0               0
  3 unassigned    wm                 0           0               0
  4 unassigned    wm                 0           0               0
  5 unassigned    wm                 0           0               0
  6 unassigned    wm                 0           0               0
  7 unassigned    wm                 0           0               0
  8   reserved    wm         104841183        8.00MB          104857566

[..]

format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]:
Ready to label disk, continue? y

format> q

# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
app    49.8G  12.9G  36.9G  25%  1.00x  ONLINE  -
rpool  29.8G  27.6G  2.13G  92%  1.00x  ONLINE  -

# zpool set autoexpand=on rpool

# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
app    49.8G  12.9G  36.9G  25%  1.00x  ONLINE  -
rpool  49.8G  27.6G  22.1G  55%  1.00x  ONLINE  -
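
A quick sanity check afterwards, nothing specific to LDOMs, is to confirm the property value:

# zpool get autoexpand rpool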

Ubuntu On a ZFS Root File System for Ubuntu 15.04

Start Update 03.18.15:
This is untested, but I suspect that if you are upgrading the kernel to 3.19.0 and you have issues, you may need to change to the daily Vivid PPA. In my initial post I used stable and Utopic since Vivid was very new.
End Update 03.18.15:

This is what I did to make an Ubuntu 15.04 VirtualBox guest (works for Ubuntu 14.10 also) boot with ZFS as the root file system.

Previous articles:

Ubuntu On a ZFS Root File System for Ubuntu 14.04

Booting Ubuntu on a ZFS Root File System

SYSTEM REQUIREMENTS
64-bit Ubuntu Live CD (not the alternate or 32-bit installer)
AMD64 or EM64T compatible computer (i.e. x86-64)
15GB disk, 2GB memory minimum, VirtualBox

Create a new VM. I use bridged networking in case I want to use ssh during the setup.
Start the Ubuntu LiveCD and open a terminal at the desktop. I used the 15.04 64-bit alpha CD.
Control-F1 to the first text terminal.

1. Set up the repo.

$ sudo -i
# /etc/init.d/lightdm stop
# apt-add-repository --yes ppa:zfs-native/stable

** Change /etc/apt/sources.list.d/… down to trusty, or possibly utopic. I did not test.

2. Install zfs.

# apt-get update
# apt-get install debootstrap ubuntu-zfs
# dmesg | grep ZFS:
[ 3900.114234] ZFS: Loaded module v0.6.3-4~trusty, ZFS pool version 5000, ZFS filesystem version 5

** Takes a long time to compile the initial module for 3.16.0-28-generic.

3. Install ssh.

Using an ssh terminal makes it easier to copy and paste, both for command execution and for documentation. However, with this bare-bones environment at this point, openssh might not install very cleanly. I played with it a little to get at least sshd to run.

# apt-get install ssh
# /etc/init.d/ssh start
# /usr/sbin/sshd

** check with ps if ssh process is running
** edit sshd_config and allow root login
** set root passwd

4. Set up disk partitions.

# fdisk -l
Disk /dev/loop0: 1 GiB, 1103351808 bytes, 2154984 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sda: 15 GiB, 16106127360 bytes, 31457280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xc2c7def9

Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 411647 409600 200M be Solaris boot
/dev/sda2 411648 31457279 31045632 14.8G bf Solaris

5. Format partitions.

# mke2fs -m 0 -L /boot/grub -j /dev/disk/by-id/ata-VBOX_HARDDISK_VBf3c0d5ba-e6881c52-part1
# zpool create -o ashift=9 rpool /dev/disk/by-id/ata-VBOX_HARDDISK_VBf3c0d5ba-e6881c52-part2

6. ZFS Setup and Mountpoints.

# zfs create rpool/ROOT
# zfs create rpool/ROOT/ubuntu-1
# zfs umount -a
# zfs set mountpoint=/ rpool/ROOT/ubuntu-1
# zpool set bootfs=rpool/ROOT/ubuntu-1 rpool
# zpool export rpool
# zpool import -d /dev/disk/by-id -R /mnt rpool
# mkdir -p /mnt/boot/grub
# mount /dev/disk/by-id/ata-VBOX_HARDDISK_VBf3c0d5ba-e6881c52-part1 /mnt/boot/grub
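
Before installing anything, it does not hurt to confirm the pool looks the way you expect. These two checks were not part of my original run, just a sanity check:

# zpool get bootfs rpool
# zfs get mountpoint rpool/ROOT/ubuntu-1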

7. Install Ubuntu 15.04 on /mnt

# debootstrap vivid /mnt
I: Retrieving Release
...
I: Base system installed successfully.

# cp /etc/hostname /mnt/etc/
# cp /etc/hosts /mnt/etc/
# vi /mnt/etc/fstab
# cat /mnt/etc/fstab
/dev/disk/by-id/ata-VBOX_HARDDISK_VBf3c0d5ba-e6881c52-part1 /boot/grub auto defaults 0 1

# cat /mnt/etc/network/interfaces
# interfaces(5) file used by ifup(8) and ifdown(8)
# Include files from /etc/network/interfaces.d:
source-directory /etc/network/interfaces.d
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

8. Set up chroot to update and install ubuntu-minimal

# mount --bind /dev /mnt/dev
# mount --bind /proc /mnt/proc
# mount --bind /sys /mnt/sys
# chroot /mnt /bin/bash --login
# locale-gen en_US.UTF-8

# apt-get update
# apt-get install ubuntu-minimal software-properties-common

9. Set up the ZOL repo

# apt-add-repository --yes ppa:zfs-native/stable

** Leave the grub repo off for now.

# cat /etc/apt/sources.list.d/zfs-native-ubuntu-stable-vivid.list
deb http://ppa.launchpad.net/zfs-native/stable/ubuntu trusty main
# deb-src http://ppa.launchpad.net/zfs-native/stable/ubuntu vivid main
# apt-get update
# apt-get install --no-install-recommends linux-image-generic linux-headers-generic

# apt-get install ubuntu-zfs

** skipped grub stuff for this pass

# apt-get install zfs-initramfs
# apt-get dist-upgrade

10. Make sure root has access

# passwd root

11. Test grub

# grub-probe /
bash: grub-probe: command not found

12. Use older patched grub from ZOL project

# apt-add-repository --yes ppa:zfs-native/grub
gpg: keyring `/tmp/tmp5urr4u7g/secring.gpg' created
gpg: keyring `/tmp/tmp5urr4u7g/pubring.gpg' created
gpg: requesting key F6B0FC61 from hkp server keyserver.ubuntu.com
gpg: /tmp/tmp5urr4u7g/trustdb.gpg: trustdb created
gpg: key F6B0FC61: public key "Launchpad PPA for Native ZFS for Linux" imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
OK

# cat /etc/apt/sources.list.d/zfs-native-ubuntu-grub-vivid.list
deb http://ppa.launchpad.net/zfs-native/grub/ubuntu raring main

# apt-get install grub2-common grub-pc
Installation finished. No error reported.
/usr/sbin/grub-probe: error: failed to get canonical path of `/dev/ata-VBOX_HARDDISK_VBf3c0d5ba-e6881c52-part2'.

** As you can see, grub has issues with the dev path.

# ln -s /dev/disk/by-id/ata-VBOX_HARDDISK_VBf3c0d5ba-e6881c52-part2 /dev/ata-VBOX_HARDDISK_VBf3c0d5ba-e6881c52-part2
# apt-get install grub2-common grub-pc

# grub-probe /
zfs
# ls /boot/grub/i386-pc/zfs*
/boot/grub/i386-pc/zfscrypt.mod /boot/grub/i386-pc/zfsinfo.mod /boot/grub/i386-pc/zfs.mod

** Note at the end I show a udev rule that can help work around this path issue.

# update-initramfs -c -k all

# grep "boot=zfs" /boot/grub/grub.cfg
linux /ROOT/ubuntu-1@/boot/vmlinuz-3.16.0-28-generic root=ZFS=rpool/ROOT/ubuntu-1 ro boot=zfs quiet splash $vt_handoff

# grep "boot=zfs" /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash boot=zfs"

# update-grub
Generating grub configuration file ...
Warning: Setting GRUB_TIMEOUT to a non-zero value when GRUB_HIDDEN_TIMEOUT is set is no longer supported.
Found linux image: /boot/vmlinuz-3.16.0-28-generic
Found initrd image: /boot/initrd.img-3.16.0-28-generic
done

# grub-install $(readlink -f /dev/disk/by-id/ata-VBOX_HARDDISK_VBf3c0d5ba-e6881c52)
Installing for i386-pc platform.
Installation finished. No error reported.

# exit
logout

13. Unmount chroot and shut down

# umount /mnt/boot/grub
# umount /mnt/dev
# umount /mnt/proc
# umount /mnt/sys
# zfs umount -a
# zpool export rpool
# init 0

14. Cleanup and finish
** Create a snapshot
** Boot up
** Install ssh, configure root login in sshd_config, and restart ssh

15. udev rule for grub bug

# cat /etc/udev/rules.d/70-zfs-grub-fix.rules
ENV{DEVTYPE}=="partition", IMPORT{parent}="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL} $env{ID_BUS}-$env{ID_SERIAL}-part%n"

# /etc/init.d/udev restart
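
After the udev restart you can check whether the expected symlinks showed up (the device name will of course match your own disk):

# ls -l /dev/ata-VBOX_HARDDISK_VBf3c0d5ba-e6881c52*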

16. Install desktop software

# apt-get install ubuntu-desktop

ZFSSA List Snapshots Script

Quick script to illustrate interacting with the ZFS Storage Appliance. In this example I am listing ZFSSA snapshots containing a search string. Note I edited this for the article without re-testing that it still works.

#!/bin/sh

Usage() {
 echo "$1 -u <Appliance user> -h <appliance> -j <project> -p <pool> -s <containsString>"
 exit 1
}

PROG=$0
while getopts u:h:s:j:p: flag
do
  case "$flag" in
  p) pool="$OPTARG";;
  j) project="$OPTARG";;
  s) string="$OPTARG";;
  u) user="$OPTARG";;
  h) appliance="$OPTARG";;
  \?) Usage $PROG ;;
  esac
done

[ -z "$pool" -o -z "$project" -o -z "$appliance" -o -z "$user" ] && Usage $PROG

ssh -T $user@$appliance << EOF
script
var MyArguments = {
  pool: '$pool',
  project: '$project',
  string: '$string'
}

function ListSnapshotsbyS (Arg) {
  run('cd /');                          // Make sure we are at root child context level
  run('shares');
  try {
      run('set pool=' + Arg.pool);
  } catch (err) {
      printf ('ERROR: %s\n',err);
      return (err);
  }

  var allSnaps=[];
  try {
      run('select ' + Arg.project + ' snapshots');
      snapshots=list();
      for(i=0; i < snapshots.length; i++) {
          allSnaps.push(snapshots[i]);
      }
      run('done');
  } catch (err) {
      printf ('ERROR: %s\n',err);
      return(err);
  }

  for(i=0; i < allSnaps.length; i++) {
   if (Arg.string !="") {
    var idx=allSnaps[i].indexOf(Arg.string);
    if (idx >= 0) {
      printf('#%i: %s contained search string %s \n',i,allSnaps[i], Arg.string);
    }
   } else {
      printf('#%i: %s \n',i,allSnaps[i]);
   }
  }
  return(0);
}
ListSnapshotsbyS(MyArguments);
.
EOF
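
Example invocation (the script name is just what I happened to save it as; -s is optional and, when omitted, all snapshots for the project are listed):

# ./zfssa_list_snapshots.sh -u root -h zfs1 -j EBSPRD -p POOL1 -s daily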

Ubuntu On a ZFS Root File System for Ubuntu 14.04

This is an update to my post on making an Ubuntu 14.04 (Trusty Tahr) OS work with a ZFS root volume. Mostly the instructions remain the same as in a previous post, so this is a shortened version:

Booting Ubuntu on a ZFS Root File System

Small warning: I did this four times. It worked the first time, but of course I did not document it well the first time, and when I tried again I had grub issues.

Step 1:

$ sudo -i
# apt-add-repository --yes ppa:zfs-native/stable

** Apparently the grub ppa is not needed, as per the GitHub instructions???

# apt-get update
# apt-get install debootstrap ubuntu-zfs

** It will take quite a while for the kernel modules to compile!!

# modprobe zfs
# dmesg | grep ZFS:
[ 1327.346821] ZFS: Loaded module v0.6.2-2~trusty, ZFS pool version 5000, ZFS filesystem version 5

Step 2:

# ls /dev/disk/by-id
ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419
ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part1
ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part2

# fdisk /dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419

** Make partitions as follows

# fdisk -l /dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419
                                                 Device Boot      Start         End      Blocks   Id  System
/dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part1   *        2048      206847      102400   be  Solaris boot
/dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part2          206848    16777215     8285184   bf  Solaris

Step 3:

# mke2fs -m 0 -L /boot/grub -j /dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part1
# zpool create -o ashift=9 rpool /dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part2

# zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
rpool  7.88G   117K  7.87G     0%  1.00x  ONLINE  -

# zfs create rpool/ROOT
# zfs create rpool/ROOT/ubuntu-1
# zfs umount -a
# zfs set mountpoint=/ rpool/ROOT/ubuntu-1
# zpool export rpool

Step 4:

# zpool import -d /dev/disk/by-id -R /mnt rpool
# mkdir -p /mnt/boot/grub
# mount /dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part1 /mnt/boot/grub
# debootstrap trusty /mnt

WTF: ** The system seems hung. I see on a different terminal there was a message that a system restart is required. Weird. If you get this after debootstrap, you have to redo Step 1 and Step 4.1, then… Is this because of only 2GB RAM?

Step 5:

# cp /etc/hostname /mnt/etc/
# cp /etc/hosts /mnt/etc/
# tail -1 /mnt/etc/fstab
/dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part1  /boot/grub  auto  defaults  0  1

# mount --bind /dev  /mnt/dev
# mount --bind /proc /mnt/proc
# mount --bind /sys  /mnt/sys
# chroot /mnt /bin/bash --login

# locale-gen en_US.UTF-8
# apt-get update
# apt-get install ubuntu-minimal software-properties-common

# apt-add-repository --yes ppa:zfs-native/stable
# apt-add-repository --yes ppa:zfs-native/grub   <- see the note below on this command
# apt-get update
# apt-get install --no-install-recommends linux-image-generic linux-headers-generic
# apt-get install ubuntu-zfs
# apt-get install grub2-common grub-pc

Quick note on grub issues. During the install I had to create soft links since I could not figure out the grub-probe failures. From memory, I think I created the soft links as follows, then purged grub2-common and grub-pc and reinstalled (a sketch of the actual commands follows the list):

/dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419 >>>> /dev/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419
/dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part1 >>>> /dev/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part1
/dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part2 >>>> /dev/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part2
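
In command form that would have been roughly the following (reconstructed from memory, as noted above):

# ln -s /dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419 /dev/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419
# ln -s /dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part1 /dev/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part1
# ln -s /dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part2 /dev/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419-part2
# apt-get purge grub2-common grub-pc
# apt-get install grub2-common grub-pc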

Update 5.6.14: After I had time to look at it closer, I see my grub issues all came from the fact that there is no trusty grub ppa, and the apt-add-repository command above sets up a trusty repo. The quickest way to fix this is, after the apt-add-repository --yes ppa:zfs-native/grub command, to fix the file manually to use raring. As follows:

# more /etc/apt/sources.list.d/zfs-native-grub-trusty.list
deb http://ppa.launchpad.net/zfs-native/grub/ubuntu raring main
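
If you prefer a one-liner over editing the file by hand, the same fix can be done with sed (same file as above):

# sed -i 's/trusty/raring/' /etc/apt/sources.list.d/zfs-native-grub-trusty.list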

Now ready to continue on.

# apt-get install zfs-initramfs
# apt-get dist-upgrade
# passwd root

Step 6:

# grub-probe /
zfs
# ls /boot/grub/i386-pc/zfs*
/boot/grub/i386-pc/zfs.mod  /boot/grub/i386-pc/zfsinfo.mod

# update-initramfs -c -k all
update-initramfs: Generating /boot/initrd.img-3.13.0-24-generic

# grep "boot=zfs" /boot/grub/grub.cfg
	linux	/ROOT/ubuntu-1@/boot/vmlinuz-3.13.0-24-generic root=ZFS=rpool/ROOT/ubuntu-1 ro  boot=zfs
		linux	/ROOT/ubuntu-1@/boot/vmlinuz-3.13.0-24-generic root=ZFS=rpool/ROOT/ubuntu-1 ro  boot=zfs

# grep zfs /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="boot=zfs"

# update-grub

# grub-install $(readlink -f /dev/disk/by-id/ata-VBOX_HARDDISK_VBb4fe25f7-8f14d419)
Installation finished. No error reported.

# exit

Step 7:

# umount /mnt/boot/grub
# umount /mnt/dev
# umount /mnt/proc
# umount /mnt/sys
# zfs umount -a
# zpool export rpool
# reboot

Post First Reboot:
– Made a VB snapshot of course
– apt-get install ubuntu-desktop
** Grub issues again, so I remade the link again. Later fixed by pointing the grub ppa repo to raring instead.
– create a user
– install VB Guest Additions

TODO:
– Check into grub issue and having to create soft links. Something to do with grub not following soft links.

SUN Oracle ZFS Storage Simulator

Previously I wrote an article on getting the ZFS simulator to run on OVM.

Until recently I did not realize that I could upgrade the ZFS simulator on VirtualBox. I kind of assumed the appliance was checking in the background and showing possible upgrades on the Available Updates page. None of my simulators or real ZFS appliances were showing new available updates either. So here is what I did to update the simulator. I assume it will work with the OVM-ported version also.

If you go to this page https://wikis.oracle.com/display/fishworks/Software+Updates you can see what updates are available for your hardware or simulator. Then updating is easy. Just download the zip file you need, read the Release Notes, and uncompress the file where you are staging. In the Maintenance > System screen, click the plus sign next to Available Updates, find the .gz file in the folder structure, and upload the image. Follow the questions.

My simulator running under VirtualBox now shows the version below. Note that the simulator was too far behind to skip to the latest version, so I had to do an extra 2011.04.24 version also.