Feb 21

Ubuntu server 20.04 zfs root and OCI

My experiment to:

  • create an Ubuntu 20.04 server (not the final release as of Feb 20) in VirtualBox
  • set up the server with a ZFS root disk
  • enable the serial console
  • export from VirtualBox to OCI

As you probably know, newer desktop versions of Ubuntu offer ZFS for the root volume during installation. I am not sure whether that is true for Ubuntu 20.04 server installs, and when I looked at the 19.10 ZFS installation I did not necessarily want to use the ZFS layout it chose. This experiment is my custom case, also tested on Ubuntu 16.04 and 18.04.

Note this is an experiment; the ZFS layout, boot partition type, LUKS, EFI, multiple boot disks (mirrored) and netplan are all debatable configurations. Mine may not be ideal but it works for my use case.

Also my goal here was to export a bootable/usable OCI (Oracle Cloud Infrastructure) compute instance.

Start by booting a recent desktop live CD. Since I am testing 20.04 (focal) I used that. In the live CD environment open a terminal, become root with sudo, and apt install ssh. Start the ssh service and set the ubuntu user's password.
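In the live session that preparation looks roughly like this (package and service names are from my test; adjust as needed):

```shell
# in the live CD terminal: become root, install/start sshd, set a password
sudo -i
apt install --yes ssh
systemctl start ssh
passwd ubuntu    # password for the ssh login below
```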

$ ssh ubuntu@192.168.1.142
$ sudo -i

##
apt-add-repository universe
apt update
apt install --yes debootstrap gdisk zfs-initramfs

## find correct device name for below
DISK=/dev/disk/by-id/ata-VBOX_HARDDISK_VB26c080f2-2bd16227
USER=ubuntu
HOST=server
POOL=ubuntu

##
sgdisk --zap-all $DISK
sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK
sgdisk     -n2:1M:+512M   -t2:EF00 $DISK
sgdisk     -n3:0:+1G      -t3:BF01 $DISK
sgdisk     -n4:0:0        -t4:BF01 $DISK
sgdisk --print $DISK

##
zpool create -o ashift=12 -d \
    -o feature@async_destroy=enabled \
    -o feature@bookmarks=enabled \
    -o feature@embedded_data=enabled \
    -o feature@empty_bpobj=enabled \
    -o feature@enabled_txg=enabled \
    -o feature@extensible_dataset=enabled \
    -o feature@filesystem_limits=enabled \
    -o feature@hole_birth=enabled \
    -o feature@large_blocks=enabled \
    -o feature@lz4_compress=enabled \
    -o feature@spacemap_histogram=enabled \
    -o feature@userobj_accounting=enabled \
    -O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \
    -O normalization=formD -O relatime=on -O xattr=sa \
    -O mountpoint=/ -R /mnt bpool ${DISK}-part3

zpool create -o ashift=12 \
    -O acltype=posixacl -O canmount=off -O compression=lz4 \
    -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
    -O mountpoint=/ -R /mnt rpool ${DISK}-part4

zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=off -o mountpoint=none bpool/BOOT

zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu

zfs create -o canmount=noauto -o mountpoint=/boot bpool/BOOT/ubuntu
zfs mount bpool/BOOT/ubuntu

## Note: I skipped creating datasets for /home, /root, /var/lib, /var/log etc.

##
debootstrap focal /mnt
zfs set devices=off rpool

## 
cat > /mnt/etc/netplan/01-netcfg.yaml << EOF
network:
  version: 2
  ethernets:
    enp0s3:
      dhcp4: true
EOF

##
cat > /mnt/etc/apt/sources.list << EOF
deb http://archive.ubuntu.com/ubuntu focal main universe
EOF

##
mount --rbind /dev  /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys  /mnt/sys
chroot /mnt /usr/bin/env DISK=$DISK bash --login

##
locale-gen --purge en_US.UTF-8
update-locale LANG=en_US.UTF-8 LANGUAGE=en_US
dpkg-reconfigure --frontend noninteractive locales
echo US/Central > /etc/timezone    
dpkg-reconfigure -f noninteractive tzdata

##
passwd

##
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes grub-pc
grub-probe /boot

## update-initramfs -u -k all  <- this does not work; try below
KERNEL=$(ls /usr/lib/modules/ | cut -d/ -f1 | sed 's/linux-image-//')
update-initramfs -u -k $KERNEL

# edit /etc/default/grub
GRUB_DEFAULT=0
#GRUB_TIMEOUT_STYLE=hidden
GRUB_TIMEOUT=5
GRUB_CMDLINE_LINUX_DEFAULT=""
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/ubuntu console=tty1 console=ttyS0,115200"
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"

##
update-grub
grub-install $DISK

##
cat > /etc/systemd/system/zfs-import-bpool.service << EOF
[Unit]
DefaultDependencies=no
Before=zfs-import-scan.service
Before=zfs-import-cache.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/zpool import -N -o cachefile=none bpool

[Install]
WantedBy=zfs-import.target
EOF

systemctl enable zfs-import-bpool.service

##
zfs set mountpoint=legacy bpool/BOOT/ubuntu
echo bpool/BOOT/ubuntu /boot zfs \
    nodev,relatime,x-systemd.requires=zfs-import-bpool.service 0 0 >> /etc/fstab
zfs snapshot bpool/BOOT/ubuntu@install
zfs snapshot rpool/ROOT/ubuntu@install

##
systemctl enable serial-getty@ttyS0
apt install ssh
systemctl enable ssh

##
exit
##
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export -a

** reboot

** detach live cd

NOTE: grub had a prompt on reboot but no boot options; the following fixed it:

KERNEL=$(ls /usr/lib/modules/ | cut -d/ -f1 | sed 's/linux-image-//')
update-initramfs -u -k $KERNEL
update-grub

REF: https://github.com/zfsonlinux/zfs/wiki/Ubuntu-18.04-Root-on-ZFS


Jan 03

Regex Search Lines Add Quotes

Regex to search a whole file and, if a line contains the search word, add beginning and ending quotes. In this case I am doing it in vi(m).

This works to add beginning and ending quotes to ALL lines in the file

**:%s/^\(.*\)$/"\1"/**

or simpler

**:%s/.*/"&"**

Explanation:

By default a pattern match is greedy, so .* matches the whole line with no need for ^ and $. And while \(…\) can be useful for selecting part of a pattern, it is not needed for the whole pattern, which is represented by & in the substitution. The final / in a search or substitution is not needed unless something else follows it.

However, we do NOT want all lines in the file, only the lines containing the search word.

This works. I am not sure how reliable it is without the anchors ^ and $, but it seems to work in my limited testing.

**:%s/.*mysearchword.*/"&"**
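The same substitution works outside of vim too; for example with sed, where & is again the whole matched line:

```shell
# quote only the lines that contain the search word
printf 'alpha\nhas mysearchword here\nbeta\n' | sed 's/.*mysearchword.*/"&"/'
# alpha
# "has mysearchword here"
# beta
```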


Dec 14

Formulas for bytes and duration

Capturing a couple of formulas for future reference. I was using restic backup stats and wanted to convert the time a job ran into hours and minutes, and also convert bytes processed to TB.

bytes to TB example:
total_bytes_processed: 1502888851889 / 1000000000000 == 1.502888851889 TB

duration to HH:MM example using LibreOffice Calc:
total_duration: 15870.197288027 / 86400, then Format Cells > Time > HH:MM:SS will show 04:24:30
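The same conversions can be scripted in bash/awk; a quick sketch (the helper names are mine, not from restic):

```shell
# bytes -> decimal TB, and seconds -> HH:MM:SS (helper names are illustrative)
bytes_to_tb() { awk -v b="$1" 'BEGIN { printf "%.4f", b / 1000000000000 }'; }
secs_to_hms() {
  local s=${1%.*}   # drop fractional seconds
  printf '%02d:%02d:%02d' $((s / 3600)) $((s % 3600 / 60)) $((s % 60))
}
bytes_to_tb 1502888851889; echo     # 1.5029
secs_to_hms 15870.197288027; echo   # 04:24:30
```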


Dec 14

Bash Read Json Config File

Couple of things here:

  • I wanted to do some restic scripts
  • At the same time use a configuration file. The restic developers are working on config-file functionality for restic, possibly using TOML.

Meanwhile I was trying JSON, since I can definitely use bash/JSON for other applications. And as you know, bash is not great at this kind of thing, specifically arrays etc. So this example reads a configuration file and processes the JSON. To further complicate things, my JSON typically needs arrays or lists of values, as in the restic example where you can see folders, excludes and tags.

You will also note a unique problem with bash. When piping into a while loop, the loop runs in a subshell, so variables set inside it are not visible in the main shell; my appending to a variable inside the while loop produced no strings. In bash 4.2+ you can use shopt -s lastpipe to get around this. Apparently this is not a problem with ksh.
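A minimal demonstration of the subshell problem and the lastpipe fix (bash 4.2+; lastpipe only takes effect when job control is off, i.e. in scripts):

```shell
#!/bin/bash
out=""
printf 'a\nb\n' | while read -r x; do out+="$x"; done
echo "without lastpipe: [$out]"   # [] - the while loop ran in a subshell

shopt -s lastpipe                 # run the last element of a pipeline in this shell
out=""
printf 'a\nb\n' | while read -r x; do out+="$x"; done
echo "with lastpipe: [$out]"      # [ab]
```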

This is not a working restic script; it is a script to read a configuration file. It just happens to be for something I am going to do with restic.

Example json config file.

$ cat restic-jobs.json 
{ "Jobs":
  [
   {
    "jobname": "aws-s3",
    "repo": "sftp:myuser@192.168.1.112:/TANK/RESTIC-REPO",
    "sets":
      [
       {
        "folders": [ "/DATA" ],
        "excludes": [ ".snapshots","temp" ],
        "tags": [ "data","biz" ]
       },
       {
        "folders": [ "/ARCHIVE" ],
        "excludes": [ ".snapshots","temp" ],
        "tags": [ "archive","biz" ]
       }
      ],
      "quiet": true
    },
    {
     "jobname": "azure-onedrive",
     "repo": "rclone:azure-onedrive:restic-backups",
     "sets":
       [
       {
        "folders": [ "/DATA" ],
        "excludes": [ ".snapshots","temp" ],
        "tags": [ "data","biz" ]
       },
       {
        "folders": [ "/ARCHIVE" ],
        "excludes": [ ".snapshots","temp" ],
        "tags": [ "archive","biz" ]
       }
      ],
     "quiet": true
    }
  ]
}

Script details.

$ cat restic-jobs.sh 
#!/bin/bash
#v0.9.1

JOB=aws-s3
eval $(jq --arg JOB ${JOB} -r '.Jobs[] | select(.jobname==$JOB) | del(.sets) | to_entries[] | .key + "=\"" + (.value|tostring) + "\""' restic-jobs.json)
if [[ $jobname == "" ]]; then
  echo "no job found in config: $JOB"
  exit
fi

echo "found: $jobname"

#sets=$(jq --arg JOB ${JOB} -r '.Jobs[] | select (.jobname==$JOB) | .sets | .[]' restic-jobs.json )

echo

sets=$(jq --arg JOB ${JOB} -r '.Jobs[] | select (.jobname==$JOB)' restic-jobs.json)

backup_jobs=()
## need this for bash issue with variables and pipe subshell
shopt -s lastpipe

echo $sets | jq -rc '.sets[]' | while IFS='' read set;do
    cmd_line="restic backup -q --json"

    folders=$(echo $set | jq -r '.folders | .[]')
    for st in $folders; do cmd_line+=" $st"; done
    excludes=$(echo $set | jq -r '.excludes | .[]')
    for st in $excludes; do cmd_line+=" --exclude $st"; done
    tags=$(echo $set | jq -r '.tags | .[]')
    for st in $tags; do cmd_line+=" --tag $st"; done

    backup_jobs+=("$cmd_line")
done

for i in "${backup_jobs[@]}"; do
  echo "cmd_line: $i"
done

Script run example. Note I am not passing the job name; it is just hard-coded at the top for my test.

$ ./restic-jobs.sh 
found: aws-s3

cmd_line: restic backup -q --json /DATA --exclude .snapshots --exclude temp --tag data --tag biz
cmd_line: restic backup -q --json /ARCHIVE --exclude .snapshots --exclude temp --tag archive --tag biz


Nov 23

Bash array json restic snapshots and jq

As you know, bash is not ideal with multidimensional arrays. Frequently I find myself wanting to read something like JSON into bash and loop over it. There are many ways to do this, including readarray etc. I found this to work best for me. Note JSON can have lists, so I collapse those with jq's join. Example:

# cat restic-loop-snaps.sh
#!/bin/bash

function loopOverArray(){

   restic snapshots --json | jq -r '.' | jq -c '.[]'| while read i; do
    id=$(echo $i | jq -r '.| .short_id')
    ctime=$(echo $i | jq -r '.| .time')
    hostname=$(echo $i | jq -r '.| .hostname')
    paths=$(echo $i | jq -r '. | .paths | join(",")')
    tags=$(echo $i | jq -r '. | .tags | join(",")')
    printf "%-10s - %-40s - %-20s - %-30s - %-20s\n" "$id" "$ctime" "$hostname" "$paths" "$tags"
    done
}

loopOverArray

Run

# ./restic-loop-snaps.sh
0a71b5d4   - 2019-05-31T05:03:20.655922639-05:00      - pop-os               - /DATA/MyWorkDocs        -                     
...


Nov 23

restic set tags

In a follow-up to the previous post https://blog.ls-al.com/restic-create-backup-and-set-tag-with-date-logic, here is some code I used to set tags on old snapshots to comply with my new tagging and pruning.

# cat backup-tags-set.sh
#!/bin/bash

create_tag () {
  tag=daily
  if [ $(date -d $1 +%a) == Sun ]; then tag=weekly ; fi
  if [ $(date -d $1 +%d) == 01 ]; then 
   tag=monthly
   if [ $(date -d $1 +%b) == Jan ]; then
     tag=yearly
   fi
  fi
}
create_tag $(date +%F)
echo backup policy:  $tag

#source /root/.restic.env
snapshotids=$(restic snapshots -c | egrep -v "ID|snapshots|--" | awk '//{print $1;}')
for snapshotid in $snapshotids
do
  snapdate=$(restic snapshots $snapshotid -c | egrep -v "ID|snapshots|--" | awk '//{print $2;}')
  create_tag $snapdate
  echo Making a tag for: $snapshotid - $snapdate - $(date -d $snapdate +%a) - $tag
  restic tag --set $tag $snapshotid
done

Run

# ./backup-tags-set.sh 
backup policy:  daily
Making a tag for: 0b88eefa - 2019-03-27 - Wed - daily
repository 00cde088 opened successfully, password is correct
create exclusive lock for repository
modified tags on 1 snapshots
Making a tag for: d76811ac - 2019-03-27 - Wed - daily
repository 00cde088 opened successfully, password is correct
create exclusive lock for repository
modified tags on 1 snapshots


Nov 23

restic option to configure S3 region

If you find yourself relying on restic going through rclone just to talk to non-default S3 regions, you may want to check out the just-released restic version 0.9.6. To me it appears fixed when working with Oracle Cloud Infrastructure (OCI) object storage. Below is a test accessing the Phoenix endpoint with the new -o option.

# restic -r s3:<tenancy_name>.compat.objectstorage.us-phoenix-1.oraclecloud.com/restic-backups snapshots -o s3.region=us-phoenix-1
repository <....> opened successfully, password is correct
ID        Time                 Host                          Tags        Paths
----------------------------------------------------------------------------------------
f23784fd  2019-10-27 05:10:02  host01.domain.com  mytag     /etc


Nov 21

Restic create backup and set tag with date logic

Also see the previous post https://blog.ls-al.com/bash-date-usage-for-naming if you are interested. This post is similar but more specific to restic tagging.

Below is a test script and a test run. At the time of restic backup I create a tag in order to do snapshot forget based on tags.

# cat backup-tags.sh
#!/bin/bash

create_tag () {
  tag=daily
  if [ $(date +%a) == Sun ]; then tag=weekly ; fi
  if [ $(date +%d) == 01 ]; then 
   tag=monthly
   if [ $(date +%b) == Jan ]; then
     tag=yearly
   fi
  fi
}
create_tag
echo backup policy:  $tag

create_tag_unit_test () {
  for i in {1..95}
  do 
      tdate=$(date -d "+$i day")
      tag=daily
      if [ $(date -d "+$i day" +%a) == Sun ]; then tag=weekly ; fi
      if [ $(date -d "+$i day" +%d) == 01 ]; then
      tag=monthly
        if [ $(date -d "+$i day" +%b) == Jan ]; then
          tag=yearly
        fi
      fi
  printf "%s - %s - %-10s | " $(date -d "+$i day" +%d) $(date -d "+$i day" +%a) $tag
  if [ $(( $i % 5 )) -eq 0 ]; then printf "\n"; fi
  done
}
create_tag_unit_test

Run

# ./backup-tags.sh
backup policy:  daily
22 - Fri - daily      | 23 - Sat - daily      | 24 - Sun - weekly     | 25 - Mon - daily      | 26 - Tue - daily      | 
27 - Wed - daily      | 28 - Thu - daily      | 29 - Fri - daily      | 30 - Sat - daily      | 01 - Sun - monthly    | 
02 - Mon - daily      | 03 - Tue - daily      | 04 - Wed - daily      | 05 - Thu - daily      | 06 - Fri - daily      | 
07 - Sat - daily      | 08 - Sun - weekly     | 09 - Mon - daily      | 10 - Tue - daily      | 11 - Wed - daily      | 
12 - Thu - daily      | 13 - Fri - daily      | 14 - Sat - daily      | 15 - Sun - weekly     | 16 - Mon - daily      | 
17 - Tue - daily      | 18 - Wed - daily      | 19 - Thu - daily      | 20 - Fri - daily      | 21 - Sat - daily      | 
22 - Sun - weekly     | 23 - Mon - daily      | 24 - Tue - daily      | 25 - Wed - daily      | 26 - Thu - daily      | 
27 - Fri - daily      | 28 - Sat - daily      | 29 - Sun - weekly     | 30 - Mon - daily      | 31 - Tue - daily      | 
01 - Wed - yearly     | 02 - Thu - daily      | 03 - Fri - daily      | 04 - Sat - daily      | 05 - Sun - weekly     | 
06 - Mon - daily      | 07 - Tue - daily      | 08 - Wed - daily      | 09 - Thu - daily      | 10 - Fri - daily      | 
11 - Sat - daily      | 12 - Sun - weekly     | 13 - Mon - daily      | 14 - Tue - daily      | 15 - Wed - daily      | 
16 - Thu - daily      | 17 - Fri - daily      | 18 - Sat - daily      | 19 - Sun - weekly     | 20 - Mon - daily      | 

Below is the restic backup script setting a tag and then snapshot forget based on the tag.

As always this is NOT tested; use at your own risk.

My policy is:

  • weekly on Sunday
  • 01 of every month is a monthly except if 01 is also a new year which makes it a yearly
  • everything else is a daily

# cat desktop-restic.sh 
#!/bin/bash
### wake up backup server and restic backup to 3TB ZFS mirror
cd /root/scripts
./wake-backup-server.sh

source /root/.restic.env

## Quick and dirty logic for snapshot tagging
create_tag () {
  tag=daily
  if [ $(date +%a) == Sun ]; then tag=weekly ; fi
  if [ $(date +%d) == 01 ]; then
   tag=monthly
   if [ $(date +%b) == Jan ]; then
     tag=yearly
   fi
  fi
}

create_tag
restic backup -q /DATA /ARCHIVE --tag $tag --exclude "*.vdi" --exclude "*.iso" --exclude "*.ova" --exclude "*.img" --exclude "*.vmdk"

restic forget -q --tag daily --keep-last 7
restic forget -q --tag weekly --keep-last 4
restic forget -q --tag monthly --keep-last 12

if [ $tag == weekly ]; then
  restic -q prune
fi

sleep 1m
ssh user@192.168.1.250 sudo shutdown now


Nov 16

AWS Cloudwatch Cron

I was trying to schedule a once-a-week snapshot of an EBS volume and getting "Parameter ScheduleExpression is not valid". Turns out I missed something small. If you schedule using a cron expression, note this important requirement: one of the day-of-month or day-of-week values must be a question mark (?).

I was trying:

0 1 * * SUN *

What worked was:

0 1 ? * SUN *
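If you are creating the rule from the AWS CLI, a sketch might look like this (the rule name is hypothetical; put-rule is the CloudWatch Events/EventBridge command):

```shell
# 01:00 UTC every Sunday; note the ? in the day-of-month field
aws events put-rule \
  --name weekly-ebs-snapshot \
  --schedule-expression "cron(0 1 ? * SUN *)"
```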


Nov 01

Oracle OCI CLI Query

Some bash snippets using --query and jq to get OCI CLI output into bash variables.

Collect boot volume's id

SRCBOOTVOLID=$(oci --profile $profile bv boot-volume list --compartment-id "$source_compartment" --availability-domain "$source_ad" --query "data [?\"display-name\" == '$instance_name (Boot Volume)'].{id:id}" | jq -r '.[] | .id')

Collect instance ocid

INSTANCEID=$(oci --profile $profile compute instance launch --availability-domain $target_ad --compartment-id $sandbox_compartment --shape VM.Standard1.1 --display-name "burner-$instance_name-instance-for-custom-image" --source-boot-volume-id $BOOTVOLID --wait-for-state RUNNING --subnet-id $sandbox_subnetid --query "data .{id:id}" | jq -r '. | .id')

Stop instance and collect the id (or whatever you need from the json)

STOPPEDID=$(oci --profile $profile compute instance action --action STOP --instance-id $INSTANCEID --wait-for-state STOPPED --query "data .{id:id}" | jq -r '. | .id')

Collect the work-request-id to monitor in a loop after I export a custom image to object storage. Note that in this query the field I need is NOT in the data section.

WORKREQUESTID=$(oci --profile $profile compute image export to-object --image-id $IMAGEID --namespace faketenancy --bucket-name DR-Images --name $today-$instance_name-custom-image-object --query '"opc-work-request-id"' --raw-output)

while [ "$RESULT" != "SUCCEEDED" ]
do
  RESULT=$(oci --profile myprofile work-requests work-request get --work-request-id $WORKREQUESTID --query "data .{status:status}" | jq -r '. | .status')
  echo "running export job and $RESULT checking every 2 mins"
  sleep 2m
done
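These jq snippets are easy to try without touching OCI, using a canned response shaped like what `--query "data .{id:id}"` returns (the ocid below is made up):

```shell
# fake --query "data .{id:id}" result; the ocid is made up
resp='{ "id": "ocid1.instance.oc1..exampleuniqueID" }'
INSTANCEID=$(echo "$resp" | jq -r '. | .id')
echo "$INSTANCEID"   # ocid1.instance.oc1..exampleuniqueID
```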
