Hashicorp Vault Test

Recording a quick test of Vault.

## hashicorp vault: https://www.vaultproject.io
## download the vault executable and move it to /usr/sbin so it is on the PATH for this test; /usr/local/bin would be the better location

$ vault -autocomplete-install
$ exec $SHELL

$ vault server -dev
==> Vault server configuration:

             Api Address: http://127.0.0.1:8200
                     Cgo: disabled
         Cluster Address: https://127.0.0.1:8201
              Listener 1: tcp (addr: "127.0.0.1:8200", cluster address: "127.0.0.1:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
               Log Level: info
                   Mlock: supported: true, enabled: false
           Recovery Mode: false
                 Storage: inmem
                 Version: Vault v1.3.4

WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory
and starts unsealed with a single unseal key. The root token is already
authenticated to the CLI, so you can immediately begin using Vault.
...

## new terminal 
$ export VAULT_ADDR='http://127.0.0.1:8200'
$ export VAULT_DEV_ROOT_TOKEN_ID="<...>"

$ vault status
Key             Value
---             -----
Seal Type       shamir
Initialized     true
Sealed          false
Total Shares    1
Threshold       1
Version         1.3.4
Cluster Name    vault-cluster-f802bf67
Cluster ID      aa5c7006-9c7c-c394-f1f4-1a9dafc17688
HA Enabled      false

$ vault kv put secret/awscreds-iqonda AWS_SECRET_ACCESS_KEY=<...> AWS_ACCESS_KEY_ID=<...>
Key              Value
---              -----
created_time     2020-03-20T18:58:57.461120823Z
deletion_time    n/a
destroyed        false
version          4

$ vault kv get -format=json secret/awscreds-iqonda | jq -r '.data["data"]'
{
  "AWS_ACCESS_KEY_ID": "<...>",
  "AWS_SECRET_ACCESS_KEY": "<...>"
}

$ vault kv get -format=json secret/awscreds-iqonda | jq -r '.data["data"] | .AWS_ACCESS_KEY_ID'
<...>

$ vault kv get -format=json secret/awscreds-iqonda | jq -r '.data["data"] | .AWS_SECRET_ACCESS_KEY'
<...>
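To use these from scripts, the same two lookups can be exported as environment variables. A minimal sketch against the dev server and secret path above:

$ export AWS_ACCESS_KEY_ID=$(vault kv get -format=json secret/awscreds-iqonda | jq -r '.data["data"] | .AWS_ACCESS_KEY_ID')
$ export AWS_SECRET_ACCESS_KEY=$(vault kv get -format=json secret/awscreds-iqonda | jq -r '.data["data"] | .AWS_SECRET_ACCESS_KEY')
$ env | grep AWS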

Using AWS CLI Docker image

Recording my test of running the AWS CLI in a Docker container.

## get a base ubuntu image

# docker pull ubuntu
Using default tag: latest
latest: Pulling from library/ubuntu
...

## install the AWS CLI and commit to an image

# docker run -it --name awscli ubuntu /bin/bash
root@25b777958aad:/# apt update
root@25b777958aad:/# apt upgrade
root@25b777958aad:/# apt install awscli
root@25b777958aad:/# exit

# docker commit 25b777958aad awscli
sha256:9e1f0fef4051c86c3e1b9beecd20b29a3f11f86b5a63f1d03fefc41111f8fb47

## alias to run a docker image with cli commands

# alias awscli="docker run -it --name aws-iqonda --rm -e AWS_DEFAULT_REGION='us-east-1' -e AWS_ACCESS_KEY_ID='<...>' -e AWS_SECRET_ACCESS_KEY='<...>' --entrypoint aws awscli"

# awscli s3 ls | grep ls-al
2016-02-17 15:43:57 j.ls-al.com

# awscli ec2 describe-instances --query 'Reservations[*].Instances[*].[InstanceId,Tags[?Key==`Name`].Value|[0],State.Name,PrivateIpAddress,PublicIpAddress]' --output text
i-0e38cd17dfed16658	ec2server	running	172.31.48.7	xxx.xxx.xxx.xxx

## one way to hide key variables with pass/gpg https://blog.gruntwork.io/authenticating-to-aws-with-environment-variables-e793d6f6d02e

$ pass init <email@addr.ess>
$ pass insert awscreds-iqonda/aws-access-key-id
$ pass insert awscreds-iqonda/aws-secret-access-key

$ pass
Password Store
└── awscreds-iqonda
    ├── aws-access-key-id
    └── aws-secret-access-key

$ pass awscreds-iqonda/aws-access-key-id
<...>
$ pass awscreds-iqonda/aws-secret-access-key
<...>

$ export AWS_ACCESS_KEY_ID=$(pass awscreds-iqonda/aws-access-key-id)
$ export AWS_SECRET_ACCESS_KEY=$(pass awscreds-iqonda/aws-secret-access-key)

** TODO: how to batch this? This is fine for desktop use, but in a server scripting situation I do not want a GPG keyring password prompt, text or graphical. Maybe look at HashiCorp Vault?

$ env | grep AWS
AWS_SECRET_ACCESS_KEY=<...>
AWS_ACCESS_KEY_ID=<...>

## for convenience use an alias
$ alias awscli="sudo docker run -it --name aws-iqonda --rm -e AWS_DEFAULT_REGION='us-east-1' -e AWS_ACCESS_KEY_ID='$AWS_ACCESS_KEY_ID' -e AWS_SECRET_ACCESS_KEY='$AWS_SECRET_ACCESS_KEY' --entrypoint aws awscli"

$ awscli s3 ls 

Some useful References:

  • https://www.tecmint.com/install-run-and-delete-applications-inside-docker-containers/
  • https://blog.gruntwork.io/authenticating-to-aws-with-environment-variables-e793d6f6d02e
  • https://aws.amazon.com/blogs/aws/aws-secrets-manager-store-distribute-and-rotate-credentials-securely/
  • https://lostechies.com/gabrielschenker/2016/09/21/easing-the-use-of-the-aws-cli/
  • https://medium.com/@hudsonmendes/docker-have-a-ubuntu-development-machine-within-seconds-from-windows-or-mac-fd2f30a338e4
  • https://unix.stackexchange.com/questions/60213/gpg-asks-for-password-even-with-passphrase

Ubuntu server 20.04 zfs root and OCI

My experiment to:

  • create an Ubuntu 20.04 (not final release as of Feb 20) server in Virtualbox
  • setup server with a ZFS root disk
  • enable serial console
  • Virtualbox export to OCI

As you probably know, newer desktop versions of Ubuntu offer ZFS for the root volume during installation. I am not sure whether that is true for Ubuntu 20.04 server installs, and when I looked at the 19.10 ZFS installation I did not necessarily want the ZFS layout it used. This experiment is my own custom case and was also tested on Ubuntu 16.04 and 18.04.

Note this is an experiment: ZFS layout, boot partition type, LUKS, EFI, multiple (mirrored) boot disks and netplan are all debatable configuration choices. Mine may not be ideal, but it works for my use case.

Also my goal here was to export a bootable/usable OCI (Oracle Cloud Infrastructure) compute instance.

Start by booting a recent desktop live CD; since I am testing 20.04 (focal) I used that. In the live CD environment open a terminal, install ssh via sudo/apt, start the ssh service and set the ubuntu user's password, roughly as sketched below.
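A sketch of those live-session steps (assuming the default ubuntu live user):

$ passwd                                         ## set a password for the ubuntu user
$ sudo apt update && sudo apt install --yes ssh
$ sudo systemctl start ssh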

$ ssh ubuntu@192.168.1.142
$ sudo -i

##
apt-add-repository universe
apt update
apt install --yes debootstrap gdisk zfs-initramfs

## find correct device name for below
DISK=/dev/disk/by-id/ata-VBOX_HARDDISK_VB26c080f2-2bd16227
USER=ubuntu
HOST=server
POOL=ubuntu

##
sgdisk --zap-all $DISK
sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK
sgdisk     -n2:1M:+512M   -t2:EF00 $DISK
sgdisk     -n3:0:+1G      -t3:BF01 $DISK
sgdisk     -n4:0:0        -t4:BF01 $DISK
sgdisk --print $DISK

##
zpool create -o ashift=12 -d \
    -o feature@async_destroy=enabled \
    -o feature@bookmarks=enabled \
    -o feature@embedded_data=enabled \
    -o feature@empty_bpobj=enabled \
    -o feature@enabled_txg=enabled \
    -o feature@extensible_dataset=enabled \
    -o feature@filesystem_limits=enabled \
    -o feature@hole_birth=enabled \
    -o feature@large_blocks=enabled \
    -o feature@lz4_compress=enabled \
    -o feature@spacemap_histogram=enabled \
    -o feature@userobj_accounting=enabled \
    -O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \
    -O normalization=formD -O relatime=on -O xattr=sa \
    -O mountpoint=/ -R /mnt bpool ${DISK}-part3

zpool create -o ashift=12 \
    -O acltype=posixacl -O canmount=off -O compression=lz4 \
    -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
    -O mountpoint=/ -R /mnt rpool ${DISK}-part4

zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=off -o mountpoint=none bpool/BOOT

zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu

zfs create -o canmount=noauto -o mountpoint=/boot bpool/BOOT/ubuntu
zfs mount bpool/BOOT/ubuntu

## Note: I skipped creating separate datasets for /home, /root, /var/lib, /var/log etc.
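## if you do want separate datasets, the upstream guide creates them roughly like this (untested here, adjust to taste):
zfs create                     rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off     rpool/var
zfs create                     rpool/var/log
zfs create                     rpool/var/spool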

##
debootstrap focal /mnt
zfs set devices=off rpool
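## HOST was set earlier but not used above; to set the hostname in the new system (as in the upstream guide):
echo $HOST > /mnt/etc/hostname
echo "127.0.1.1       $HOST" >> /mnt/etc/hosts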

## 
cat > /mnt/etc/netplan/01-netcfg.yaml<< EOF
network:
  version: 2
  ethernets:
    enp0s3:
      dhcp4: true
EOF

##
cat > /mnt/etc/apt/sources.list<< EOF
deb http://archive.ubuntu.com/ubuntu focal main universe
EOF

##
mount --rbind /dev  /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys  /mnt/sys
chroot /mnt /usr/bin/env DISK=$DISK bash --login

##
locale-gen --purge "en_US.UTF-8"
update-locale LANG=en_US.UTF-8 LANGUAGE=en_US
dpkg-reconfigure --frontend noninteractive locales
echo "US/Central" > /etc/timezone    
dpkg-reconfigure -f noninteractive tzdata

##
passwd

##
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes grub-pc
grub-probe /boot

## update-initramfs -u -k all   <- this does not work here; use the version-specific form below
KERNEL=`ls /usr/lib/modules/ | cut -d/ -f1 | sed 's/linux-image-//'`
update-initramfs -u -k $KERNEL

# edit /etc/default/grub
GRUB_DEFAULT=0
#GRUB_TIMEOUT_STYLE=hidden
GRUB_TIMEOUT=5
GRUB_CMDLINE_LINUX_DEFAULT=""
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/ubuntu console=tty1 console=ttyS0,115200"
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"

##
update-grub
grub-install $DISK

##
cat > /etc/systemd/system/zfs-import-bpool.service<< EOF
[Unit]
  DefaultDependencies=no
  Before=zfs-import-scan.service
  Before=zfs-import-cache.service
    
[Service]
  Type=oneshot
  RemainAfterExit=yes
  ExecStart=/sbin/zpool import -N -o cachefile=none bpool
    
[Install]
  WantedBy=zfs-import.target
EOF

systemctl enable zfs-import-bpool.service

##
zfs set mountpoint=legacy bpool/BOOT/ubuntu
echo bpool/BOOT/ubuntu /boot zfs \
    nodev,relatime,x-systemd.requires=zfs-import-bpool.service 0 0 >> /etc/fstab
zfs snapshot bpool/BOOT/ubuntu@install
zfs snapshot rpool/ROOT/ubuntu@install

##
systemctl enable serial-getty@ttyS0
apt install ssh
systemctl enable ssh

##
exit
##
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export -a

##
reboot

** detach live cd

NOTE: grub showed a prompt on reboot but no menu entries; the commands below fixed it

KERNEL=$(ls /usr/lib/modules/ | cut -d/ -f1 | sed 's/linux-image-//')
update-initramfs -u -k $KERNEL
update-grub

REF: https://github.com/zfsonlinux/zfs/wiki/Ubuntu-18.04-Root-on-ZFS

Bash Read Json Config File

A couple of things here. I wanted to write some restic scripts that use a configuration file. The restic developers are working on this functionality for restic itself, possibly using TOML.

Meanwhile I am trying JSON, since I can definitely use bash/JSON for other applications. As you know, bash is not great at this kind of thing, specifically arrays. So this example reads a configuration file and processes the JSON. To complicate things further, my JSON typically needs arrays or lists of values, as you can see in the restic example for folders, excludes and tags.

You will also note a known bash problem: when you pipe into a while loop, the loop runs in a subshell, so variables set inside it are not visible in the main shell. That is why appending to a variable inside the while loop would normally produce nothing. Since bash 4.2 you can use “shopt -s lastpipe” to get around this. Apparently this is not a problem in ksh.
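A tiny standalone demo of the subshell behaviour and the lastpipe workaround (bash 4.2+; lastpipe only takes effect when job control is off, which is the case in scripts):

#!/bin/bash

count=0
printf 'a\nb\nc\n' | while read -r line; do count=$((count+1)); done
echo "without lastpipe: $count"    # prints 0 - the loop ran in a subshell

shopt -s lastpipe
count=0
printf 'a\nb\nc\n' | while read -r line; do count=$((count+1)); done
echo "with lastpipe: $count"       # prints 3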

This is not a working restic script; it is a script that reads a configuration file. It just happens to be for something I am going to do with restic.

Example json config file.

$ cat restic-jobs.json 
{ "Jobs":
  [
   {
    "jobname": "aws-s3",
    "repo": "sftp:myuser@192.168.1.112:/TANK/RESTIC-REPO",
    "sets":
      [
       {
        "folders": [ "/DATA" ],
        "excludes": [ ".snapshots","temp"],
        "tags": [ "data","biz" ]
       },
       {
        "folders": [ "/ARCHIVE" ],
        "excludes": [ ".snapshots","temp"],
        "tags": [ "archive","biz" ]
       }
      ],
      "quiet": "true"
    },
    {
     "jobname": "azure-onedrive",
     "repo":  "rclone:azure-onedrive:restic-backups",
     "sets":
       [
       {
        "folders": [ "/DATA" ],
        "excludes": [ ".snapshots","temp"],
        "tags": [ "data","biz" ]
       },
       {
        "folders": [ "/ARCHIVE" ],
        "excludes": [ ".snapshots","temp"],
        "tags": [ "archive","biz" ]
       }
      ],
     "quiet": "true"
    }
  ]
} 

Script details.

cat restic-jobs.sh 
#!/bin/bash
#v0.9.1

JOB="aws-s3"
eval "$(jq --arg JOB ${JOB} -r '.Jobs[] | select(.jobname==$JOB) | del(.sets) | to_entries[] | .key + "=\"" + .value + "\""' restic-jobs.json)"
if [[ "$jobname" == "" ]]; then
  echo "no job found in config: " $JOB
  exit
fi

echo "found: $jobname"

#sets=$(jq --arg JOB ${JOB} -r '.Jobs[] | select (.jobname==$JOB) | .sets | .[]' restic-jobs.json )

echo

sets=$(jq --arg JOB ${JOB} -r '.Jobs[] | select (.jobname==$JOB)' restic-jobs.json)

backup_jobs=()
## need this for bash issue with variables and pipe subshell
shopt -s lastpipe

echo $sets | jq -rc '.sets[]' | while IFS='' read set;do
    cmd_line="restic backup -q --json "

    folders=$(echo "$set" | jq -r '.folders | .[]')
    for st in $folders; do cmd_line+=" $st"; done
    excludes=$(echo "$set" | jq -r '.excludes | .[]')
    for st in $excludes; do cmd_line+=" --exclude $st"; done
    tags=$(echo "$set" | jq -r '.tags | .[]')
    for st in $tags; do cmd_line+=" --tag $st"; done

    backup_jobs+=("$cmd_line")
done

for i in "${backup_jobs[@]}"; do
  echo "cmd_line: $i"
done

Script run example. Note I am not passing the job name; it is just hard-coded at the top for my test.

./restic-jobs.sh 
found: aws-s3

cmd_line: restic backup -q --json  /DATA --exclude .snapshots --exclude temp --tag data --tag biz
cmd_line: restic backup -q --json  /ARCHIVE --exclude .snapshots --exclude temp --tag data --tag biz

Bash array json restic snapshots and jq

As you know, bash is not ideal with multi-dimensional arrays. Frequently I find myself wanting to read something like JSON into bash and loop over it. There are many ways to do this, including readarray/mapfile (see the sketch after the example). I found the following to work best for me. Note JSON values can be lists, so I collapse those with jq’s join. Example:

# cat restic-loop-snaps.sh 
#!/bin/bash

function loopOverArray(){

   restic snapshots --json | jq -r '.' | jq -c '.[]'| while read i; do
	id=$(echo "$i" | jq -r '.| .short_id')
	ctime=$(echo "$i" | jq -r '.| .time')
	hostname=$(echo "$i" | jq -r '.| .hostname')
	paths=$(echo "$i" | jq -r '. | .paths | join(",")')
	tags=$(echo "$i" | jq -r '. | .tags | join(",")')
	printf "%-10s - %-40s - %-20s - %-30s - %-20s\n" $id $ctime $hostname $paths $tags
    done
}

loopOverArray

# ./restic-loop-snaps.sh 
0a71b5d4   - 2019-05-31T05:03:20.655922639-05:00      - pop-os               - /DATA/MyWorkDocs        -                     
...
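For reference, the readarray/mapfile approach mentioned above would look roughly like this; slurping each snapshot object into a bash array element first also avoids the pipe-into-while subshell issue:

#!/bin/bash

# one compact JSON object per array element, then loop normally
mapfile -t snaps < <(restic snapshots --json | jq -c '.[]')
for i in "${snaps[@]}"; do
    echo "$i" | jq -r '[.short_id, .time, .hostname, (.paths|join(","))] | @tsv'
done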

Restic snapshot detail json to csv

Restic can show details of a snapshot. Sometimes you want that as CSV, but in the JSON output paths, excludes and tags are lists, which will choke the @csv jq filter. Furthermore, not all snapshots have the excludes key. Here are some snippets that solve both: use join to collapse the lists and use if to test whether the key exists.

# restic -r $REPO snapshots --last --json | jq -r '.[] | [.hostname,.short_id,.time,(.paths|join(",")),if (.excludes) then (.excludes|join(",")) else empty end]'
[
  "bkupserver.domain.com",
  "c56d3e2e",
  "2019-10-25T00:10:01.767408581-05:00",
  "/etc,/home,/root,/u01/backuplogs,/var/log,/var/spool/cron",
  "**/diag/**,/var/spool/lastlog"
]

And using the @csv filter

# restic -r $REPO snapshots --last --json | jq -r '.[] | [.hostname,.short_id,.time,(.paths|join(",")),if (.excludes) then (.excludes|join(",")) else empty end] | @csv'
"bkupserver.domain.com","c56d3e2e","2019-10-25T00:10:01.767408581-05:00","/etc,/home,/root,/u01/backuplogs,/var/log,/var/spool/cron","**/diag/**,/var/spool/lastlog"

zfsbackup-go test with minio server

Recording my test with zfsbackup-go. While playing around with backup/DR/object storage I also compared this concept with a previous test using restic/rclone against object storage.

In general, ZFS snapshots and replication should work much better for file systems containing huge numbers of files. Most solutions struggle with millions of files, whether rsync at the file level or restic/rclone at the object storage level; walking the tree is just never efficient. This test works well but has not been scaled yet. I plan to work on that, as well as on seeing how well the bucket can be synced to different regions.

Minio server

Tip: minio server has a nice browser interface

# docker run -p 9000:9000 --name minio1 -e "MINIO_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE" -e "MINIO_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" -v /DATA/minio-repos/:/minio-repos minio/minio server /minio-repos

 You are running an older version of MinIO released 1 week ago 
 Update: docker pull minio/minio:RELEASE.2019-07-17T22-54-12Z 


Endpoint:  http://172.17.0.2:9000  http://127.0.0.1:9000

Browser Access:
   http://172.17.0.2:9000  http://127.0.0.1:9000

Object API (Amazon S3 compatible):
   Go:         https://docs.min.io/docs/golang-client-quickstart-guide
   Java:       https://docs.min.io/docs/java-client-quickstart-guide
   Python:     https://docs.min.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.min.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.min.io/docs/dotnet-client-quickstart-guide

server 1:

This server simulates our “prod” server. We create an initial data set in /DATA, take a snapshot and back it up to object storage.
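Not shown here: this assumes a ZFS pool named DATA already exists on the server and that a zfs-poc bucket has been created in MinIO (the browser interface works fine for that). The pool could be as simple as (device name is just an example):

# zpool create DATA /dev/sdb
# zfs list DATA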

# rsync -a /media/sf_DATA/MyWorkDocs /DATA/

# du -sh /DATA/MyWorkDocs/
1.5G	/DATA/MyWorkDocs/

# export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
# export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# export AWS_S3_CUSTOM_ENDPOINT=http://192.168.1.112:9000
# export AWS_REGION=us-east-1

# zfs snapshot DATA@20190721-0752

# /usr/local/bin/zfsbackup-go send --full DATA s3://zfs-poc
2019/07/21 07:53:12 Ignoring user provided number of cores (2) and using the number of detected cores (1).
Done.
	Total ZFS Stream Bytes: 1514016976 (1.4 GiB)
	Total Bytes Written: 1176757570 (1.1 GiB)
	Elapsed Time: 1m17.522630438s
	Total Files Uploaded: 7

# /usr/local/bin/zfsbackup-go list s3://zfs-poc
2019/07/21 07:56:57 Ignoring user provided number of cores (2) and using the number of detected cores (1).
Found 1 backup sets:

Volume: DATA
	Snapshot: 20190721-0752 (2019-07-21 07:52:31 -0500 CDT)
	Replication: false
	Archives: 6 - 1176757570 bytes (1.1 GiB)
	Volume Size (Raw): 1514016976 bytes (1.4 GiB)
	Uploaded: 2019-07-21 07:53:12.42972167 -0500 CDT (took 1m16.313538867s)


There are 4 manifests found locally that are not on the target destination.

server 2:

This server is a possible DR or replacement server; the idea is that it lives somewhere else, preferably another cloud region or data center.

# export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
# export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# export AWS_S3_CUSTOM_ENDPOINT=http://192.168.1.112:9000
# export AWS_REGION=us-east-1
# /usr/local/bin/zfsbackup-go list s3://zfs-poc
2019/07/21 07:59:16 Ignoring user provided number of cores (2) and using the number of detected cores (1).
Found 1 backup sets:

Volume: DATA
	Snapshot: 20190721-0752 (2019-07-21 07:52:31 -0500 CDT)
	Replication: false
	Archives: 6 - 1176757570 bytes (1.1 GiB)
	Volume Size (Raw): 1514016976 bytes (1.4 GiB)
	Uploaded: 2019-07-21 07:53:12.42972167 -0500 CDT (took 1m16.313538867s)

# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
DATA  2.70M  96.4G    26K  /DATA
# zfs list -t snapshot
no datasets available
# ls /DATA/

** using -F. This is a cold-DR style test with no existing infrastructure/ZFS datasets on the target system
# /usr/local/bin/zfsbackup-go receive --auto DATA s3://zfs-poc DATA -F
2019/07/21 08:05:28 Ignoring user provided number of cores (2) and using the number of detected cores (1).
2019/07/21 08:06:42 Done. Elapsed Time: 1m13.968871681s
2019/07/21 08:06:42 Done.
# ls /DATA/
MyWorkDocs
# du -sh /DATA/MyWorkDocs/
1.5G	/DATA/MyWorkDocs/
# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
DATA  1.41G  95.0G  1.40G  /DATA
# zfs list -t snapshot
NAME                 USED  AVAIL  REFER  MOUNTPOINT
DATA@20190721-0752   247K      -  1.40G  -

That concludes one test. In theory that is a cold DR situation where nothing is really ready until you need it: think build a server and recover /DATA from the ZFS backup in object storage. The initial restore will take a long time, depending on your data size.

Read on if you want to go more towards pilot-light or warm DR: we can run incremental backups and then, on the target server, keep receiving snapshots periodically into our target ZFS file system DATA (a cron sketch follows after the incremental example below). You may ask why not just do real ZFS send/receive with no object storage in between. There is no good answer, except that there are many ways to solve DR and this is one of them. In this case I could argue that object storage is cheap and has very good redundancy/availability features, and that replication between regions may use a fast/cheap back-haul channel, whereas your VPN or FastConnect WAN between regions may be slow and/or expensive.

You could also decide that something between cold and warm DR is where you want to be, and therefore only apply the full DATA receive when you are ready. That could mean a lot of snapshots to apply afterwards; or maybe not, I have not checked that aspect of the recovery process.

Regardless, I like the idea of leveraging ZFS with object storage; you may not have a use for this, but I definitely will.

Incremental snapshots:

server 1:

Add more data to source, snapshot and backup to object storage.

# rsync -a /media/sf_DATA/MySrc /DATA/
# du -sh /DATA/MySrc/
1.1M	/DATA/MySrc/

# zfs snapshot DATA@20190721-0809
# zfs list -t snapshot
NAME                 USED  AVAIL  REFER  MOUNTPOINT
DATA@20190721-0752    31K      -  1.40G  -
DATA@20190721-0809     0B      -  1.41G  -

# /usr/local/bin/zfsbackup-go send --increment DATA s3://zfs-poc
2019/07/21 08:10:49 Ignoring user provided number of cores (2) and using the number of detected cores (1).
Done.
	Total ZFS Stream Bytes: 1202792 (1.1 MiB)
	Total Bytes Written: 254909 (249 KiB)
	Elapsed Time: 228.123591ms
	Total Files Uploaded: 2

# /usr/local/bin/zfsbackup-go list s3://zfs-poc
2019/07/21 08:11:17 Ignoring user provided number of cores (2) and using the number of detected cores (1).
Found 2 backup sets:

Volume: DATA
	Snapshot: 20190721-0752 (2019-07-21 07:52:31 -0500 CDT)
	Replication: false
	Archives: 6 - 1176757570 bytes (1.1 GiB)
	Volume Size (Raw): 1514016976 bytes (1.4 GiB)
	Uploaded: 2019-07-21 07:53:12.42972167 -0500 CDT (took 1m16.313538867s)


Volume: DATA
	Snapshot: 20190721-0809 (2019-07-21 08:09:47 -0500 CDT)
	Incremental From Snapshot: 20190721-0752 (2019-07-21 07:52:31 -0500 CDT)
	Intermediary: false
	Replication: false
	Archives: 1 - 254909 bytes (249 KiB)
	Volume Size (Raw): 1202792 bytes (1.1 MiB)
	Uploaded: 2019-07-21 08:10:49.3280703 -0500 CDT (took 214.139056ms)

There are 4 manifests found locally that are not on the target destination.

server 2:

# /usr/local/bin/zfsbackup-go list s3://zfs-poc
2019/07/21 08:11:44 Ignoring user provided number of cores (2) and using the number of detected cores (1).
Found 2 backup sets:

Volume: DATA
	Snapshot: 20190721-0752 (2019-07-21 07:52:31 -0500 CDT)
	Replication: false
	Archives: 6 - 1176757570 bytes (1.1 GiB)
	Volume Size (Raw): 1514016976 bytes (1.4 GiB)
	Uploaded: 2019-07-21 07:53:12.42972167 -0500 CDT (took 1m16.313538867s)


Volume: DATA
	Snapshot: 20190721-0809 (2019-07-21 08:09:47 -0500 CDT)
	Incremental From Snapshot: 20190721-0752 (2019-07-21 07:52:31 -0500 CDT)
	Intermediary: false
	Replication: false
	Archives: 1 - 254909 bytes (249 KiB)
	Volume Size (Raw): 1202792 bytes (1.1 MiB)
	Uploaded: 2019-07-21 08:10:49.3280703 -0500 CDT (took 214.139056ms)

** not sure why I need to force (-F); maybe because the dataset is mounted? Without it I got messages like:
** cannot receive incremental stream: destination DATA has been modified since most recent snapshot
*** 2019/07/21 08:12:25 Error while trying to read from volume DATA|20190721-0752|to|20190721-0809.zstream.gz.vol1 - io: read/write on closed pipe

# /usr/local/bin/zfsbackup-go receive --auto DATA s3://zfs-poc DATA -F
2019/07/21 08:12:53 Ignoring user provided number of cores (2) and using the number of detected cores (1).
2019/07/21 08:12:54 Done. Elapsed Time: 379.712693ms
2019/07/21 08:12:54 Done.

# ls /DATA/
MySrc  MyWorkDocs
# du -sh /DATA/MySrc/
1.1M	/DATA/MySrc/
# zfs list -t snapshot
NAME                 USED  AVAIL  REFER  MOUNTPOINT
DATA@20190721-0752    30K      -  1.40G  -
DATA@20190721-0809    34K      -  1.41G  -
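To turn this into a warm-DR loop, the snapshot/send and receive commands above could simply run on a schedule. A rough cron sketch (schedule and % escaping are the only additions; the AWS_* variables shown earlier must also be available to cron, e.g. via a wrapper script):

## server 1: snapshot and send an incremental every hour
0 * * * * zfs snapshot DATA@$(date +\%Y\%m\%d-\%H\%M) && /usr/local/bin/zfsbackup-go send --increment DATA s3://zfs-poc

## server 2: receive whatever is new
30 * * * * /usr/local/bin/zfsbackup-go receive --auto DATA s3://zfs-poc DATA -F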

LINK: https://github.com/someone1/zfsbackup-go

OCI Bucket Delete Fail

If you have trouble deleting an object storage bucket in Oracle Cloud Infrastructure you may have to clear old multipart uploads. The message may look something like this: Bucket named ‘DR-Validation’ has pending multipart uploads. Stop all multipart uploads first.

At the time the only way I could do this was through the API; it did not appear that the CLI or Console could clear out the uploads. Below is a little Python that may help, just to show the idea. Of course, if you have thousands of multipart uploads (yes, it is possible) you will need to change it; this was written for only one or two.

#!/usr/bin/python
#: Script Name  : lobjectparts.py
#: Author       : Riaan Rossouw
#: Date Created : June 13, 2019
#: Date Updated : July 18, 2019
#: Description  : Python Script to list multipart uploads
#: Examples     : lobjectparts.py -t tenancy -r region -b bucket
#:              : lobjectparts.py --tenancy <ocid> --region  <region> --bucket <bucket>

## Will need the api modules
## new: https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/
## old: https://oracle-bare-metal-cloud-services-python-sdk.readthedocs.io/en/latest/installation.html#install
## https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/api/object_storage/client/oci.object_storage.ObjectStorageClient.html

from __future__ import print_function
import os, optparse, sys, time, datetime
import oci

# compatibility: the confirm prompt below uses input(); on Python 2 map it to raw_input
try:
  input = raw_input
except NameError:
  pass

__version__ = '0.9.1'
optdesc = 'This script is used to list multipart uploads in a bucket'

parser = optparse.OptionParser(version='%prog version ' + __version__)
parser.formatter.max_help_position = 50
parser.add_option('-t', '--tenancy', help='Specify Tenancy ocid', dest='tenancy', action='append')
parser.add_option('-r', '--region', help='region', dest='region', action='append')
parser.add_option('-b', '--bucket', help='bucket', dest='bucket', action='append')

opts, args = parser.parse_args()

def showMultipartUploads(identity, bucket_name):
  object_storage = oci.object_storage.ObjectStorageClient(config)
  namespace_name = object_storage.get_namespace().data
  uploads = object_storage.list_multipart_uploads(namespace_name, bucket_name, limit = 1000).data
  print(' {:35}  | {:15} | {:30} | {:35} | {:20}'.format('bucket','namespace','object','time_created','upload_id'))
  for o in uploads:
    print(' {:35}  | {:15} | {:30} | {:35} | {:20}'.format(o.bucket, o.namespace, o.object, str(o.time_created), o.upload_id))
    confirm = input("Confirm if you want to abort this multipart upload (Y/N): ")
    if confirm == "Y":
      response = object_storage.abort_multipart_upload(o.namespace, o.bucket, o.object, o.upload_id).data
    else:
      print ("Chose to not do the abort action on this multipart upload at this time...")

def main():
  mandatories = ['tenancy','region','bucket']
  for m in mandatories:
    if not opts.__dict__[m]:
      print ("mandatory option is missing\n")
      parser.print_help()
      exit(-1)

  print ('Multipart Uploads')
  config['region'] = opts.region[0]
  identity = oci.identity.IdentityClient(config)
  showMultipartUploads(identity, opts.bucket[0])

if __name__ == '__main__':
  config = oci.config.from_file("/root/.oci/config","oci.api")
  main()
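Run it with the tenancy OCID, region and bucket name, for example (values here are placeholders):

# ./lobjectparts.py --tenancy ocid1.tenancy.oc1..<...> --region us-ashburn-1 --bucket DR-Validation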

Linux Screen Utility Buffer Scrolling

If you use the Linux screen utility a lot for long-running jobs, you may have experienced scrolling issues. The quickest fix is to press Ctrl-a and then Escape to enter copy/scrollback mode. You should now be able to use the Up/Down keys or even PgUp/PgDn. Press Escape again to exit scrolling.
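If the scrollback history itself is too short, the buffer size can be raised in ~/.screenrc (the default is quite small), for example:

defscrollback 10000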

In my case the terminal is usually running in a Virtualbox guest, so you may also have to take Virtualbox key grabbing/assignment into account.

Bash Array Dynamic Name

Sometimes you want to have dynamic array names to simplify code. Below is one way of making the array name dynamic in a loop.

#!/bin/bash

section1=(
 fs-01
 fs-02
)
section2=(
 fs-03
)

function snap() {
  tag=$1
  echo
  echo "TAG: $tag"
  x=$tag
  var=$x[@]
  for f in "${!var}"
  do
    echo "fss: $f"
  done
}

snap "section1"
snap "section2"

And output like this.

# ./i.sh

TAG: section1
fss: fs-01
fss: fs-02

TAG: section2
fss: fs-03
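
For reference, on bash 4.3+ the snap function can be written with a nameref (local -n) instead of building the indirection string by hand; a sketch of a drop-in replacement:

function snap() {
  tag=$1
  local -n fss=$tag    # nameref: fss now refers to the array whose name is in $tag
  echo
  echo "TAG: $tag"
  for f in "${fss[@]}"
  do
    echo "fss: $f"
  done
}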