HashiCorp Vault Test

Recording a quick test of Vault.

## HashiCorp Vault: https://www.vaultproject.io
## download the vault executable and move it to /usr/sbin so it is on the PATH for this test; it should really live in /usr/local/bin
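
For reference, a sketch of the download step itself (the URL pattern is assumed from releases.hashicorp.com; the version matches the server output below):

$ wget https://releases.hashicorp.com/vault/1.3.4/vault_1.3.4_linux_amd64.zip
$ unzip vault_1.3.4_linux_amd64.zip
$ sudo mv vault /usr/local/bin/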

$ vault -autocomplete-install
$ exec $SHELL

$ vault server -dev
==> Vault server configuration:

             Api Address: http://127.0.0.1:8200
                     Cgo: disabled
         Cluster Address: https://127.0.0.1:8201
              Listener 1: tcp (addr: "127.0.0.1:8200", cluster address: "127.0.0.1:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
               Log Level: info
                   Mlock: supported: true, enabled: false
           Recovery Mode: false
                 Storage: inmem
                 Version: Vault v1.3.4

WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory
and starts unsealed with a single unseal key. The root token is already
authenticated to the CLI, so you can immediately begin using Vault.
...

## new terminal 
$ export VAULT_ADDR='http://127.0.0.1:8200'
$ export VAULT_DEV_ROOT_TOKEN_ID="<...>"

$ vault status
Key             Value
---             -----
Seal Type       shamir
Initialized     true
Sealed          false
Total Shares    1
Threshold       1
Version         1.3.4
Cluster Name    vault-cluster-f802bf67
Cluster ID      aa5c7006-9c7c-c394-f1f4-1a9dafc17688
HA Enabled      false

$ vault kv put secret/awscreds-iqonda {AWS_SECRET_ACCESS_KEY=<...>,AWS_ACCESS_KEY_ID=<...>}
Key              Value
---              -----
created_time     2020-03-20T18:58:57.461120823Z
deletion_time    n/a
destroyed        false
version          4

$ vault kv get -format=json secret/awscreds-iqonda | jq -r '.data["data"]'
{
  "AWS_ACCESS_KEY_ID": "<...>",
  "AWS_SECRET_ACCESS_KEY": "<...>"
}

$ vault kv get -format=json secret/awscreds-iqonda | jq -r '.data["data"] | .AWS_ACCESS_KEY_ID'
<...>

$ vault kv get -format=json secret/awscreds-iqonda | jq -r '.data["data"] | .AWS_SECRET_ACCESS_KEY'
<...>
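
To feed these into the AWS CLI work below, the values can go straight into environment variables. A small sketch using the same kv path as above (vault's -field flag avoids the jq step):

$ export AWS_ACCESS_KEY_ID=$(vault kv get -field=AWS_ACCESS_KEY_ID secret/awscreds-iqonda)
$ export AWS_SECRET_ACCESS_KEY=$(vault kv get -field=AWS_SECRET_ACCESS_KEY secret/awscreds-iqonda)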

Using AWS CLI Docker image

Recording my test running the AWS CLI in a Docker image.

## get a base ubuntu image

# docker pull ubuntu
Using default tag: latest
latest: Pulling from library/ubuntu
...

## install the AWS CLI and commit to an image

# docker run -it --name awscli ubuntu /bin/bash
root@25b777958aad:/# apt update
root@25b777958aad:/# apt upgrade
root@25b777958aad:/# apt install awscli
root@25b777958aad:/# exit

# docker commit 25b777958aad awscli
sha256:9e1f0fef4051c86c3e1b9beecd20b29a3f11f86b5a63f1d03fefc41111f8fb47

## alias to run a docker image with cli commands

# alias awscli="docker run -it --name aws-iqonda --rm -e AWS_DEFAULT_REGION='us-east-1' -e AWS_ACCESS_KEY_ID='<...>' -e AWS_SECRET_ACCESS_KEY='<...>' --entrypoint aws awscli"

# awscli s3 ls | grep ls-al
2016-02-17 15:43:57 j.ls-al.com

# awscli ec2 describe-instances --query 'Reservations[*].Instances[*].[InstanceId,Tags[?Key==`Name`].Value|[0],State.Name,PrivateIpAddress,PublicIpAddress]' --output text
i-0e38cd17dfed16658	ec2server	running	172.31.48.7	xxx.xxx.xxx.xxx

## one way to hide key variables with pass/gpg https://blog.gruntwork.io/authenticating-to-aws-with-environment-variables-e793d6f6d02e

$ pass init <email@addr.ess>
$ pass insert awscreds-iqonda/aws-access-key-id
$ pass insert awscreds-iqonda/aws-secret-access-key

$ pass
Password Store
└── awscreds-iqonda
    ├── aws-access-key-id
    └── aws-secret-access-key

$ pass awscreds-iqonda/aws-access-key-id
<...>
$ pass awscreds-iqonda/aws-secret-access-key
<...>

$ export AWS_ACCESS_KEY_ID=$(pass awscreds-iqonda/aws-access-key-id)
$ export AWS_SECRET_ACCESS_KEY=$(pass awscreds-iqonda/aws-secret-access-key)

** TODO: how to batch this? This is fine for desktop use, but I do not want a gpg keyring password prompt (text or graphical) in a server scripting situation. Maybe look at HashiCorp Vault?
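
One possible non-interactive angle, strictly a sketch with loud assumptions: GnuPG 2.1+ with loopback pinentry allowed by gpg-agent, the keyring passphrase kept in a hypothetical root-only file /root/.gpg-passphrase, and the fact that pass stores each entry as a .gpg file under ~/.password-store:

$ export AWS_ACCESS_KEY_ID=$(gpg --batch --quiet --pinentry-mode loopback \
    --passphrase-file /root/.gpg-passphrase \
    --decrypt ~/.password-store/awscreds-iqonda/aws-access-key-id.gpg)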

$ env | grep AWS
AWS_SECRET_ACCESS_KEY=<...>
AWS_ACCESS_KEY_ID=<...>

## for convenience use an alias
$ alias awscli="sudo docker run -it --name aws-iqonda --rm -e AWS_DEFAULT_REGION='us-east-1' -e AWS_ACCESS_KEY_ID='$AWS_ACCESS_KEY_ID' -e AWS_SECRET_ACCESS_KEY='$AWS_SECRET_ACCESS_KEY' --entrypoint aws awscli"

$ awscli s3 ls 

Some useful References:

  • https://www.tecmint.com/install-run-and-delete-applications-inside-docker-containers/
  • https://blog.gruntwork.io/authenticating-to-aws-with-environment-variables-e793d6f6d02e
  • https://aws.amazon.com/blogs/aws/aws-secrets-manager-store-distribute-and-rotate-credentials-securely/
  • https://lostechies.com/gabrielschenker/2016/09/21/easing-the-use-of-the-aws-cli/
  • https://medium.com/@hudsonmendes/docker-have-a-ubuntu-development-machine-within-seconds-from-windows-or-mac-fd2f30a338e4
  • https://unix.stackexchange.com/questions/60213/gpg-asks-for-password-even-with-passphrase

Restic PowerShell Script

Just for my reference, my quick and dirty Windows backup script for restic. I left some of the rclone and jq lines in but commented out; depending on how you handle logging they may be helpful. In one project I pushed the summary JSON output to an S3 bucket. In this version I run a second restic job to back up the log, since the initial job cannot contain the log that is still being generated, of course.

With this way of logging, i.e. keeping the logs in restic rather than in rclone/jq/buckets, when reporting you would dump the log from the latest snapshot like so:

$ restic -r s3:s3.amazonaws.com/restic-windows-backup-poc.<domain>.com dump latest /C/Software/restic-backup/jobs/desktop-l0qamrb/2020-03-10-1302-restic-backup.json | jq
 {
   "message_type": "summary",
   "files_new": 0,
   "files_changed": 1,
   "files_unmodified": 12,
   "dirs_new": 0,
   "dirs_changed": 2,
   "dirs_unmodified": 3,
   "data_blobs": 1,
   "tree_blobs": 3,
   "data_added": 2839,
   "total_files_processed": 13,
   "total_bytes_processed": 30386991,
   "total_duration": 1.0223828,
   "snapshot_id": "e9531e66"
 }
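
To pull individual numbers out of that summary for reporting, the same dump can be piped through a jq filter, for example:

$ restic -r s3:s3.amazonaws.com/restic-windows-backup-poc.<domain>.com dump latest /C/Software/restic-backup/jobs/desktop-l0qamrb/2020-03-10-1302-restic-backup.json | jq -r '"\(.snapshot_id): \(.total_files_processed) files, \(.total_bytes_processed) bytes, \(.total_duration)s"'
e9531e66: 13 files, 30386991 bytes, 1.0223828s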

Here is restic-backup.ps1. Note the separate restic-keys.ps1 file holding the restic variables and the encryption key, of course. I am also doing forget/prune here, but that should really run as a weekly job.

##################################################################
#Custom variables
. .\restic-keys.ps1
$DateStr = $(get-date -f yyyy-MM-dd-HHmm)
$server = $env:COMPUTERNAME.ToLower()
$logtop = "jobs"
$restichome = "C:\Software\restic-backup"
###################################################################

if ( -not (Test-Path -Path "$restichome\${logtop}\${server}" -PathType Container) ) 
{ 
   New-Item -ItemType directory -Path $restichome\${logtop}\${server} 
}

$jsonfilefull = ".\${logtop}\${server}\${DateStr}-restic-backup-full.json"
$jsonfilesummary = ".\${logtop}\${server}\${DateStr}-restic-backup.json"

.\restic.exe backup $restichome Y:\Docs\ --exclude $restichome\$logtop --tag prod --exclude 'omc\**' --quiet --json | Out-File ${jsonfilefull} -encoding ascii

#Get-Content ${jsonfilefull} | .\jq-win64.exe -r 'select(.message_type==\"summary\")' | Out-file ${jsonfilesummary} -encoding ascii
cat ${jsonfilefull} | Select-String -Pattern summary | Out-file ${jsonfilesummary} -encoding ascii -NoNewline
del ${jsonfilefull}

#.\rclone --config rclone.conf copy .\${logtop} s3_ash:restic-backup-logs
.\restic.exe backup $restichome\$logtop --tag logs --quiet

del ${jsonfilesummary}

.\restic.exe forget -q --prune --keep-hourly 5 --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --keep-yearly 5

Restic recover OS

My test to recover an Ubuntu server OS from a backup.

Note the following:

  • I used Ubuntu 20.04 (focal) which is still beta at the time of this POC. In theory Ubuntu 18.04 should work the same or better.
  • For an OS recovery I documented the backup elsewhere. It was something like this for me, and yours will vary of course:
    restic backup --exclude={/dev/*,/media,/mnt/*,/proc/*,/run/*,/sys/*,/tmp/*,/var/tmp/*,/swapfile} / /dev/{console,null}
  • For the partition recovery I saved the partition table on the source server to a file for easy copy/paste during the recovery: sfdisk -d /dev/sda > /tmp/partition-table
  • I tested restic repos with both sftp and AWS S3.
  • Used a VirtualBox VM named u20.04-restic-os-restored. I made the recovery server disk 15G (5G larger than the original 10G where the backup was taken).
  • My POC consists of a very simple disk layout, i.e. one ext4 partition only, just the default layout from this Ubuntu 20.04 desktop install. Complicated boot disk layouts may be very different, and I am not interested in recovering servers with complicated OS disk layouts. To me that does not fit with modern infrastructure and concepts like auto scaling. Boot disks should be lean and easily recovered/provisioned through scripting, with configuration applied by configuration management tools.
  • Boot the live CD, set the ubuntu user password, and install and start ssh so we can ssh in; that makes copy/paste etc. much easier.
  • Abbreviated commands (removed most output to shorten)
$ ssh ubuntu@192.168.1.160
$ sudo -i

# export AWS_ACCESS_KEY_ID=<secret..>
# export AWS_SECRET_ACCESS_KEY=<secret..>
# export RESTIC_PASSWORD=<secret..>
# export RESTIC_REPOSITORY="sftp:rr@192.168.1.111:/ARCHIVE/restic-os-restore-poc"
# cd /usr/local/bin/
# wget https://github.com/restic/restic/releases/download/v0.9.6/restic_0.9.6_linux_amd64.bz2
# bzip2 -d restic_0.9.6_linux_amd64.bz2 
# mv restic_0.9.6_linux_amd64 restic
# chmod +x restic 
# mkdir /mnt/restore
# sfdisk /dev/sda < partition-table
# mkfs.ext4 /dev/sda1
# mount /dev/sda1 /mnt/restore/
# /usr/local/bin/restic snapshots
# time /usr/local/bin/restic restore latest -t /mnt/restore --exclude '/etc/fstab' --exclude '/etc/crypttab' --exclude '/boot/grub/grub.cfg' --exclude '/etc/default/grub'

# mount --bind /dev /mnt/restore/dev
# mount -t sysfs sys /mnt/restore/sys
# mount -t proc proc /mnt/restore/proc
# mount -t devpts devpts /mnt/restore/dev/pts
# mount -t tmpfs tmp /mnt/restore/tmp
# mount --rbind /run /mnt/restore/run

# chroot /mnt/restore /bin/bash
# lsblk | grep sda
# grub-install /dev/sda
# update-grub
# blkid | grep sda

# UUID=$(blkid | grep sda1 | cut -d' ' -f2 | cut -d\" -f2)
# echo "UUID=$UUID / ext4    errors=remount-ro 0       1" > /etc/fstab

# sync
# exit
# init 0

Note:

New server booted and worked, but the GNOME login for the ubuntu account stalled. This fixed it: dconf reset -f /org/gnome/

My restic backup command works, but just for reference: since restic has no include flag, rsync seems to have better exclude/include functionality, with syntax like this: rsync --include=/dev/{console,null} --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/media/*,/lost+found}

Ubuntu server 20.04 zfs root and OCI

My experiment to:

  • create an Ubuntu 20.04 (not final release as of Feb 2020) server in VirtualBox
  • setup server with a ZFS root disk
  • enable serial console
  • Virtualbox export to OCI

As you probably know, newer desktop versions of Ubuntu offer ZFS for the root volume during installation. I am not sure if that is true for Ubuntu 20.04 server installs, and when I looked at the 19.10 ZFS installation I did not necessarily want the ZFS layout it used. My experiment covers my custom case and was also tested on Ubuntu 16.04 and 18.04.

Note this is an experiment, and the ZFS layout, boot partition type, LUKS, EFI, multiple (mirrored) boot disks and netplan are all debatable configurations. Mine may not be ideal, but it works for my use case.

Also my goal here was to export a bootable/usable OCI (Oracle Cloud Infrastructure) compute instance.

Start by booting a recent desktop live CD. Since I am testing 20.04 (focal) I used that. In the live CD environment open a terminal, sudo to root, and apt install ssh. Start the ssh service and set the ubuntu user password.

$ ssh ubuntu@192.168.1.142
$ sudo -i

## enable universe and install the tools needed in the live environment
apt-add-repository universe
apt update
apt install --yes debootstrap gdisk zfs-initramfs

## find correct device name for below
DISK=/dev/disk/by-id/ata-VBOX_HARDDISK_VB26c080f2-2bd16227
USER=ubuntu
HOST=server
POOL=ubuntu

## wipe the disk and create BIOS boot, EFI, boot pool and root pool partitions
sgdisk --zap-all $DISK
sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK
sgdisk     -n2:1M:+512M   -t2:EF00 $DISK
sgdisk     -n3:0:+1G      -t3:BF01 $DISK
sgdisk     -n4:0:0        -t4:BF01 $DISK
sgdisk --print $DISK

## create the boot pool (bpool, GRUB-compatible feature set only) and the root pool (rpool)
zpool create -o ashift=12 -d \
    -o feature@async_destroy=enabled \
    -o feature@bookmarks=enabled \
    -o feature@embedded_data=enabled \
    -o feature@empty_bpobj=enabled \
    -o feature@enabled_txg=enabled \
    -o feature@extensible_dataset=enabled \
    -o feature@filesystem_limits=enabled \
    -o feature@hole_birth=enabled \
    -o feature@large_blocks=enabled \
    -o feature@lz4_compress=enabled \
    -o feature@spacemap_histogram=enabled \
    -o feature@userobj_accounting=enabled \
    -O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \
    -O normalization=formD -O relatime=on -O xattr=sa \
    -O mountpoint=/ -R /mnt bpool ${DISK}-part3

zpool create -o ashift=12 \
    -O acltype=posixacl -O canmount=off -O compression=lz4 \
    -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
    -O mountpoint=/ -R /mnt rpool ${DISK}-part4

zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=off -o mountpoint=none bpool/BOOT

zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu

zfs create -o canmount=noauto -o mountpoint=/boot bpool/BOOT/ubuntu
zfs mount bpool/BOOT/ubuntu

## Note: I skipped creating datasets for home, root, var/lib/ /var/log etc etc

## install a minimal focal system into /mnt and disallow device files on rpool
debootstrap focal /mnt
zfs set devices=off rpool

## basic netplan DHCP config
cat > /mnt/etc/netplan/01-netcfg.yaml<< EOF
network:
  version: 2
  ethernets:
    enp0s3:
      dhcp4: true
EOF

## minimal apt sources for the new system
cat > /mnt/etc/apt/sources.list<< EOF
deb http://archive.ubuntu.com/ubuntu focal main universe
EOF

## bind the virtual filesystems and chroot into the new system
mount --rbind /dev  /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys  /mnt/sys
chroot /mnt /usr/bin/env DISK=$DISK bash --login

## locale and timezone
locale-gen --purge "en_US.UTF-8"
update-locale LANG=en_US.UTF-8 LANGUAGE=en_US
dpkg-reconfigure --frontend noninteractive locales
echo "US/Central" > /etc/timezone    
dpkg-reconfigure -f noninteractive tzdata

## set the root password
passwd

## kernel, ZFS initramfs support and grub
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes grub-pc
grub-probe /boot

## "update-initramfs -u -k all" does not work here; target the specific kernel version instead
KERNEL=`ls /usr/lib/modules/ | cut -d/ -f1 | sed 's/linux-image-//'`
update-initramfs -u -k $KERNEL

# edit /etc/default/grub
GRUB_DEFAULT=0
#GRUB_TIMEOUT_STYLE=hidden
GRUB_TIMEOUT=5
GRUB_CMDLINE_LINUX_DEFAULT=""
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/ubuntu console=tty1 console=ttyS0,115200"
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"

## write the grub config and install grub to the disk
update-grub
grub-install $DISK

## systemd unit to import bpool at boot
cat > /etc/systemd/system/zfs-import-bpool.service<< EOF
[Unit]
  DefaultDependencies=no
  Before=zfs-import-scan.service
  Before=zfs-import-cache.service
    
[Service]
  Type=oneshot
  RemainAfterExit=yes
  ExecStart=/sbin/zpool import -N -o cachefile=none bpool
    
[Install]
  WantedBy=zfs-import.target
EOF

systemctl enable zfs-import-bpool.service

## mount /boot via fstab (legacy mountpoint) and snapshot the clean install
zfs set mountpoint=legacy bpool/BOOT/ubuntu
echo bpool/BOOT/ubuntu /boot zfs \
    nodev,relatime,x-systemd.requires=zfs-import-bpool.service 0 0 >> /etc/fstab
zfs snapshot bpool/BOOT/ubuntu@install
zfs snapshot rpool/ROOT/ubuntu@install

## enable serial console login and ssh
systemctl enable serial-getty@ttyS0
apt install ssh
systemctl enable ssh

## leave the chroot
exit
## unmount everything under /mnt (deepest first) and export the pools
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export -a

## reboot into the new system
reboot

** detach the live CD

NOTE: On reboot GRUB came up with a prompt but no menu options. The following fixed it:

KERNEL=`ls /usr/lib/modules/ | cut -d/ -f1 | sed 's/linux-image-//'`
update-initramfs -u -k $KERNEL
update-grub

REF: https://github.com/zfsonlinux/zfs/wiki/Ubuntu-18.04-Root-on-ZFS

Regex Search Lines Add Quotes

A regex to search a whole file and, if a line contains the search word, add beginning and ending quotes. In this case I am doing it in vi(m).

This works to add beginning and ending quotes to ALL lines in the file:
:%s/^\(.*\)$/"\1"/

or simpler
:%s/.*/"&"

Explanation: By default a pattern match is greedy, so .* matches the whole line, with no need for ^ and $. And while \(...\) can be useful for selecting part of a pattern, it is not needed when you want the whole match, which is represented by & in the substitution. The final / in a search or substitution is not needed unless something else follows it.

However, we do NOT want all lines in the file, only the lines containing the search word.

This works. I am not sure how reliable it is without the ^ and $ anchors, but it seems fine in my limited testing.
:%s/.*mysearchword.*/"&"
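
Outside vim the same substitution works with sed, for example (a sketch; mysearchword and file.txt are placeholders):

$ sed 's/.*mysearchword.*/"&"/' file.txt

sed passes non-matching lines through unchanged and, like vim, & stands for the whole match; add -i to edit the file in place.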

Formulas for bytes and duration

Capturing a couple of formulas for future reference. I was using restic backup stats and wanted to convert the time a job ran into hours and minutes, and also convert bytes processed to TB.

bytes to TB example:
total_bytes_processed: 1502888851889 / 1000000000000 = 1.502888851889 TB

duration to HH:MM example using LibreOffice Calc:
total_duration: 15870.197288027 / 86400, then Format Cells > Time > HH:MM:SS will show 04:24:30
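
The same conversions are easy in the shell if you want them in a report script (a sketch using the numbers above):

$ awk -v b=1502888851889 'BEGIN { printf "%.3f TB\n", b/1e12 }'
1.503 TB
$ secs=15870
$ printf '%02d:%02d:%02d\n' $((secs/3600)) $((secs%3600/60)) $((secs%60))
04:24:30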

Bash Read Json Config File

A couple of things here. I wanted to write some restic scripts and at the same time use a configuration file. The restic developers are working on this functionality for restic, possibly using TOML.

Meanwhile I tried JSON, since I can definitely use bash/JSON for other applications. And as you know, bash is not great at this kind of thing, specifically arrays etc. So this example reads a configuration file and processes the JSON. To further complicate things, my JSON typically needs arrays (lists of values), as you can see in the restic example with folders, excludes and tags.

You will also note a unique problem with bash. When you pipe into a while loop, the loop runs in a subshell, and you cannot set variables in your main shell; so appending to a variable inside the while loop would produce nothing. Since bash 4.2 you can use "shopt -s lastpipe" to get around this. Apparently this is not a problem with ksh.
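
A minimal demo of that subshell behavior and the lastpipe fix (assumes bash >= 4.2; lastpipe only takes effect when job control is off, which is the default in scripts):

count=0
printf 'a\nb\n' | while read -r line; do count=$((count+1)); done
echo $count   # prints 0 - the while loop ran in a subshell

shopt -s lastpipe
count=0
printf 'a\nb\n' | while read -r line; do count=$((count+1)); done
echo $count   # prints 2 - the loop body ran in the current shell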

This is not a working restic script; it is a script to read a configuration file. It just happens to be for something I am going to do with restic.

Example json config file.

$ cat restic-jobs.json 
{ "Jobs":
  [
   {
    "jobname": "aws-s3",
    "repo": "sftp:myuser@192.168.1.112:/TANK/RESTIC-REPO",
    "sets":
      [
       {
        "folders": [ "/DATA" ],
        "excludes": [ ".snapshots","temp"],
        "tags": [ "data","biz" ]
       },
       {
        "folders": [ "/ARCHIVE" ],
        "excludes": [ ".snapshots","temp"],
        "tags": [ "archive","biz" ]
       }
      ],
      "quiet": "true"
    },
    {
     "jobname": "azure-onedrive",
     "repo":  "rclone:azure-onedrive:restic-backups",
     "sets":
       [
       {
        "folders": [ "/DATA" ],
        "excludes": [ ".snapshots","temp"],
        "tags": [ "data","biz" ]
       },
       {
        "folders": [ "/ARCHIVE" ],
        "excludes": [ ".snapshots","temp"],
        "tags": [ "archive","biz" ]
       }
      ],
     "quiet": "true"
    }
  ]
} 

Script details.

cat restic-jobs.sh 
#!/bin/bash
#v0.9.1

JOB="aws-s3"
eval "$(jq --arg JOB ${JOB} -r '.Jobs[] | select(.jobname==$JOB) | del(.sets) | to_entries[] | .key + "=\"" + .value + "\""' restic-jobs.json)"
if [[ "$jobname" == "" ]]; then
  echo "no job found in config: " $JOB
  exit
fi

echo "found: $jobname"

#sets=$(jq --arg JOB ${JOB} -r '.Jobs[] | select (.jobname==$JOB) | .sets | .[]' restic-jobs.json )

echo

sets=$(jq --arg JOB ${JOB} -r '.Jobs[] | select (.jobname==$JOB)' restic-jobs.json)

backup_jobs=()
## needed because of the bash subshell issue with pipes into while loops (see above)
shopt -s lastpipe

echo "$sets" | jq -rc '.sets[]' | while IFS='' read -r set; do
    cmd_line="restic backup -q --json "

    folders=$(echo "$set" | jq -r '.folders | .[]')
    for st in $folders; do cmd_line+=" $st"; done
    excludes=$(echo "$set" | jq -r '.excludes | .[]')
    for st in $excludes; do cmd_line+=" --exclude $st"; done
    tags=$(echo "$set" | jq -r '.tags | .[]')
    for st in $tags; do cmd_line+=" --tag $st"; done

    backup_jobs+=("$cmd_line")
done

for i in "${backup_jobs[@]}"; do
  echo "cmd_line: $i"
done

Script run example. Note I am not passing the job name, just hard-coding it at the top for my test.
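
If you do want to pass the job name on the command line, a one-line tweak would do it (hypothetical; the rest of the script stays the same):

JOB="${1:-aws-s3}"   # take the job name from the first argument, default to aws-s3

./restic-jobs.sh azure-onedrive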

./restic-jobs.sh 
found: aws-s3

cmd_line: restic backup -q --json  /DATA --exclude .snapshots --exclude temp --tag data --tag biz
cmd_line: restic backup -q --json  /ARCHIVE --exclude .snapshots --exclude temp --tag data --tag biz

Bash array json restic snapshots and jq

As you know, bash is not ideal with multi-dimensional arrays. Frequently I find myself wanting to read something like JSON into bash and loop over it. There are many ways to do this, including readarray etc., but I found the following works best for me. Note JSON can have lists, so I collapse those with jq's join. Example:

# cat restic-loop-snaps.sh 
#!/bin/bash

function loopOverArray(){

   restic snapshots --json | jq -c '.[]' | while read -r i; do
	id=$(echo "$i" | jq -r '.short_id')
	ctime=$(echo "$i" | jq -r '.time')
	hostname=$(echo "$i" | jq -r '.hostname')
	paths=$(echo "$i" | jq -r '.paths | join(",")')
	tags=$(echo "$i" | jq -r '.tags | join(",")')
	printf "%-10s - %-40s - %-20s - %-30s - %-20s\n" "$id" "$ctime" "$hostname" "$paths" "$tags"
    done
    done
}

loopOverArray

# ./restic-loop-snaps.sh 
0a71b5d4   - 2019-05-31T05:03:20.655922639-05:00      - pop-os               - /DATA/MyWorkDocs        -                     
...

restic set tags

As a follow-up to the previous post (https://blog.ls-al.com/restic-create-backup-and-set-tag-with-date-logic), here is some code I used to set tags on old snapshots to comply with my new tagging and pruning scheme.

# cat backup-tags-set.sh
#!/bin/bash

# decide the tag from the snapshot date: weekly on Sundays, monthly on the 1st, yearly on Jan 1st, otherwise daily
create_tag () {
  tag="daily"
  if [ $(date -d "$1" +%a) == "Sun" ]; then tag="weekly" ; fi
  if [ $(date -d "$1" +%d) == "01" ]; then 
   tag="monthly"
   if [ $(date -d "$1" +%b) == "Jan" ]; then
     tag="yearly"
   fi
  fi
}
create_tag "$(date +%F)"   # with today's date, shows today's backup policy
echo "backup policy: " $tag

#source /root/.restic.env
snapshotids=$(restic snapshots -c | egrep -v "ID|snapshots|--" | awk '//{print $1;}')
for snapshotid in $snapshotids
do
  snapdate=$(restic snapshots $snapshotid -c | egrep -v "ID|snapshots|--" | awk '//{print $2;}')
  create_tag $snapdate
  echo "Making a tag for: $snapshotid - $snapdate - $(date -d $snapdate +%a) - $tag"
  restic tag --set $tag $snapshotid
done

# ./backup-tags-set.sh 
backup policy:  daily
Making a tag for: 0b88eefa - 2019-03-27 - Wed - daily
repository 00cde088 opened successfully, password is correct
create exclusive lock for repository
modified tags on 1 snapshots
Making a tag for: d76811ac - 2019-03-27 - Wed - daily
repository 00cde088 opened successfully, password is correct
create exclusive lock for repository
modified tags on 1 snapshots