Restic PowerShell Script

Just for my own reference, here is my quick and dirty Windows backup script for restic. I left some of the rclone and jq lines in, commented out, since they may be helpful depending on how you handle logging. In one project I pushed the summary JSON output to an S3 bucket. In this version I run a second restic job to back up the log, since the initial job cannot contain a log that is still being generated, of course.

With this way of logging, i.e. keeping the logs in restic rather than in rclone/jq/buckets, you can report by dumping the log from the latest snapshot like so:

$ restic -r s3:s3.amazonaws.com/restic-windows-backup-poc.<domain>.com dump latest /C/Software/restic-backup/jobs/desktop-l0qamrb/2020-03-10-1302-restic-backup.json | jq
 {
   "message_type": "summary",
   "files_new": 0,
   "files_changed": 1,
   "files_unmodified": 12,
   "dirs_new": 0,
   "dirs_changed": 2,
   "dirs_unmodified": 3,
   "data_blobs": 1,
   "tree_blobs": 3,
   "data_added": 2839,
   "total_files_processed": 13,
   "total_bytes_processed": 30386991,
   "total_duration": 1.0223828,
   "snapshot_id": "e9531e66"
 }
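
If jq is not available (the PowerShell script below falls back to Select-String for the same reason), a single field can be pulled from the summary line with standard tools. A rough sketch with an inline sample line; real JSON parsing is preferable:

```shell
# Extract snapshot_id from a restic summary line without jq (sample JSON inline)
summary='{"message_type":"summary","snapshot_id":"e9531e66","files_new":0}'
snapshot_id=$(printf '%s' "$summary" | sed -n 's/.*"snapshot_id": *"\([^"]*\)".*/\1/p')
echo "$snapshot_id"   # prints e9531e66
```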

Here is restic-backup.ps1. Note the hidden file holding the restic variables and encryption key, of course. I am also doing forget/prune here, but that should really be a separate weekly job.

##################################################################
#Custom variables
. .\restic-keys.ps1
$DateStr = $(get-date -f yyyy-MM-dd-HHmm)
$server = $env:COMPUTERNAME.ToLower()
$logtop = "jobs"
$restichome = "C:\Software\restic-backup"
###################################################################

if ( -not (Test-Path -Path "$restichome\${logtop}\${server}" -PathType Container) ) 
{ 
   New-Item -ItemType directory -Path $restichome\${logtop}\${server} 
}

$jsonfilefull = ".\${logtop}\${server}\${DateStr}-restic-backup-full.json"
$jsonfilesummary = ".\${logtop}\${server}\${DateStr}-restic-backup.json"

.\restic.exe backup $restichome Y:\Docs\ --exclude $restichome\$logtop --tag prod --exclude 'omc\**' --quiet --json | Out-File ${jsonfilefull} -encoding ascii

#Get-Content ${jsonfilefull} | .\jq-win64.exe -r 'select(.message_type==\"summary\")' | Out-file ${jsonfilesummary} -encoding ascii
cat ${jsonfilefull} | Select-String -Pattern summary | Out-file ${jsonfilesummary} -encoding ascii -NoNewline
del ${jsonfilefull}

#.\rclone --config rclone.conf copy .\${logtop} s3_ash:restic-backup-logs
.\restic.exe backup $restichome\$logtop --tag logs --quiet

del ${jsonfilesummary}

.\restic forget -q --prune --keep-hourly 5 --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --keep-yearly 5

Restic Set Tags

In a follow-up to a previous post, https://blog.ls-al.com/restic-create-backup-and-set-tag-with-date-logic, here is some code I used to set tags on old snapshots so they comply with my new tagging and pruning policy.

# cat backup-tags-set.sh
#!/bin/bash

create_tag () {
  tag="daily"
  if [ $(date -d "$1" +%a) == "Sun" ]; then tag="weekly" ; fi
  if [ $(date -d "$1" +%d) == "01" ]; then 
   tag="monthly"
   if [ $(date -d "$1" +%b) == "Jan" ]; then
     tag="yearly"
   fi
  fi
}
# sanity check: with no argument, GNU date -d "" resolves to today
create_tag
echo "backup policy: " $tag

#source /root/.restic.env
snapshotids=$(restic snapshots -c | egrep -v "ID|snapshots|--" | awk '//{print $1;}')
for snapshotid in $snapshotids
do
  snapdate=$(restic snapshots $snapshotid -c | egrep -v "ID|snapshots|--" | awk '//{print $2;}')
  create_tag $snapdate
  echo "Making a tag for: $snapshotid - $snapdate - $(date -d $snapdate +%a) - $tag"
  restic tag --set $tag $snapshotid
done

# ./backup-tags-set.sh 
backup policy:  daily
Making a tag for: 0b88eefa - 2019-03-27 - Wed - daily
repository 00cde088 opened successfully, password is correct
create exclusive lock for repository
modified tags on 1 snapshots
Making a tag for: d76811ac - 2019-03-27 - Wed - daily
repository 00cde088 opened successfully, password is correct
create exclusive lock for repository
modified tags on 1 snapshots

Restic Create Backup and Set Tag with Date Logic

Also see a previous post, https://blog.ls-al.com/bash-date-usage-for-naming, if you are interested. This post is similar but more specific to restic tagging.

Below is a test script and a test run. At restic backup time I create a tag so that I can later do snapshot forget based on tags.

root@pop-os:/tmp# cat backup-tags.sh 
#!/bin/bash

create_tag () {
  tag="daily"
  if [ $(date +%a) == "Sun" ]; then tag="weekly" ; fi
  if [ $(date +%d) == "01" ]; then 
   tag="monthly"
   if [ $(date +%b) == "Jan" ]; then
     tag="yearly"
   fi
  fi
}
create_tag
echo "backup policy: " $tag

create_tag_unit_test () {
  for i in {1..95}
  do 
      tdate=$(date -d "+$i day")
      tag="daily"
      if [ $(date -d "+$i day" +%a) == "Sun" ]; then tag="weekly" ; fi
      if [ $(date -d "+$i day" +%d) == "01" ]; then
      tag="monthly"
        if [ $(date -d "+$i day" +%b) == "Jan" ]; then
          tag="yearly"
        fi
      fi
  printf "%s - %s - %s | " "$(date -d "+$i day" +%d)" "$(date -d "+$i day" +%a)" "$tag" 
  if [ $(( $i %5 )) -eq 0 ]; then printf "\n"; fi
  done
}
create_tag_unit_test

root@pop-os:/tmp# ./backup-tags.sh 
backup policy:  daily
22 - Fri - daily      | 23 - Sat - daily      | 24 - Sun - weekly     | 25 - Mon - daily      | 26 - Tue - daily      | 
27 - Wed - daily      | 28 - Thu - daily      | 29 - Fri - daily      | 30 - Sat - daily      | 01 - Sun - monthly    | 
02 - Mon - daily      | 03 - Tue - daily      | 04 - Wed - daily      | 05 - Thu - daily      | 06 - Fri - daily      | 
07 - Sat - daily      | 08 - Sun - weekly     | 09 - Mon - daily      | 10 - Tue - daily      | 11 - Wed - daily      | 
12 - Thu - daily      | 13 - Fri - daily      | 14 - Sat - daily      | 15 - Sun - weekly     | 16 - Mon - daily      | 
17 - Tue - daily      | 18 - Wed - daily      | 19 - Thu - daily      | 20 - Fri - daily      | 21 - Sat - daily      | 
22 - Sun - weekly     | 23 - Mon - daily      | 24 - Tue - daily      | 25 - Wed - daily      | 26 - Thu - daily      | 
27 - Fri - daily      | 28 - Sat - daily      | 29 - Sun - weekly     | 30 - Mon - daily      | 31 - Tue - daily      | 
01 - Wed - yearly     | 02 - Thu - daily      | 03 - Fri - daily      | 04 - Sat - daily      | 05 - Sun - weekly     | 
06 - Mon - daily      | 07 - Tue - daily      | 08 - Wed - daily      | 09 - Thu - daily      | 10 - Fri - daily      | 
11 - Sat - daily      | 12 - Sun - weekly     | 13 - Mon - daily      | 14 - Tue - daily      | 15 - Wed - daily      | 
16 - Thu - daily      | 17 - Fri - daily      | 18 - Sat - daily      | 19 - Sun - weekly     | 20 - Mon - daily      | 

Below is the restic backup script setting a tag and then snapshot forget based on the tag.

As always, this is NOT tested; use at your own risk.

My “policy” is:

  • weekly on Sunday
  • the 01 of every month is a monthly, except when that 01 is also a new year, which makes it a yearly
  • everything else is a daily

root@pop-os:~/scripts# cat desktop-restic.sh 
#!/bin/bash
### wake up backup server and restic backup to 3TB ZFS mirror
cd /root/scripts
./wake-backup-server.sh

source /root/.restic.env

## Quick and dirty logic for snapshot tagging
create_tag () {
  tag="daily"
  if [ $(date +%a) == "Sun" ]; then tag="weekly" ; fi
  if [ $(date +%d) == "01" ]; then
   tag="monthly"
   if [ $(date +%b) == "Jan" ]; then
     tag="yearly"
   fi
  fi
}

create_tag
restic backup -q /DATA /ARCHIVE --tag "$tag" --exclude '*.vdi' --exclude '*.iso' --exclude '*.ova' --exclude '*.img' --exclude '*.vmdk'

restic forget -q --tag daily --keep-last 7
restic forget -q --tag weekly --keep-last 4
restic forget -q --tag monthly --keep-last 12

if [ "$tag" == "weekly" ]; then
  restic -q prune
fi

sleep 1m
ssh user@192.168.1.250 sudo shutdown now
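
The script sources /root/.restic.env for the repository location and credentials. A minimal sketch of such a file, with every value hypothetical:

```shell
# /root/.restic.env -- example values only; replace all of them
export RESTIC_REPOSITORY="s3:s3.amazonaws.com/restic-backups-example"
export RESTIC_PASSWORD="replace-with-a-real-password"
export AWS_ACCESS_KEY_ID="AKIAEXAMPLE"
export AWS_SECRET_ACCESS_KEY="example-secret-key"
```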

Quick Backup and Purge

I highly recommend using restic instead of what I am talking about here.

Mostly I am just documenting this for my own reference; this is not a great backup solution by any means. Also note:

  1. This script creates backups locally; the idea would be to adapt it to use NFS or, even better, object storage.
  2. This is just a starting point, for example if you want to back up very small datasets (like /etc) and also purge older backups.
  3. Adapt it to your own policies; I have used roughly a gold policy here (7 daily, 4 weekly, 12 monthly, 5 yearly).
  4. Purging should perhaps be done by actual file dates rather than by counting.
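
The age-based purging idea could be sketched with find, expiring by file modification time instead of by count; the retention windows and the path here are illustrative:

```shell
#!/bin/bash
# Purge by file age rather than by counting newest-first:
# dailies older than 7 days, weeklies older than 28, monthlies older than 365.
BACKUP_DIR="/tmp/MyBackups"   # example path
mkdir -p "$BACKUP_DIR"
find "$BACKUP_DIR" -name '*-files-DAILY-*.tgz'   -mtime +7   -print -delete
find "$BACKUP_DIR" -name '*-files-WEEKLY-*.tgz'  -mtime +28  -print -delete
find "$BACKUP_DIR" -name '*-files-MONTHLY-*.tgz' -mtime +365 -print -delete
```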
#!/usr/bin/python
#
#: Script Name  : tarBak.py
#: Author       : Riaan Rossouw
#: Date Created : March 13, 2019
#: Date Updated : March 13, 2019
#: Description  : Python Script to manage tar backups
#: Examples     : tarBak.py -t target -f folders -c
#:              : tarBak.py --target <backup folder> --folders <folders> --create

import optparse, os, glob, sys, re, datetime
import tarfile
import socket

__version__ = '0.9.1'
optdesc = 'This script is used to manage tar backups of files'

parser = optparse.OptionParser(description=optdesc,version=os.path.basename(__file__) + ' ' + __version__)
parser.formatter.max_help_position = 50
parser.add_option('-t', '--target', help='Specify Target', dest='target', action='append')
parser.add_option('-f', '--folders', help='Specify Folders', dest='folders', action='append')
parser.add_option('-c', '--create', help='Create a new backup', dest='create', action='store_true',default=False)
parser.add_option('-p', '--purge', help='Purge older backups per policy', dest='purge', action='store_true',default=False)
parser.add_option('-g', '--group', help='Policy group', dest='group', action='append')
parser.add_option('-l', '--list', help='List backups', dest='listall', action='store_true',default=False)
opts, args = parser.parse_args()

def make_tarfile(output_filename, source_dirs):
  with tarfile.open(output_filename, "w:gz") as tar:
    for source_dir in source_dirs:
      tar.add(source_dir, arcname=os.path.basename(source_dir))

def getBackupType(backup_time_created):
  utc,mt = str(backup_time_created).split('.')
  d = datetime.datetime.strptime(utc, '%Y-%m-%d %H:%M:%S').date()
  dt = d.strftime('%a %d %B %Y')

  # check yearly/monthly first; otherwise the YEARLY branch is unreachable
  # (also: the attribute is d.month, not d.mon)
  if (d.day == 1) and (d.month == 1):
    backup_t = 'YEARLY'
  elif d.day == 1:
    backup_t = 'MONTHLY'
  elif d.weekday() == 6:
    backup_t = 'WEEKLY'
  else:
    backup_t = 'DAILY'

  return (backup_t,dt)

def listBackups(target):
  print ("Listing backup files..")

  files = glob.glob(target + "*DAILY*")
  files.sort(key=os.path.getmtime, reverse=True)

  for file in files:
    print(file)
  
def purgeBackups(target, group):
  print ("Purging backup files..this needs testing and more logic for SILVER and BRONZE policies?")

  files = glob.glob(target + "*.tgz*")
  files.sort(key=os.path.getmtime, reverse=True)
  daily = 0
  weekly = 0
  monthly = 0
  yearly = 0
 
  for file in files:
    comment = ""
    if ( ("DAILY" in file) or ("WEEKLY" in file) or ("MONTHLY" in file) or ("YEARLY" in file) ):
      # pull the type out of e.g. host-files-DAILY-20190313-1420CST.tgz
      # (the original pattern hardcoded the year 2019)
      sub = re.search(r'files-([A-Z]+)-', file)
      t = sub.group(1)
    else:
      t = "MANUAL"

    if t == "DAILY":
      comment = "DAILY"
      daily = daily + 1
      if daily > 7:
        comment = comment + " this one is more than 7 deleting"
        os.remove(file)
    elif t == "WEEKLY":
      comment = "Sun"
      weekly = weekly + 1
      if weekly > 4:
        comment = comment + " this one is more than 4 deleting"
        os.remove(file)
    elif t  == "MONTHLY":
      comment = "01"
      monthly = monthly + 1
      if monthly > 12:
       comment = comment + " this one is more than 12 deleting"
       os.remove(file)
    elif t  == "YEARLY":
      comment = "01"
      yearly = yearly + 1
      if yearly > 5:
       comment = comment + " this one is more than 5 deleting"
       os.remove(file)
    else:
      comment = " manual snapshot not purging"
      
    if  "this one " in comment:
      print ('DELETE: {:25}: {:25}'.format(file, comment) )

def createBackup(target, folders, group):
  print ("creating backup of " + str(folders))
  hostname = socket.gethostname()
  creationDate = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S.0")
  t,ds = getBackupType(creationDate)
  BackupName = target + "/" + hostname + '-files-' + t + "-" + datetime.datetime.now().strftime("%Y%m%d-%H%MCST") + '.tgz'

  proceed = "SNAPSHOT NOT NEEDED AT THIS TIME PER THE POLICY"
  if (group == "BRONZE") and ((t == "MONTHLY") or (t == "YEARLY")):
    proceed = "CLEAR TO SNAP"
  elif (group == "SILVER") and ((t == "WEEKLY") or (t == "MONTHLY") or (t == "YEARLY")):
    proceed = "CLEAR TO SNAP"
  elif group == "GOLD":
    proceed = "CLEAR TO SNAP"

  # only snapshot when the policy says so
  if proceed == "CLEAR TO SNAP":
    make_tarfile(BackupName, folders)
  else:
    print (proceed)

def main():
  if opts.target:
    target = opts.target[0]
  else:
    print ("\n\n must specify target folder")
    exit(0)

  if opts.listall:
    listBackups(target)
  else:
    if opts.create:
      if opts.folders:
        folders = opts.folders[0].split(',')
      else:
        print ("\n\n must specify folders")
        exit(0)
      createBackup(target, folders, opts.group[0] if opts.group else "GOLD")  # default policy if -g omitted

    if opts.purge:
      purgeBackups(target, opts.group[0] if opts.group else "GOLD")

if __name__ == '__main__':
  main()

Example cron entry. Use root if you need to back up files only accessible as root.

$ crontab -l | tail -1
0 5 * * * cd /Src/tarBak/ ; python tarBak.py -t /tmp/MyBackups/ -f '/home/rrosso,/var/spool/syslog' -c 2>&1

Python Tar Backup and Purge

While I was working on a related project using Python to write to cloud object storage, and on the logic around purging, I jotted down some quick and dirty code here for my reference to build on. Normally I would recommend the excellent restic program, but in this case I am forced to use native APIs.

This serves as a reminder for myself; it is only a very elementary tar-plus-gzip daily backup with subsequent purging of old backups. Just a test.

#!/usr/bin/python
#
#: Script Name  : tarBak.py
#: Author       : Riaan Rossouw
#: Date Created : March 13, 2019
#: Date Updated : March 13, 2019
#: Description  : Python Script to manage tar backups
#: Examples     : tarBak.py -t target -f 'folder1,folder2' -c -g GOLD
#:              : tarBak.py --target <backup folder> --folders <folders> --create

import optparse, os, glob, sys, re, datetime
import tarfile
import socket

__version__ = '0.9.1'
optdesc = 'This script is used to manage tar backups of files'

parser = optparse.OptionParser(description=optdesc,version=os.path.basename(__file__) + ' ' + __version__)
parser.formatter.max_help_position = 50
parser.add_option('-t', '--target', help='Specify Target', dest='target', action='append')
parser.add_option('-f', '--folders', help='Specify Folders', dest='folders', action='append')
parser.add_option('-c', '--create', help='Create a new backup', dest='create', action='store_true',default=False)
parser.add_option('-p', '--purge', help='Purge older backups per policy', dest='purge', action='store_true',default=False)
parser.add_option('-g', '--group', help='Policy group', dest='group', action='append')
parser.add_option('-l', '--list', help='List backups', dest='listall', action='store_true',default=False)
opts, args = parser.parse_args()

def make_tarfile(output_filename, source_dirs):
  with tarfile.open(output_filename, "w:gz") as tar:
    for source_dir in source_dirs:
      tar.add(source_dir, arcname=os.path.basename(source_dir))

def getBackupType(backup_time_created):
  utc,mt = str(backup_time_created).split('.')
  d = datetime.datetime.strptime(utc, '%Y-%m-%d %H:%M:%S').date()
  dt = d.strftime('%a %d %B %Y')

  # check yearly/monthly first; otherwise the YEARLY branch is unreachable
  # (also: the attribute is d.month, not d.mon)
  if (d.day == 1) and (d.month == 1):
    backup_t = 'YEARLY'
  elif d.day == 1:
    backup_t = 'MONTHLY'
  elif d.weekday() == 6:
    backup_t = 'WEEKLY'
  else:
    backup_t = 'DAILY'

  return (backup_t,dt)

def listBackups(target):
  print ("Listing backup files..")

  files = glob.glob(target + "*DAILY*")
  files.sort(key=os.path.getmtime, reverse=True)

  for file in files:
    print(file)
  
def purgeBackups(target, group):
  print ("Purging backup files..this needs testing and more logic for SILVER and BRONZE policies?")

  files = glob.glob(target + "*.tgz*")
  files.sort(key=os.path.getmtime, reverse=True)
  daily = 0
  weekly = 0
  monthly = 0
  yearly = 0
 
  for file in files:
    comment = ""
    if ( ("DAILY" in file) or ("WEEKLY" in file) or ("MONTHLY" in file) or ("YEARLY" in file) ):
      # pull the type out of e.g. host-files-DAILY-20190313-1420CST.tgz
      # (the original pattern hardcoded the year 2019)
      sub = re.search(r'files-([A-Z]+)-', file)
      t = sub.group(1)
    else:
      t = "MANUAL"

    if t == "DAILY":
      comment = "DAILY"
      daily = daily + 1
      if daily > 7:
        comment = comment + " this one is more than 7 deleting"
        os.remove(file)
    elif t == "WEEKLY":
      comment = "Sun"
      weekly = weekly + 1
      if weekly > 4:
        comment = comment + " this one is more than 4 deleting"
        os.remove(file)
    elif t  == "MONTHLY":
      comment = "01"
      monthly = monthly + 1
      if monthly > 12:
       comment = comment + " this one is more than 12 deleting"
       os.remove(file)
    elif t  == "YEARLY":
      comment = "01"
      yearly = yearly + 1
      if yearly > 5:
       comment = comment + " this one is more than 5 deleting"
       os.remove(file)
    else:
      comment = " manual snapshot not purging"
      
    if  "this one " in comment:
      print ('DELETE: {:25}: {:25}'.format(file, comment) )

def createBackup(target, folders, group):
  print ("creating backup of " + str(folders))
  hostname = socket.gethostname()
  creationDate = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S.0")
  t,ds = getBackupType(creationDate)
  BackupName = target + "/" + hostname + '-files-' + t + "-" + datetime.datetime.now().strftime("%Y%m%d-%H%MCST") + '.tgz'

  proceed = "SNAPSHOT NOT NEEDED AT THIS TIME PER THE POLICY"
  if (group == "BRONZE") and ((t == "MONTHLY") or (t == "YEARLY")):
    proceed = "CLEAR TO SNAP"
  elif (group == "SILVER") and ((t == "WEEKLY") or (t == "MONTHLY") or (t == "YEARLY")):
    proceed = "CLEAR TO SNAP"
  elif group == "GOLD":
    proceed = "CLEAR TO SNAP"

  # only snapshot when the policy says so
  if proceed == "CLEAR TO SNAP":
    make_tarfile(BackupName, folders)
  else:
    print (proceed)

def main():
  if opts.target:
    target = opts.target[0]
  else:
    print ("\n\n must specify target folder")
    exit(0)

  if opts.listall:
    listBackups(target)
  else:
    if opts.create:
      if opts.folders:
        folders = opts.folders[0].split(',')
      else:
        print ("\n\n must specify folders")
        exit(0)
      createBackup(target, folders, opts.group[0] if opts.group else "GOLD")  # default policy if -g omitted

    if opts.purge:
      purgeBackups(target, opts.group[0] if opts.group else "GOLD")

if __name__ == '__main__':
  main()

And running it like this:

$ python tarBak.py -t /tmp/MyBackups/ -f '/home/rrosso,/var/log/syslog' -g GOLD -c
creating backup of ['/home/rrosso', '/var/log/syslog']

$ python tarBak.py -t /tmp/MyBackups/ -p -g GOLD
Purging backup files..this needs testing and more logic for SILVER and BRONZE policies?
DELETE: /tmp/MyBackups/xubuntu32-files-DAILY-20190313-1420CST.tgz: DAILY this one is more than 7 deleting
$ crontab -l | tail -1
0 5 * * * cd /Src/tarBak/ ; python tarBak.py -t /MyBackups/ -f '/home/rrosso,/var/spool/syslog' -c -p -g GOLD 2>&1

Object Storage with Restic and Rclone

I have been playing around with some options for utilizing object storage for backups. Since I am working on Oracle Cloud Infrastructure (OCI), I am doing my POC with OCI Object Storage, which has Swift and S3 compatibility APIs to interface with. Of course, if you want commercial backups, many of them can use object storage as a back-end now, so that would be the correct answer. If your needs do not warrant a commercial backup solution, you can try several things. A few options I played with:

1. Bareos server/client with the object storage droplet. Not working reliably; too experimental with droplet?
2. Rclone, using tar piped through rclone's rcat feature. This works well but is not a real backup solution (no incrementals etc.).
3. Duplicati. In my case using rclone as the connection, since the S3 interface on OCI did not work.
4. Duplicity. I could not get this one to work against the S3 interface on OCI.
5. Restic. In my case using rclone as the connection, since the S3 interface on OCI did not work.

So far Duplicati was not bad but had some bugs; it is beta software, so problems are probably to be expected. Restic is doing a good job so far, and I show a recipe from my POC below.

Setting up rclone and rclone.conf is out of scope. Make sure you first test that rclone can access your bucket.

Restic binary

# wget https://github.com/restic/restic/releases/download/v0.9.1/restic_0.9.1_linux_amd64.bz2
2018-08-03 10:25:10 (3.22 MB/s) - ‘restic_0.9.1_linux_amd64.bz2’ saved [3786622/3786622]
# bunzip2 restic_0.9.1_linux_amd64.bz2 
# mv restic_0.9.1_linux_amd64 /usr/local/bin/
# chmod +x /usr/local/bin/restic_0.9.1_linux_amd64 
# mv /usr/local/bin/restic_0.9.1_linux_amd64 /usr/local/bin/restic
# /usr/local/bin/restic version
restic 0.9.1 compiled with go1.10.3 on linux/amd64

Initialize repo

# rclone ls s3_servers_phoenix:oci02a
# export RESTIC_PASSWORD="WRHYEjblahblah0VWq5qM"
# /usr/local/bin/restic -r rclone:s3_servers_phoenix:oci02a init
created restic repository 2bcf4f5864 at rclone:s3_servers_phoenix:oci02a

Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is
irrecoverably lost.

# rclone ls s3_servers_phoenix:oci02a
      155 config
      458 keys/530a67c4674b9abf6dcc9e7b75c6b319187cb8c3ed91e6db992a3e2cb862af63

Run a backup

# time /usr/local/bin/restic -r rclone:s3_servers_phoenix:oci02a backup /opt/applmgr/12.2
repository 2bcf4f58 opened successfully, password is correct

Files:       1200934 new,     0 changed,     0 unmodified
Dirs:            2 new,     0 changed,     0 unmodified
Added:      37.334 GiB

processed 1200934 files, 86.311 GiB in 1:31:40
snapshot af4d5598 saved

real	91m40.824s
user	23m4.072s
sys	7m23.715s

# /usr/local/bin/restic -r rclone:s3_servers_phoenix:oci02a snapshots
repository 2bcf4f58 opened successfully, password is correct
ID        Date                 Host              Tags        Directory
----------------------------------------------------------------------
af4d5598  2018-08-03 10:35:45  oci02a              /opt/applmgr/12.2
----------------------------------------------------------------------
1 snapshots

Run second backup

# /usr/local/bin/restic -r rclone:s3_servers_phoenix:oci02a backup /opt/applmgr/12.2
repository 2bcf4f58 opened successfully, password is correct

Files:           0 new,     0 changed, 1200934 unmodified
Dirs:            0 new,     0 changed,     2 unmodified
Added:      0 B  

processed 1200934 files, 86.311 GiB in 47:46
snapshot a158688a saved

Example cron entry

# crontab -l
05 * * * * /usr/local/bin/restic -r rclone:s3_servers_phoenix:oci02a backup -q /usr; /usr/local/bin/restic -r rclone:s3_servers_phoenix:oci02a forget -q --prune --keep-hourly 2 --keep-daily 7

Bash Date Usage For Naming

I am recording some scripting I used to create backup classification/retention naming. It could easily be simplified into one function, but I kept it like this so I can more easily copy and paste whichever function I need. The script is pretty self-explanatory: it takes today's date (or a given date) and names my eventual backup file based on some simple logic.

# cat test_class.sh 
HOSTNAME=$(hostname -s)
BACKUP_CLASSIFICATION="UNCLASSIFIED"

function retention_date() {
  MM=`date -d ${1} +%m`
  DD=`date -d ${1} +%d`
  DAY=`date -d ${1} +%u`

  if [ $DD == 01 ]
  then
     if [ $MM == 01 ]
     then
       BACKUP_CLASSIFICATION="YEARLY"
     else
       BACKUP_CLASSIFICATION="MONTHLY"
     fi
  else
    if (($DAY == 7)); then
     BACKUP_CLASSIFICATION="WEEKLY"
    else
     BACKUP_CLASSIFICATION="DAILY"
    fi
  fi

}

function retention_today() {
  MM=`date '+%m'`
  DD=`date '+%d'`
  DAY=`date +%u`
  
  if [ $DD == 01 ]
  then
     if [ $MM == 01 ]
     then
       BACKUP_CLASSIFICATION="YEARLY"
     else
       BACKUP_CLASSIFICATION="MONTHLY"
     fi
  else
    if (($DAY == 7)); then
     BACKUP_CLASSIFICATION="WEEKLY"
    else
     BACKUP_CLASSIFICATION="DAILY"
    fi
  fi

}

echo "TEST TODAY"
DATE=`date +%Y-%m-%d`
retention_today
echo $HOSTNAME-$BACKUP_CLASSIFICATION-$DATE
  
echo 
echo "TEST SPECIFIC DATES"
testD=(
 '2018-01-01'
 '2018-02-02'
 '2018-03-01'
 '2018-02-06'
 '2018-07-14'
 '2018-07-15'
)

for D in "${testD[@]}"
do
  DATE=`date -d ${D} +%Y-%m-%d`
  retention_date $D
  echo $HOSTNAME-$BACKUP_CLASSIFICATION-$DATE
done

Run and output.

# ./test_class.sh 
TEST TODAY
oci04-DAILY-2018-07-20

TEST SPECIFIC DATES
oci04-YEARLY-2018-01-01
oci04-DAILY-2018-02-02
oci04-MONTHLY-2018-03-01
oci04-DAILY-2018-02-06
oci04-DAILY-2018-07-14
oci04-WEEKLY-2018-07-15
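
As noted above, the two functions can be collapsed into one that takes an optional date and defaults to today (GNU date assumed); the expected classifications below match the test run:

```shell
#!/bin/bash
# Single-function version of retention_date/retention_today.
# Accepts an optional YYYY-MM-DD; defaults to today. Requires GNU date.
classify() {
  local d="${1:-$(date +%Y-%m-%d)}"
  local mm dd dow
  mm=$(date -d "$d" +%m)
  dd=$(date -d "$d" +%d)
  dow=$(date -d "$d" +%u)
  if [ "$dd" = "01" ]; then
    if [ "$mm" = "01" ]; then echo "YEARLY"; else echo "MONTHLY"; fi
  elif [ "$dow" = "7" ]; then
    echo "WEEKLY"
  else
    echo "DAILY"
  fi
}

classify 2018-01-01   # YEARLY
classify 2018-07-15   # WEEKLY
```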

Borg Backup and Rclone to Object Storage

I recently used Borg to protect some critical files and am jotting down some notes here.

Borg exists in many distribution repos, so it is easy to install. Where it is not packaged, pre-compiled binaries can easily be dropped onto your Linux OS.

Pick a server to act as your backup server (repository); pretty much any Linux server that clients can send their backups to will do. Make the backup folder big enough, of course.

Using Borg backup across SSH with SSH keys:
https://opensource.com/article/17/10/backing-your-machines-borg

# yum install borgbackup
# useradd borg
# passwd borg
# sudo su - borg 
$ mkdir /mnt/backups
$ cat /home/borg/.ssh/authorized_keys
ssh-rsa AAAAB3N[..]6N/Yw== root@server01
$ borg init /mnt/backups/repo1 -e none

 **** CLIENT server01 with single binary (no borgbackup package in this server's repos)

$ sudo su - root
# ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): /root/.ssh/borg_key
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/borg_key.
Your public key has been saved in /root/.ssh/borg_key.pub.

# ./backup.sh 
Warning: Attempting to access a previously unknown unencrypted repository!
Do you want to continue? [yN] y
Synchronizing chunks cache...
Archives: 0, w/ cached Idx: 0, w/ outdated Idx: 0, w/o cached Idx: 0.
Done.
------------------------------------------------------------------------------
Archive name: server01-2018-03-29
Archive fingerprint: 79f91d82291db36be7de90c421c082d7ee4333d11ac77cd5d543a4fe568431e3
Time (start): Thu, 2018-03-29 19:32:45
Time (end):   Thu, 2018-03-29 19:32:47
Duration: 1.36 seconds
Number of files: 1069
Utilization of max. archive size: 0%
------------------------------------------------------------------------------
                       Original size      Compressed size    Deduplicated size
This archive:               42.29 MB             15.41 MB             11.84 MB
All archives:               42.29 MB             15.41 MB             11.84 MB

                       Unique chunks         Total chunks
Chunk index:                    1023                 1059
------------------------------------------------------------------------------
Keeping archive: server01-2018-03-29                     Thu, 2018-03-29 19:32:45 [79f91d82291db36be7de90c421c082d7ee4333d11ac77cd5d543a4fe568431e3]

*** RECOVER test. Done directly on the Borg server, but I will also test from the client; that may need the BORG_RSH variable.

$ borg list repo1
server01-2018-03-29                     Thu, 2018-03-29 19:32:45 [79f91d82291db36be7de90c421c082d7ee4333d11ac77cd5d543a4fe568431e3]

$ borg list repo1::server01-2018-03-29 | less

$ cd /tmp
$ borg extract /mnt/backups/repo1::server01-2018-03-29  etc/hosts

$ ls -l etc/hosts 
-rw-r--r--. 1 borg borg 389 Mar 26 15:50 etc/hosts

APPENDIX: client backup.sh cron and source

# crontab -l
0 0 * * * /root/scripts/backup.sh > /dev/null 2>&1

# sudo su - root
# cd scripts/
# cat backup.sh 
#!/usr/bin/env bash

##
## Set environment variables
##

## if you don't use the standard SSH key,
## you have to specify the path to the key like this
export BORG_RSH='ssh -i /root/.ssh/borg_key'

## You can save your borg passphrase in an environment
## variable, so you don't need to type it in when using borg
# export BORG_PASSPHRASE="top_secret_passphrase"

##
## Set some variables
##

LOG="/var/log/borg/backup.log"
BACKUP_USER="borg"
REPOSITORY="ssh://${BACKUP_USER}@10.1.1.2/mnt/backups/repo1"

#export BORG_PASSCOMMAND=''

#Bail if borg is already running, maybe previous run didn't finish
if pidof -x borg >/dev/null; then
    echo "Backup already running"
    exit
fi

##
## Output to a logfile
##

exec > >(tee -i ${LOG})
exec 2>&1

echo "###### Backup started: $(date) ######"

##
## At this place you could perform different tasks
## that will take place before the backup, e.g.
##
## - Create a list of installed software
## - Create a database dump
##

##
## Transfer the files into the repository.
## In this example the folders root, etc,
## var/www and home will be saved.
## In addition you find a list of excludes that should not
## be in a backup and are excluded by default.
##

echo "Transfer files ..."
/usr/local/bin/borg create -v --stats                   \
    $REPOSITORY::'{hostname}-{now:%Y-%m-%d}'    \
    /root                                \
    /etc                                 \
    /u01                                 \
    /home                                \
    --exclude /dev                       \
    --exclude /proc                      \
    --exclude /sys                       \
    --exclude /var/run                   \
    --exclude /run                       \
    --exclude /lost+found                \
    --exclude /mnt                       \
    --exclude /var/lib/lxcfs


# Use the `prune` subcommand to maintain 7 daily, 4 weekly and 6 monthly
# archives of THIS machine. The '{hostname}-' prefix is very important to
# limit prune's operation to this machine's archives and not apply to
# other machine's archives also.
/usr/local/bin/borg prune -v --list $REPOSITORY --prefix '{hostname}-' \
    --keep-daily=7 --keep-weekly=4 --keep-monthly=6

echo "###### Backup ended: $(date) ######"

In addition to using Borg, this test was also about pushing backups to Oracle OCI object storage, so below are some steps I followed. I had to use the newest rclone, because v1.36 had weird issues with the Oracle OCI S3 interface.

# curl https://rclone.org/install.sh | sudo bash

# df -h | grep borg
/dev/mapper/vg01-vg01--lv01  980G  7.3G  973G   1% /mnt/backups

# sudo su - borg

$ cat ~/.config/rclone/rclone.conf 
[s3_backups]
type = s3
env_auth = false
access_key_id = ocid1.credential.oc1..aaaa[snipped]
secret_access_key = KJFevw6s=
region = us-ashburn-1
endpoint = [snipped].compat.objectstorage.us-ashburn-1.oraclecloud.com
location_constraint = 
acl = private
server_side_encryption = 
storage_class = 

$ rclone  lsd s3_backups: 
          -1 2018-03-27 21:07:11        -1 backups
          -1 2018-03-29 13:39:42        -1 repo1
          -1 2018-03-26 22:23:35        -1 terraform
          -1 2018-03-27 14:34:55        -1 terraform-src

Initial sync. Note I am using sync here, but be warned: decide whether you want copy or sync. sync makes the destination match the source, deleting files on the destination that are not in the source (it never deletes from the source); copy only adds and updates.

$ /usr/bin/rclone -v sync /mnt/borg/repo1 s3_backups:repo1
2018/03/29 22:37:00 INFO  : S3 bucket repo1: Modify window is 1ns
2018/03/29 22:37:00 INFO  : README: Copied (replaced existing)
2018/03/29 22:37:00 INFO  : hints.38: Copied (new)
2018/03/29 22:37:00 INFO  : integrity.38: Copied (new)
2018/03/29 22:37:00 INFO  : data/0/17: Copied (new)
2018/03/29 22:37:00 INFO  : config: Copied (replaced existing)
2018/03/29 22:37:00 INFO  : data/0/18: Copied (new)
2018/03/29 22:37:00 INFO  : index.38: Copied (new)
2018/03/29 22:37:59 INFO  : data/0/24: Copied (new)
2018/03/29 22:38:00 INFO  : 
Transferred:   1.955 GBytes (33.361 MBytes/s)
Errors:                 0
Checks:                 2
Transferred:            8
Elapsed time:        1m0s
Transferring:
 *                                     data/0/21: 100% /501.284M, 16.383M/s, 0s
 *                                     data/0/22: 98% /500.855M, 18.072M/s, 0s
 *                                     data/0/23: 100% /500.951M, 14.231M/s, 0s
 *                                     data/0/25:  0% /501.379M, 0/s, -

2018/03/29 22:38:00 INFO  : data/0/22: Copied (new)
2018/03/29 22:38:00 INFO  : data/0/23: Copied (new)
2018/03/29 22:38:01 INFO  : data/0/21: Copied (new)
2018/03/29 22:38:57 INFO  : data/0/25: Copied (new)
2018/03/29 22:38:58 INFO  : data/0/27: Copied (new)
2018/03/29 22:38:59 INFO  : data/0/26: Copied (new)
2018/03/29 22:38:59 INFO  : data/0/28: Copied (new)
2018/03/29 22:39:00 INFO  : 
Transferred:   3.919 GBytes (33.438 MBytes/s)
Errors:                 0
Checks:                 2
Transferred:           15
Elapsed time:        2m0s
Transferring:
 *                                     data/0/29:  0% /500.335M, 0/s, -
 *                                     data/0/30:  0% /500.294M, 0/s, -
 *                                     data/0/31:  0% /500.393M, 0/s, -
 *                                     data/0/32:  0% /500.264M, 0/s, -

2018/03/29 22:39:45 INFO  : data/0/29: Copied (new)
2018/03/29 22:39:52 INFO  : data/0/30: Copied (new)
2018/03/29 22:39:52 INFO  : S3 bucket repo1: Waiting for checks to finish
2018/03/29 22:39:55 INFO  : data/0/32: Copied (new)
2018/03/29 22:39:55 INFO  : S3 bucket repo1: Waiting for transfers to finish
2018/03/29 22:39:56 INFO  : data/0/31: Copied (new)
2018/03/29 22:39:57 INFO  : data/0/36: Copied (new)
2018/03/29 22:39:57 INFO  : data/0/37: Copied (new)
2018/03/29 22:39:57 INFO  : data/0/38: Copied (new)
2018/03/29 22:39:58 INFO  : data/0/1: Copied (replaced existing)
2018/03/29 22:40:00 INFO  : 
Transferred:   5.874 GBytes (33.413 MBytes/s)
Errors:                 0
Checks:                 3
Transferred:           23
Elapsed time:        3m0s
Transferring:
 *                                     data/0/33:  0% /500.895M, 0/s, -
 *                                     data/0/34:  0% /501.276M, 0/s, -
 *                                     data/0/35:  0% /346.645M, 0/s, -

2018/03/29 22:40:25 INFO  : data/0/35: Copied (new)
2018/03/29 22:40:28 INFO  : data/0/33: Copied (new)
2018/03/29 22:40:30 INFO  : data/0/34: Copied (new)
2018/03/29 22:40:30 INFO  : Waiting for deletions to finish
2018/03/29 22:40:30 INFO  : data/0/3: Deleted
2018/03/29 22:40:30 INFO  : index.3: Deleted
2018/03/29 22:40:30 INFO  : hints.3: Deleted
2018/03/29 22:40:30 INFO  : 
Transferred:   7.191 GBytes (34.943 MBytes/s)
Errors:                 0
Checks:                 6
Transferred:           26
Elapsed time:     3m30.7s

Run another sync, which shows there is nothing left to do.

$ /usr/bin/rclone -v sync /mnt/borg/repo1 s3_backups:repo1
2018/03/29 22:43:13 INFO  : S3 bucket repo1: Modify window is 1ns
2018/03/29 22:43:13 INFO  : S3 bucket repo1: Waiting for checks to finish
2018/03/29 22:43:13 INFO  : S3 bucket repo1: Waiting for transfers to finish
2018/03/29 22:43:13 INFO  : Waiting for deletions to finish
2018/03/29 22:43:13 INFO  : 
Transferred:      0 Bytes (0 Bytes/s)
Errors:                 0
Checks:                26
Transferred:            0
Elapsed time:       100ms

Test the script and check the log:

$ cd scripts/
$ ./s3_backup.sh 
$ more ../s3_backups.log 
2018/03/29 22:43:56 INFO  : S3 bucket repo1: Modify window is 1ns
2018/03/29 22:43:56 INFO  : S3 bucket repo1: Waiting for checks to finish
2018/03/29 22:43:56 INFO  : S3 bucket repo1: Waiting for transfers to finish
2018/03/29 22:43:56 INFO  : Waiting for deletions to finish
2018/03/29 22:43:56 INFO  : 
Transferred:      0 Bytes (0 Bytes/s)
Errors:                 0
Checks:                26
Transferred:            0
Elapsed time:       100ms

Check size used on object storage.

$ rclone size s3_backups:repo1
Total objects: 26
Total size: 7.191 GBytes (7721115523 Bytes)
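Besides comparing total size, rclone check compares source and target file by file (size, and checksum where the backend supports it) and exits non-zero on any mismatch, which is a stronger sanity check on the remote copy. A sketch against the same paths used above:

```shell
# Verify the remote copy of the repo matches the local one.
# Guarded so the sketch is harmless where rclone is not installed.
LOCAL=/mnt/borg/repo1
REMOTE=s3_backups:repo1
if command -v rclone >/dev/null 2>&1; then
    rclone check "$LOCAL" "$REMOTE" || echo "differences found (or remote unreachable)"
else
    echo "rclone not installed; would run: rclone check $LOCAL $REMOTE"
fi
```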

APPENDIX: s3_backup.sh crontab and source

$ crontab -l
50 23 * * * /home/borg/scripts/s3_backup.sh

$ cat s3_backup.sh 
#!/bin/bash
set -e

#repos=( repo1 repo2 repo3 )
repos=( repo1 )

#Bail if rclone is already running, maybe previous run didn't finish
if pidof -x rclone >/dev/null; then
    echo "Process already running"
    exit
fi

for i in "${repos[@]}"
do
    #Let's see how much space is used by the directory to back up;
    #if the directory is gone or has shrunk, we exit
    space=`du -s /mnt/backups/$i|awk '{print $1}'`

    if (( $space < 3450000 )); then
        echo "EXITING - not enough space used in $i"
        exit
    fi

    /usr/bin/rclone -v sync /mnt/backups/$i s3_backups:$i >> /home/borg/s3_backups.log 2>&1
done
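One weakness of the pidof check is that it matches any rclone process by name, including unrelated ones. A hypothetical variant using flock(1) only blocks overlapping runs of this script itself; the rclone line is stubbed with an echo here for illustration:

```shell
#!/bin/bash
# Sketch: serialize runs with flock(1) on a lock file instead of pidof.
# File descriptor 9 holds the lock for the lifetime of the subshell.
LOCKFILE=/tmp/s3_backup.lock

(
    flock -n 9 || { echo "Process already running"; exit 1; }
    for i in repo1; do
        # du may fail if the directory is gone; default space to 0 then
        space=$(du -s "/mnt/backups/$i" 2>/dev/null | awk '{print $1}')
        if (( ${space:-0} < 3450000 )); then
            echo "EXITING - not enough space used in $i"
            exit
        fi
        # replace this echo with the real rclone sync line from above
        echo "would run: rclone -v sync /mnt/backups/$i s3_backups:$i"
    done
) 9>"$LOCKFILE"
```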

Using Unix TAR for data moves

I haven’t tried this yet in a real-world scenario. In some instances you may need to move large amounts of data where the network is too slow to be an option, the file systems are incompatible so you can’t simply re-use the disk (LUN), and more traditional backup devices like tape are not available.

Tar to raw disk is one option.

Tar without using multiple volumes:

$ md5sum /media/sf_DATA/isos/V36284-01.iso
aeb36d1f087a1fbf5e62723d2f7e0b9e  /media/sf_DATA/isos/V36284-01.iso

# tar cpf /dev/sdb /media/sf_DATA/isos/V36284-01.iso
tar: Removing leading `/' from member names

# tar tvf /dev/sdb
-rwxrwx--- root/vboxsf 252258304 2013-05-01 10:43 media/sf_DATA/isos/V36284-01.iso

# tar rpf /dev/sdb /media/sf_DATA/isos/FreeBSD-Live.iso
tar: Removing leading `/' from member names

# tar tvf /dev/sdb
-rwxrwx--- root/vboxsf 252258304 2013-05-01 10:43 media/sf_DATA/isos/V36284-01.iso
-rwxrwx--- root/vboxsf 179044352 2013-05-14 23:10 media/sf_DATA/isos/FreeBSD-Live.iso

# md5sum media/sf_DATA/isos/V36284-01.iso
aeb36d1f087a1fbf5e62723d2f7e0b9e  media/sf_DATA/isos/V36284-01.iso

Tar with using multiple volumes:

# tar -cMf /dev/sdb V36284-01.iso fd11src.iso
Prepare volume #2 for `/dev/sdb' and hit return: n /dev/sdc
Prepare volume #3 for `/dev/sdc' and hit return: n /dev/sdd

# tar -tvMf /dev/sdb
-rwxrwx--- root/vboxsf 252258304 2013-05-01 10:43 V36284-01.iso
Prepare volume #2 for `/dev/sdb' and hit return: n /dev/sdc
Prepare volume #3 for `/dev/sdc' and hit return: n /dev/sdd
-rwxrwx--- root/vboxsf  40828928 2013-04-22 07:50 fd11src.iso

# tar -xvMf /dev/sdb
V36284-01.iso
Prepare volume #2 for `/dev/sdb' and hit return: n /dev/sdc
Prepare volume #3 for `/dev/sdc' and hit return: n /dev/sdd
fd11src.iso

# ls -lh V36284-01.iso fd11src.iso
-rwxrwx--- 1 root vboxsf  39M Apr 22 07:50 fd11src.iso
-rwxrwx--- 1 root vboxsf 241M May  1 10:43 V36284-01.iso
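One caveat with the multi-volume mode above: it is interactive, and GNU tar refuses to combine -M with compression (-z/-j). A non-interactive alternative is to pipe a single archive through split(1) into fixed-size pieces and reassemble with cat. A small throwaway demo (paths are illustrative):

```shell
# Split one tar stream into fixed-size parts, then list it back
# by concatenating the parts into a single stream for tar.
mkdir -p /tmp/tar_demo/src
printf 'hello\n' > /tmp/tar_demo/src/file.txt
cd /tmp/tar_demo
tar cpf - src | split -b 1M - archive.tar.part-
# Restore/list: cat the parts back together in order
cat archive.tar.part-* | tar tvf - > listing.txt
grep 'src/file.txt' listing.txt
```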

Bacula Cheatsheet

Just a few useful commands. Note that bconsole commands can be scripted by echoing them through a pipe to bconsole, which is very handy, as the examples below show.

Check bacula job progress:

root@bcla001:~# echo "status client=dracula-fd" | bconsole | grep -i file
    Files=31,011 Bytes=507,017,314,659 Bytes/sec=13,158,002 Errors=0
    Files Examined=31,011
    Processing file: /raidvol/home/bob/o_Data/770/9V904G8FMDW6X4473B3J8Q8H1B.vxml
 JobId  Level    Files      Bytes   Status   Finished        Name

Check bacula running jobs:

root@bcla001:~# echo "list jobs" | bconsole | grep "|"
| JobId | Name | StartTime           | Type | Level | JobFiles | JobBytes | JobStatus |
| 6     | Test | 2012-09-07 17:58:29 | B    | F     | 0        | 0        | R         |

Check bacula volumes for specific Pool:

root@bcla001:~# echo "list volumes Pool=FullTest" | bconsole | grep "|"
| MediaId | VolumeName   | VolStatus | Enabled | VolBytes      | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType     | LastWritten         |
| 1       | FullTest0001 | Full      | 1       | 1862750776320 | 1863     | 15552000     | 1       | 0    | 0         | Ultrium5-SCSI | 2012-09-08 08:17:19 |
| 2       | FullTest0002 | Used      | 1       | 112992768000  | 113      | 15552000     | 1       | 0    | 0         | Ultrium5-SCSI | 2012-09-08 11:02:22 |

Flag volume full:

*update volume > Volume Status > Default > VOL034 > Full > Done
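The menu walk above can also be collapsed into the same echo-through-a-pipe pattern used earlier; bconsole's update command accepts keyword arguments directly (check `help update` on your version; VOL034 is the example volume name from above):

```shell
# Flag a volume full non-interactively by piping the command to bconsole.
# Guarded so the sketch is harmless where bconsole is not installed.
CMD="update volume=VOL034 volstatus=Full"
if command -v bconsole >/dev/null 2>&1; then
    echo "$CMD" | bconsole
else
    echo "bconsole not available; would send: $CMD"
fi
```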

A more complete cheat sheet is at this link: http://workaround.org/bacula-cheatsheet