Restic PowerShell Script

Just for my own reference, my quick and dirty Windows backup script for restic. I left some of the rclone/jq lines in, commented out; depending on how you handle logging they may be helpful. In one project I pushed the summary JSON output to an S3 bucket. In this version I run a second restic job to back up the log, since the initial job can't contain the log that is still being generated, of course.

With this way of logging, i.e. keeping the logs in restic rather than in rclone/jq/buckets, when reporting you dump the log from the latest snapshot like so:

$ restic -r s3:s3.amazonaws.com/restic-windows-backup-poc.<domain>.com dump latest /C/Software/restic-backup/jobs/desktop-l0qamrb/2020-03-10-1302-restic-backup.json | jq
 {
   "message_type": "summary",
   "files_new": 0,
   "files_changed": 1,
   "files_unmodified": 12,
   "dirs_new": 0,
   "dirs_changed": 2,
   "dirs_unmodified": 3,
   "data_blobs": 1,
   "tree_blobs": 3,
   "data_added": 2839,
   "total_files_processed": 13,
   "total_bytes_processed": 30386991,
   "total_duration": 1.0223828,
   "snapshot_id": "e9531e66"
 }

Here is restic-backup.ps1. Note the hidden file for the restic variables and encryption key, of course. I am also doing forget/prune here, but that really should be a separate weekly job.

##################################################################
#Custom variables
. .\restic-keys.ps1
$DateStr = $(get-date -f yyyy-MM-dd-HHmm)
$server = $env:COMPUTERNAME.ToLower()
$logtop = "jobs"
$restichome = "C:\Software\restic-backup"
###################################################################

if ( -not (Test-Path -Path "$restichome\${logtop}\${server}" -PathType Container) ) 
{ 
   New-Item -ItemType directory -Path $restichome\${logtop}\${server} 
}

$jsonfilefull = ".\${logtop}\${server}\${DateStr}-restic-backup-full.json"
$jsonfilesummary = ".\${logtop}\${server}\${DateStr}-restic-backup.json"

.\restic.exe backup $restichome Y:\Docs\ --exclude $restichome\$logtop --tag prod --exclude 'omc\**' --quiet --json | Out-File ${jsonfilefull} -encoding ascii

#Get-Content ${jsonfilefull} | .\jq-win64.exe -r 'select(.message_type==\"summary\")' | Out-file ${jsonfilesummary} -encoding ascii
cat ${jsonfilefull} | Select-String -Pattern summary | Out-file ${jsonfilesummary} -encoding ascii -NoNewline
del ${jsonfilefull}

#.\rclone --config rclone.conf copy .\${logtop} s3_ash:restic-backup-logs
.\restic.exe backup $restichome\$logtop --tag logs --quiet

del ${jsonfilesummary}

.\restic forget -q --prune --keep-hourly 5 --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --keep-yearly 5

Restic recover OS

My test to recover an Ubuntu server OS from a backup.

Note the following:

  • I used Ubuntu 20.04 (focal), which was still beta at the time of this POC. In theory Ubuntu 18.04 should work the same or better.
  • For an OS recovery I documented the backup elsewhere. It was something like this for me; yours will vary of course:
    restic --exclude={/dev/*,/media,/mnt/*,/proc/*,/run/*,/sys/*,/tmp/*,/var/tmp/*,/swapfile} backup / /dev/{console,null}
  • For the partition recovery I saved the layout on the source server to a file for easy copy/paste during the recovery: sfdisk -d /dev/sda > /tmp/partition-table
  • I tested restic repos with both sftp and AWS S3.
  • Used a VirtualBox VM named u20.04-restic-os-restored. Made the recovery server disk 15G (5G larger than the original 10G where the backup was done).
  • My POC consists of a very simple disk layout, i.e. one ext4 partition only; it was just the default layout from this Ubuntu 20.04 desktop install. Complicated boot disk layouts may be very different, and I am not interested in recovering servers with complicated OS disk layouts. To me that does not fit with modern infrastructure and concepts like auto scaling. Boot disks should be lean and easily recovered/provisioned through scripting, with configuration applied by configuration management tools.
  • Boot the liveCD, set the ubuntu user password, and install and start ssh so we can ssh in and copy/paste more easily.
  • Abbreviated commands (removed most output to shorten)
$ ssh ubuntu@192.168.1.160
$ sudo -i

# export AWS_ACCESS_KEY_ID=<secret..>
# export AWS_SECRET_ACCESS_KEY=<secret..>
# export RESTIC_PASSWORD=<secret..>
# export RESTIC_REPOSITORY="sftp:rr@192.168.1.111:/ARCHIVE/restic-os-restore-poc"
# cd /usr/local/bin/
# wget https://github.com/restic/restic/releases/download/v0.9.6/restic_0.9.6_linux_amd64.bz2
# bzip2 -d restic_0.9.6_linux_amd64.bz2 
# mv restic_0.9.6_linux_amd64 restic
# chmod +x restic 
# mkdir /mnt/restore
# sfdisk /dev/sda < partition-table   # note: -d dumps, so omit it when restoring
# mkfs.ext4 /dev/sda1
# mount /dev/sda1 /mnt/restore/
# /usr/local/bin/restic snapshots
# time /usr/local/bin/restic restore latest -t /mnt/restore --exclude '/etc/fstab' --exclude '/etc/crypttab' --exclude '/boot/grub/grub.cfg' --exclude '/etc/default/grub'

# mount --bind /dev /mnt/restore/dev
# mount -t sysfs sys /mnt/restore/sys
# mount -t proc proc /mnt/restore/proc
# mount -t devpts devpts /mnt/restore/dev/pts
# mount -t tmpfs tmp /mnt/restore/tmp
# mount --rbind /run /mnt/restore/run

# chroot /mnt/restore /bin/bash
# lsblk | grep sda
# grub-install /dev/sda
# update-grub
# blkid | grep sda

# UUID=$(blkid | grep sda | cut -d' ' -f2 | cut -d\" -f2)
# echo "UUID=$UUID / ext4    errors=remount-ro 0       1" > /etc/fstab

# sync
# exit
# init 0

Note:

The new server booted and worked, but the graphical (GNOME) login for the ubuntu account stalled. This fixed it: dconf reset -f /org/gnome/

My restic backup command works, but just for reference: since restic has no include flag, rsync seems to have better exclude/include functionality, with syntax like this: rsync --include=/dev/{console,null} --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/media/*,/lost+found}

Formulas for bytes and duration

Capturing a couple of formulas for future reference. I was using restic backup stats and wanted to convert the time a job ran into hours and minutes, and also convert bytes processed to TB.

bytes to TB example:
total_bytes_processed: 1502888851889 / 1000000000000 == 1.502888851889 TB

duration to HH:MM example using LibreOffice Calc:
total_duration: 15870.197288027 / 86400 and then Format Cell > Time > HH:MM:SS will show 04:24:30
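Both conversions can also be done straight in the shell (a sketch using awk and bash integer arithmetic; the numbers are the ones from the examples above):

```shell
# bytes -> decimal TB (dividing by 10^12 as above)
bytes=1502888851889
awk -v b="$bytes" 'BEGIN { printf "%.4f TB\n", b / 1000000000000 }'
# 1.5029 TB

# seconds -> HH:MM:SS with bash arithmetic (fractional seconds dropped)
secs=15870
printf '%02d:%02d:%02d\n' $((secs / 3600)) $((secs % 3600 / 60)) $((secs % 60))
# 04:24:30
```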

Bash Read Json Config File

A couple of things here. I wanted to write some restic scripts and at the same time use a configuration file. The restic developers are working on this functionality for restic itself, possibly using TOML.

Meanwhile I was trying JSON, since I can definitely use bash/JSON for other applications. And as you know bash is not great at this kind of thing, specifically arrays etc. So this example reads a configuration file and processes the JSON. To further complicate things, my JSON typically needs arrays or lists of values, as you can see in the restic example for folders, excludes and tags.

You will also note a unique problem with bash. When you pipe into a while loop, the loop runs in a subshell, so variables set inside it are not visible in your main shell. So my appending to a variable inside the while loop does not produce any strings. Since bash 4.2 you can use "shopt -s lastpipe" to get around this (it only takes effect when job control is off, as in a script). Apparently this is not a problem with ksh.
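The subshell problem and the lastpipe fix can be shown in a few lines (a standalone demo, not part of the restic script):

```shell
#!/bin/bash
# Without lastpipe the while loop runs in a subshell, so the parent
# shell's count never changes.
count=0
printf 'a\nb\nc\n' | while read -r line; do count=$((count+1)); done
echo "without lastpipe: count=$count"

# With lastpipe (bash 4.2+, job control off) the last pipeline segment
# runs in the current shell and the variable survives.
shopt -s lastpipe
count=0
printf 'a\nb\nc\n' | while read -r line; do count=$((count+1)); done
echo "with lastpipe: count=$count"
```

Expected output is `count=0` for the first loop and `count=3` for the second.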

This is not a working restic script. It is a script to read a configuration file; it just happens to be for something I am going to do with restic.

Example json config file.

$ cat restic-jobs.json 
{ "Jobs":
  [
   {
    "jobname": "aws-s3",
    "repo": "sftp:myuser@192.168.1.112:/TANK/RESTIC-REPO",
    "sets":
      [
       {
        "folders": [ "/DATA" ],
        "excludes": [ ".snapshots","temp"],
        "tags": [ "data","biz" ]
       },
       {
        "folders": [ "/ARCHIVE" ],
        "excludes": [ ".snapshots","temp"],
        "tags": [ "archive","biz" ]
       }
      ],
      "quiet": "true"
    },
    {
     "jobname": "azure-onedrive",
     "repo":  "rclone:azure-onedrive:restic-backups",
     "sets":
       [
       {
        "folders": [ "/DATA" ],
        "excludes": [ ".snapshots","temp"],
        "tags": [ "data","biz" ]
       },
       {
        "folders": [ "/ARCHIVE" ],
        "excludes": [ ".snapshots","temp"],
        "tags": [ "archive","biz" ]
       }
      ],
     "quiet": "true"
    }
  ]
} 

Script details.

cat restic-jobs.sh 
#!/bin/bash
#v0.9.1

JOB="aws-s3"
eval "$(jq --arg JOB ${JOB} -r '.Jobs[] | select(.jobname==$JOB) | del(.sets) | to_entries[] | .key + "=\"" + .value + "\""' restic-jobs.json)"
if [[ "$jobname" == "" ]]; then
  echo "no job found in config: " $JOB
  exit
fi

echo "found: $jobname"

#sets=$(jq --arg JOB ${JOB} -r '.Jobs[] | select (.jobname==$JOB) | .sets | .[]' restic-jobs.json )

echo

sets=$(jq --arg JOB ${JOB} -r '.Jobs[] | select (.jobname==$JOB)' restic-jobs.json)

backup_jobs=()
## need this for bash issue with variables and pipe subshell
shopt -s lastpipe

echo $sets | jq -rc '.sets[]' | while IFS='' read set;do
    cmd_line="restic backup -q --json "

    folders=$(echo "$set" | jq -r '.folders | .[]')
    for st in $folders; do cmd_line+=" $st"; done
    excludes=$(echo "$set" | jq -r '.excludes | .[]')
    for st in $excludes; do cmd_line+=" --exclude $st"; done
    tags=$(echo "$set" | jq -r '.tags | .[]')
    for st in $tags; do cmd_line+=" --tag $st"; done

    backup_jobs+=("$cmd_line")
done

for i in "${backup_jobs[@]}"; do
  echo "cmd_line: $i"
done

Script run example. Note I am not passing the job name; it is just hard-coded at the top for my test.

./restic-jobs.sh 
found: aws-s3

cmd_line: restic backup -q --json  /DATA --exclude .snapshots --exclude temp --tag data --tag biz
cmd_line: restic backup -q --json  /ARCHIVE --exclude .snapshots --exclude temp --tag archive --tag biz
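The eval/to_entries trick the script uses can be demonstrated standalone, with inline JSON standing in for restic-jobs.json (a sketch; note that eval on configuration data assumes you trust the file's contents):

```shell
#!/bin/bash
json='{"jobname":"aws-s3","repo":"sftp:myuser@192.168.1.112:/TANK/RESTIC-REPO","quiet":"true"}'
# to_entries turns the object into key/value pairs; each pair is emitted
# as key="value" and eval turns those lines into shell variables.
eval "$(echo "$json" | jq -r 'to_entries[] | .key + "=\"" + .value + "\""')"
echo "jobname=$jobname quiet=$quiet"
# jobname=aws-s3 quiet=true
```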

restic set tags

As a follow-up to the previous post https://blog.ls-al.com/restic-create-backup-and-set-tag-with-date-logic, here is some code I used to set tags on old snapshots so they comply with my new tagging and pruning scheme.

# cat backup-tags-set.sh
#!/bin/bash

create_tag () {
  tag="daily"
  if [ $(date -d "$1" +%a) == "Sun" ]; then tag="weekly" ; fi
  if [ $(date -d "$1" +%d) == "01" ]; then 
   tag="monthly"
   if [ $(date -d "$1" +%b) == "Jan" ]; then
     tag="yearly"
   fi
  fi
}
create_tag   # no argument here: GNU date treats the empty string as today, so this prints today's policy
echo "backup policy: " $tag

#source /root/.restic.env
snapshotids=$(restic snapshots -c | egrep -v "ID|snapshots|--" | awk '//{print $1;}')
for snapshotid in $snapshotids
do
  snapdate=$(restic snapshots $snapshotid -c | egrep -v "ID|snapshots|--" | awk '//{print $2;}')
  create_tag $snapdate
  echo "Making a tag for: $snapshotid - $snapdate - $(date -d $snapdate +%a) - $tag"
  restic tag --set $tag $snapshotid
done

# ./backup-tags-set.sh 
backup policy:  daily
Making a tag for: 0b88eefa - 2019-03-27 - Wed - daily
repository 00cde088 opened successfully, password is correct
create exclusive lock for repository
modified tags on 1 snapshots
Making a tag for: d76811ac - 2019-03-27 - Wed - daily
repository 00cde088 opened successfully, password is correct
create exclusive lock for repository
modified tags on 1 snapshots

restic option to configure S3 region

If you find yourself relying on restic going through rclone to talk to non-default regions, you may want to check out the just-released restic version 0.9.6. For me the issue appears fixed when working with Oracle Cloud Infrastructure (OCI) object storage. Below is a test accessing the Phoenix endpoint with the new -o option.

# restic -r s3:<tenancy_name>.compat.objectstorage.us-phoenix-1.oraclecloud.com/restic-backups snapshots -o s3.region="us-phoenix-1"
repository <....> opened successfully, password is correct
ID        Time                 Host                          Tags        Paths
----------------------------------------------------------------------------------------
f23784fd  2019-10-27 05:10:02  host01.domain.com  mytag     /etc

Restic create backup and set tag with date logic

Also see previous post https://blog.ls-al.com/bash-date-usage-for-naming if you are interested. This post is similar but more specific to restic tagging.

Below is a test script and a test run. At backup time I create a tag so that I can later run snapshot forget based on tags.

root@pop-os:/tmp# cat backup-tags.sh 
#!/bin/bash

create_tag () {
  tag="daily"
  if [ $(date +%a) == "Sun" ]; then tag="weekly" ; fi
  if [ $(date +%d) == "01" ]; then 
   tag="monthly"
   if [ $(date +%b) == "Jan" ]; then
     tag="yearly"
   fi
  fi
}
create_tag
echo "backup policy: " $tag

create_tag_unit_test () {
  for i in {1..95}
  do 
      tdate=$(date -d "+$i day")
      tag="daily"
      if [ $(date -d "+$i day" +%a) == "Sun" ]; then tag="weekly" ; fi
      if [ $(date -d "+$i day" +%d) == "01" ]; then
      tag="monthly"
        if [ $(date -d "+$i day" +%b) == "Jan" ]; then
          tag="yearly"
        fi
      fi
  printf "%s - %s - %s | " "$(date -d "+$i day" +%d)" "$(date -d "+$i day" +%a)" "$tag" 
  if [ $(( $i %5 )) -eq 0 ]; then printf "\n"; fi
  done
}
create_tag_unit_test

root@pop-os:/tmp# ./backup-tags.sh 
backup policy:  daily
22 - Fri - daily      | 23 - Sat - daily      | 24 - Sun - weekly     | 25 - Mon - daily      | 26 - Tue - daily      | 
27 - Wed - daily      | 28 - Thu - daily      | 29 - Fri - daily      | 30 - Sat - daily      | 01 - Sun - monthly    | 
02 - Mon - daily      | 03 - Tue - daily      | 04 - Wed - daily      | 05 - Thu - daily      | 06 - Fri - daily      | 
07 - Sat - daily      | 08 - Sun - weekly     | 09 - Mon - daily      | 10 - Tue - daily      | 11 - Wed - daily      | 
12 - Thu - daily      | 13 - Fri - daily      | 14 - Sat - daily      | 15 - Sun - weekly     | 16 - Mon - daily      | 
17 - Tue - daily      | 18 - Wed - daily      | 19 - Thu - daily      | 20 - Fri - daily      | 21 - Sat - daily      | 
22 - Sun - weekly     | 23 - Mon - daily      | 24 - Tue - daily      | 25 - Wed - daily      | 26 - Thu - daily      | 
27 - Fri - daily      | 28 - Sat - daily      | 29 - Sun - weekly     | 30 - Mon - daily      | 31 - Tue - daily      | 
01 - Wed - yearly     | 02 - Thu - daily      | 03 - Fri - daily      | 04 - Sat - daily      | 05 - Sun - weekly     | 
06 - Mon - daily      | 07 - Tue - daily      | 08 - Wed - daily      | 09 - Thu - daily      | 10 - Fri - daily      | 
11 - Sat - daily      | 12 - Sun - weekly     | 13 - Mon - daily      | 14 - Tue - daily      | 15 - Wed - daily      | 
16 - Thu - daily      | 17 - Fri - daily      | 18 - Sat - daily      | 19 - Sun - weekly     | 20 - Mon - daily      | 

Below is the restic backup script setting a tag and then snapshot forget based on the tag.

As always, this is NOT tested; use at your own risk.

My “policy” is:

  • weekly on Sunday
  • 01 of every month is a monthly, except when 01 is also a new year, which makes it a yearly
  • everything else is a daily
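The policy above can be sketched as a function that takes a date string instead of reading the clock (GNU date assumed; same logic as the create_tag in the scripts, but with the date injected so the precedence yearly > monthly > weekly > daily is easy to check):

```shell
#!/bin/bash
# Tag policy with the date passed in as $1 (YYYY-MM-DD), GNU date assumed.
create_tag() {
  local d=$1 tag="daily"
  [ "$(LC_ALL=C date -d "$d" +%a)" = "Sun" ] && tag="weekly"
  if [ "$(date -d "$d" +%d)" = "01" ]; then
    tag="monthly"
    [ "$(LC_ALL=C date -d "$d" +%b)" = "Jan" ] && tag="yearly"
  fi
  echo "$tag"
}

create_tag 2020-01-01   # Jan 01 -> yearly
create_tag 2020-03-01   # a Sunday, but the 01st wins -> monthly
create_tag 2020-03-08   # Sunday -> weekly
create_tag 2020-03-10   # plain Tuesday -> daily
```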
root@pop-os:~/scripts# cat desktop-restic.sh 
#!/bin/bash
### wake up backup server and restic backup to 3TB ZFS mirror
cd /root/scripts
./wake-backup-server.sh

source /root/.restic.env

## Quick and dirty logic for snapshot tagging
create_tag () {
  tag="daily"
  if [ $(date +%a) == "Sun" ]; then tag="weekly" ; fi
  if [ $(date +%d) == "01" ]; then
   tag="monthly"
   if [ $(date +%b) == "Jan" ]; then
     tag="yearly"
   fi
  fi
}

create_tag
restic backup -q /DATA /ARCHIVE --tag "$tag" --exclude '*.vdi' --exclude '*.iso' --exclude '*.ova' --exclude '*.img' --exclude '*.vmdk'

restic forget -q --tag daily --keep-last 7
restic forget -q --tag weekly --keep-last 4
restic forget -q --tag monthly --keep-last 12

if [ "$tag" == "weekly" ]; then
  restic -q prune
fi

sleep 1m
ssh user@192.168.1.250 sudo shutdown now

Restic scripting plus jq and minio client

I am jotting down some recent work on scripting restic and also using restic’s json output with jq and mc (minio client).

NOTE this is not production, just an example; use at your own risk. These were edited by hand from real working scripts, so they will probably have typos etc. Again, just examples!

Example backup script. Plus uploading json output to an object storage bucket for analysis later.

# cat restic-backup.sh
#!/bin/bash
source /root/.restic-keys
resticprog=/usr/local/bin/restic-custom
#rcloneargs="serve restic --stdio --b2-hard-delete --cache-workers 64 --transfers 64 --retries 21"
region="s3_phx"
rundate=$(date +"%Y-%m-%d-%H%M")
logtop=/reports
logyear=$(date +"%Y")
logmonth=$(date +"%m")
logname=$logtop/$logyear/$logmonth/restic/$rundate-restic-backup
jsonspool=/tmp/restic-fss-jobs

## Backing up some OCI FSS (same as AWS EFS) NFS folders
FSS=(
"fs-oracle-apps|fs-oracle-apps|.snapshot"           ## backup all exclude .snapshot tree
"fs-app1|fs-app1|.snapshot"                         ## backup all exclude .snapshot tree
"fs-sw|fs-sw/oracle_sw,fs-sw/restic_pkg|.snapshot"  ## backup two folders exclude .snapshot tree
"fs-tifs|fs-tifs|.snapshot,.tif"                  ## backup all exclude .snapshot tree and *.tif files
)

## test commands especially before kicking off large backups
function verify_cmds
{
  f=$1
  restic_cmd=$2
  printf "\n$rundate and cmd: $restic_cmd\n"
}

function backup
{
 f=$1
 restic_cmd=$2

 jobstart=$(date +"%Y-%m-%d-%H%M")

 mkdir $jsonspool/$f
 jsonfile=$jsonspool/$f/$jobstart-restic-backup.json
 printf "$jobstart with cmd: $restic_cmd\n"

 mkdir /mnt/$f
 mount -o ro xx.xx.xx.xx:/$f /mnt/$f

 ## TODO: shell issue with passing exclude from variable. verify exclude .snapshot is working
 ## TODO: not passing *.tif exclude fail?  howto pass *?
 $restic_cmd > $jsonfile

 #cat $jsonfile >> $logname-$f.log
 umount /mnt/$f
 rmdir /mnt/$f

## Using rclone to copy to OCI object storage bucket.
## Note the extra level folder so rclone can simulate 
## a server/20190711-restic.log style.
## Very useful with using minio client to analyze logs.
 rclone copy $jsonspool s3_ash:restic-backup-logs

 rm $jsonfile
 rmdir $jsonspool/$f

 jobfinish=$(date +"%Y-%m-%d-%H%M")
 printf "jobfinish $jobfinish\n"
}

for fss in "${FSS[@]}"; do
 arrFSS=(${fss//|/ })

 folders=""
 f=${arrFSS[0]}
 IFS=',' read -ra folderarr <<< ${arrFSS[1]}
 for folder in ${folderarr[@]};do folders+="/mnt/${folder} "; done

 excludearg=""
 IFS=',' read -ra excludearr <<< ${arrFSS[2]}
 for exclude in ${excludearr[@]};do excludearg+=" --exclude ${exclude}"; done

 backup_cmd="$resticprog -r rclone:$region:restic-$f backup ${folders} $excludearg --json"

## play with verify_cmds first before actual backups
 verify_cmds "$f" "$backup_cmd"
 #backup "$f" "$backup_cmd"
done
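Two details of the script above can be sketched in isolation: how one pipe-delimited FSS entry splits apart, and how building the restic command as a bash array would sidestep the quoting TODOs (the `*.tif` exclude in particular). A sketch, not the script's actual code:

```shell
#!/bin/bash
# Split one FSS entry: name | comma-separated folders | comma-separated excludes
entry="fs-sw|fs-sw/oracle_sw,fs-sw/restic_pkg|.snapshot"
IFS='|' read -r name folderlist excludelist <<< "$entry"
IFS=',' read -ra folders <<< "$folderlist"
echo "name=$name folders=${#folders[@]} excludes=$excludelist"
# name=fs-sw folders=2 excludes=.snapshot

# Building the command as an array keeps '*.tif' literal (no glob
# expansion or word-splitting), unlike expanding a flat $restic_cmd string.
cmd=(restic backup /mnt/fs-tifs --exclude .snapshot --exclude '*.tif' --json)
printf '%s\n' "${cmd[@]}"
```

The array form would then be executed as `"${cmd[@]}" > "$jsonfile"` instead of `$restic_cmd > $jsonfile`.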

Since we have JSON logs in object storage, let's check some of them with the minio client.

# cat restic-check-logs.sh
#!/bin/bash

fss=(
 fs-oracle-apps
)

#checkdate="2019-07-11"
checkdate=$(date +"%Y-%m-%d")

for f in ${fss[@]}; do
  echo
  echo
  printf "$f:  "
  name=$(mc find s3-ash/restic-backup-logs/$f -name "*$checkdate*" | head -1)
  if [ -n "$name" ]
  then
    echo $name
    # play with sql --query later
    #mc sql --query "select * from S3Object"  --json-input .message_type=summary s3-ash/restic-backup-logs/$f/2019-07-09-1827-restic-backup.json
    mc cat $name  | jq -r 'select(.message_type=="summary")'
  else
    echo "Fail - no file found"
  fi
done

Example run of the minio client against the JSON logs

# ./restic-check-logs.sh

fs-oracle-apps:  s3-ash/restic-backup-logs/fs-oracle-apps/2019-07-12-0928-restic-backup.json
{
  "message_type": "summary",
  "files_new": 291,
  "files_changed": 1,
  "files_unmodified": 678976,
  "dirs_new": 0,
  "dirs_changed": 1,
  "dirs_unmodified": 0,
  "data_blobs": 171,
  "tree_blobs": 2,
  "data_added": 2244824,
  "total_files_processed": 679268,
  "total_bytes_processed": 38808398197,
  "total_duration": 1708.162522559,
  "snapshot_id": "f3e4dc06"
}

Note all of this was done with Oracle Cloud Infrastructure (OCI) object storage. Here are some observations about the OCI S3-compatible object storage.

  1. restic cannot reach both us-ashburn-1 and us-phoenix-1 regions natively: s3:<tenant>.compat.objectstorage.us-ashburn-1.oraclecloud.com works, but s3:<tenant>.compat.objectstorage.us-phoenix-1.oraclecloud.com does NOT. Since restic can use rclone, I am using rclone to access OCI object storage.
  2. rclone can reach both regions.
  3. the minio command line client (mc) has the same issue as restic: it can reach us-ashburn-1 but not us-phoenix-1.
  4. the minio Python API can connect to us-ashburn-1 but shows an empty bucket list.

Restic json output and jq

Restic can emit its output as JSON. Here is how I used that for some CSV-type reporting I needed on backup jobs.

Example json output.

# restic -r rclone:s3_phx:/restic-backup backup /root --json | jq -r 'select(.message_type=="summary")'
{
  "message_type": "summary",
  "files_new": 0,
  "files_changed": 0,
  "files_unmodified": 761,
  "dirs_new": 0,
  "dirs_changed": 0,
  "dirs_unmodified": 0,
  "data_blobs": 0,
  "tree_blobs": 0,
  "data_added": 0,
  "total_files_processed": 761,
  "total_bytes_processed": 251861194,
  "total_duration": 1.118076434,
  "snapshot_id": "09d1a6b9"
}

With @csv filter.

# restic -r rclone:s3_phx:/restic-backup backup /root --json | jq -r 'select(.message_type=="summary") | [.files_new,.files_changed,.files_unmodified,.dirs_new,.dirs_changed,.dirs_unmodified,.data_blobs,.tree_blobs,.data_added,.total_files_processed,.total_bytes_processed,.total_duration,.snapshot_id] | @csv'
0,0,764,0,0,0,0,0,0,764,251918381,1.037043765,"3a55b3b3"

I needed double quotes and could not figure out how to tell the @csv filter to quote numbers, so below is a workaround for now. This was then usable in my bash script.

# restic -r rclone:s3_phx:/restic-backup backup /root --json | jq -r 'select(.message_type=="summary") | "\"\(.files_new)\",\"\(.files_changed)\",\"\(.files_unmodified)\",\"\(.dirs_new)\",\"\(.dirs_changed)\",\"\(.dirs_unmodified)\",\"\(.data_blobs)\",\"\(.tree_blobs)\",\"\(.data_added)\",\"\(.total_files_processed)\",\"\(.total_bytes_processed)\",\"\(.total_duration)\",\"\(.snapshot_id)\""'
"0","3","761","0","0","0","3","1","2901845","764","251920790","2.035211002","fb9d780b"

Restic and Oracle OCI Object Storage

It seems that, after some time has gone by, the OCI S3-compatible object storage interface now works with restic directly, with no need for rclone. Tests a few months ago did not work.

Using S3 directly means we may avoid this issue we saw when using restic + rclone:
rclone: 2018/11/02 20:04:16 ERROR : data/fa/fadbb4f1d9172a4ecb591ddf5677b0889c16a8b98e5e3329d63aa152e235602e: Didn't finish writing GET request (wrote 9086/15280 bytes): http2: stream closed

This shows how I setup restic to Oracle OCI object storage(no rclone required).

Current restic env pointing to rclone.conf
##########################################

# more /root/.restic-env 
export RESTIC_REPOSITORY="rclone:s3_servers_ashburn:bucket1"
export RESTIC_PASSWORD="blahblah"

# more /root/.config/rclone/rclone.conf 
[s3_servers_phoenix]
type = s3
env_auth = false
access_key_id =  
secret_access_key =  
region = us-phoenix-1
endpoint = <client-id>.compat.objectstorage.us-phoenix-1.oraclecloud.com
location_constraint = 
acl = private
server_side_encryption = 
storage_class = 
[s3_servers_ashburn]
type = s3
env_auth = false
access_key_id =  
secret_access_key = 
region = us-ashburn-1
endpoint = <client-id>.compat.objectstorage.us-ashburn-1.oraclecloud.com
location_constraint =
acl = private
server_side_encryption =

New restic env pointing to S3 style
###################################

# more /root/.restic-env 
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export RESTIC_REPOSITORY="s3:<client-id>.compat.objectstorage.us-ashburn-1.oraclecloud.com/bucket1"
export RESTIC_PASSWORD="blahblah"

# . /root/.restic-env

# /usr/local/bin/restic snapshots
repository 26e5f447 opened successfully, password is correct
ID        Date                 Host             Tags        Directory
----------------------------------------------------------------------
dc9827fd  2018-08-31 21:20:02  server1                      /etc
cb311517  2018-08-31 21:20:04  server1                      /home
f65a3bb5  2018-08-31 21:20:06  server1                      /var
{...}
----------------------------------------------------------------------
36 snapshots