Borg Backup and Rclone to Object Storage
I recently used Borg to protect some critical files, and I am jotting down some notes here.
Borg exists in many distribution repos, so it is easy to install. When it is not in a repo, the project provides pre-compiled binaries that can easily be added to your Linux OS.
Pick a server to act as your backup server (repository). Pretty much any Linux server that your clients can send their backups to will do. Make sure the backup folder has enough space, of course.
Using Borg backup over SSH with SSH keys
https://opensource.com/article/17/10/backing-your-machines-borg
# yum install borgbackup
# useradd borg
# passwd borg
# sudo su - borg
$ mkdir /mnt/backups
$ cat /home/borg/.ssh/authorized_keys
ssh-rsa AAAAB3N[..]6N/Yw== root@server01
$ borg init /mnt/backups/repo1 -e none
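Optionally, the borg user's authorized_keys entry can be locked down so this client key can only run `borg serve` against the repository path and nothing else. A sketch based on Borg's documented `borg serve --restrict-to-path` option (the key material is the placeholder from above; `restrict` needs a reasonably recent OpenSSH):

```
command="borg serve --restrict-to-path /mnt/backups/repo1",restrict ssh-rsa AAAAB3N[..]6N/Yw== root@server01
```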
**** CLIENT server01 with a single binary (no borgbackup package in this server's repos)
$ sudo su - root
# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): /root/.ssh/borg_key
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/borg_key.
Your public key has been saved in /root/.ssh/borg_key.pub.

# ./backup.sh
Warning: Attempting to access a previously unknown unencrypted repository!
Do you want to continue? [yN] y
Synchronizing chunks cache...
Archives: 0, w/ cached Idx: 0, w/ outdated Idx: 0, w/o cached Idx: 0. Done.
------------------------------------------------------------------------------
Archive name: server01-2018-03-29
Archive fingerprint: 79f91d82291db36be7de90c421c082d7ee4333d11ac77cd5d543a4fe568431e3
Time (start): Thu, 2018-03-29 19:32:45
Time (end):   Thu, 2018-03-29 19:32:47
Duration: 1.36 seconds
Number of files: 1069
Utilization of max. archive size: 0%
------------------------------------------------------------------------------
                       Original size      Compressed size    Deduplicated size
This archive:               42.29 MB             15.41 MB             11.84 MB
All archives:               42.29 MB             15.41 MB             11.84 MB

                       Unique chunks         Total chunks
Chunk index:                    1023                 1059
------------------------------------------------------------------------------
Keeping archive: server01-2018-03-29   Thu, 2018-03-29 19:32:45 [79f91d82291db36be7de90c421c082d7ee4333d11ac77cd5d543a4fe568431e3]
*** RECOVER test. Done directly on the Borg server, but I will also test from the client; that may need the BORG_RSH variable.
$ borg list repo1
server01-2018-03-29   Thu, 2018-03-29 19:32:45 [79f91d82291db36be7de90c421c082d7ee4333d11ac77cd5d543a4fe568431e3]
$ borg list repo1::server01-2018-03-29 | less
$ cd /tmp
$ borg extract /mnt/backups/repo1::server01-2018-03-29 etc/hosts
$ ls -l etc/hosts
-rw-r--r--. 1 borg borg 389 Mar 26 15:50 etc/hosts
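For the client-side variant of this restore, a sketch assuming the key path and repository address used earlier in these notes (BORG_RSH tells borg which ssh command, and therefore which key, to use):

```shell
# Restore from the client instead of on the Borg server itself.
# BORG_RSH points borg at the dedicated SSH key created earlier.
export BORG_RSH='ssh -i /root/.ssh/borg_key'
REPOSITORY="ssh://borg@10.1.1.2/mnt/backups/repo1"

# Only attempt the remote calls where borg is actually installed.
if command -v borg >/dev/null 2>&1; then
    borg list "$REPOSITORY"
    # borg extract restores relative to the current directory
    ( cd /tmp && borg extract "$REPOSITORY::server01-2018-03-29" etc/hosts )
fi
```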
APPENDIX: client backup.sh cron and source
# crontab -l
0 0 * * * /root/scripts/backup.sh > /dev/null 2>&1
# sudo su - root
# cd scripts/
# cat backup.sh
#!/usr/bin/env bash
##
## Set environment variables
##

## if you don't use the standard SSH key,
## you have to specify the path to the key like this
export BORG_RSH='ssh -i /root/.ssh/borg_key'

## You can save your borg passphrase in an environment
## variable, so you don't need to type it in when using borg
# export BORG_PASSPHRASE="top_secret_passphrase"

##
## Set some variables
##
LOG="/var/log/borg/backup.log"
BACKUP_USER="borg"
REPOSITORY="ssh://${BACKUP_USER}@10.1.1.2/mnt/backups/repo1"
#export BORG_PASSCOMMAND=''

# Bail if borg is already running, maybe previous run didn't finish
if pidof -x borg >/dev/null; then
    echo "Backup already running"
    exit
fi

##
## Output to a logfile
##
exec > >(tee -i ${LOG})
exec 2>&1
echo "###### Backup started: $(date) ######"

##
## At this place you could perform different tasks
## that will take place before the backup, e.g.
##
## - Create a list of installed software
## - Create a database dump
##

##
## Transfer the files into the repository.
## In this example the folders /root, /etc,
## /u01 and /home will be saved.
## In addition you find a list of excludes that should not
## be in a backup and are excluded by default.
##
echo "Transfer files ..."
/usr/local/bin/borg create -v --stats \
    $REPOSITORY::'{hostname}-{now:%Y-%m-%d}' \
    /root \
    /etc \
    /u01 \
    /home \
    --exclude /dev \
    --exclude /proc \
    --exclude /sys \
    --exclude /var/run \
    --exclude /run \
    --exclude /lost+found \
    --exclude /mnt \
    --exclude /var/lib/lxcfs

# Use the `prune` subcommand to maintain 7 daily, 4 weekly and 6 monthly
# archives of THIS machine. The '{hostname}-' prefix is very important to
# limit prune's operation to this machine's archives and not apply to
# other machines' archives also.
/usr/local/bin/borg prune -v --list $REPOSITORY --prefix '{hostname}-' \
    --keep-daily=7 --keep-weekly=4 --keep-monthly=6

echo "###### Backup ended: $(date) ######"
In addition to using Borg, this test was also about pushing backups to Oracle OCI object storage, so below are the steps I followed. I had to use the newest rclone because v1.36 had weird issues with the Oracle OCI S3-compatible interface.
# curl https://rclone.org/install.sh | sudo bash
# df -h | grep borg
/dev/mapper/vg01-vg01--lv01  980G  7.3G  973G   1% /mnt/backups
# sudo su - borg
$ cat ~/.config/rclone/rclone.conf
[s3_backups]
type = s3
env_auth = false
access_key_id = ocid1.credential.oc1..aaaa[snipped]
secret_access_key = KJFevw6s=
region = us-ashburn-1
endpoint = [snipped].compat.objectstorage.us-ashburn-1.oraclecloud.com
location_constraint =
acl = private
server_side_encryption =
storage_class =

$ rclone lsd s3_backups:
          -1 2018-03-27 21:07:11        -1 backups
          -1 2018-03-29 13:39:42        -1 repo1
          -1 2018-03-26 22:23:35        -1 terraform
          -1 2018-03-27 14:34:55        -1 terraform-src
Initial sync. Note I am using sync, but decide for yourself whether you want copy or sync. sync makes the destination mirror the source, so it will delete files on the destination that no longer exist on the source; copy only adds and updates files and never deletes anything. Neither command modifies the source.
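To make the difference concrete, here is a toy illustration in plain bash (not rclone itself) using two throwaway local directories:

```shell
# Toy illustration of "copy" vs "sync" semantics with local directories.
src=$(mktemp -d); dst=$(mktemp -d)
echo a > "$src/a"; echo b > "$src/b"
echo stale > "$dst/stale"      # present only on the destination

# "copy" semantics: transfer files, never delete on the destination
cp "$src"/* "$dst"/
ls "$dst"                      # a, b, stale -- 'stale' survives a copy

# "sync" semantics: the destination is made to mirror the source, so
# files missing from the source are deleted from the destination
for f in "$dst"/*; do
    [ -e "$src/$(basename "$f")" ] || rm "$f"
done
ls "$dst"                      # a, b -- 'stale' is gone
```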
$ /usr/bin/rclone -v sync /mnt/borg/repo1 s3_backups:repo1
2018/03/29 22:37:00 INFO : S3 bucket repo1: Modify window is 1ns
2018/03/29 22:37:00 INFO : README: Copied (replaced existing)
2018/03/29 22:37:00 INFO : hints.38: Copied (new)
2018/03/29 22:37:00 INFO : integrity.38: Copied (new)
2018/03/29 22:37:00 INFO : data/0/17: Copied (new)
2018/03/29 22:37:00 INFO : config: Copied (replaced existing)
2018/03/29 22:37:00 INFO : data/0/18: Copied (new)
2018/03/29 22:37:00 INFO : index.38: Copied (new)
2018/03/29 22:37:59 INFO : data/0/24: Copied (new)
2018/03/29 22:38:00 INFO :
Transferred:   1.955 GBytes (33.361 MBytes/s)
Errors:        0
Checks:        2
Transferred:   8
Elapsed time:  1m0s
Transferring:
 * data/0/21: 100% /501.284M, 16.383M/s, 0s
 * data/0/22:  98% /500.855M, 18.072M/s, 0s
 * data/0/23: 100% /500.951M, 14.231M/s, 0s
 * data/0/25:   0% /501.379M, 0/s, -

2018/03/29 22:38:00 INFO : data/0/22: Copied (new)
2018/03/29 22:38:00 INFO : data/0/23: Copied (new)
2018/03/29 22:38:01 INFO : data/0/21: Copied (new)
2018/03/29 22:38:57 INFO : data/0/25: Copied (new)
2018/03/29 22:38:58 INFO : data/0/27: Copied (new)
2018/03/29 22:38:59 INFO : data/0/26: Copied (new)
2018/03/29 22:38:59 INFO : data/0/28: Copied (new)
2018/03/29 22:39:00 INFO :
Transferred:   3.919 GBytes (33.438 MBytes/s)
Errors:        0
Checks:        2
Transferred:   15
Elapsed time:  2m0s
Transferring:
 * data/0/29:   0% /500.335M, 0/s, -
 * data/0/30:   0% /500.294M, 0/s, -
 * data/0/31:   0% /500.393M, 0/s, -
 * data/0/32:   0% /500.264M, 0/s, -

2018/03/29 22:39:45 INFO : data/0/29: Copied (new)
2018/03/29 22:39:52 INFO : data/0/30: Copied (new)
2018/03/29 22:39:52 INFO : S3 bucket repo1: Waiting for checks to finish
2018/03/29 22:39:55 INFO : data/0/32: Copied (new)
2018/03/29 22:39:55 INFO : S3 bucket repo1: Waiting for transfers to finish
2018/03/29 22:39:56 INFO : data/0/31: Copied (new)
2018/03/29 22:39:57 INFO : data/0/36: Copied (new)
2018/03/29 22:39:57 INFO : data/0/37: Copied (new)
2018/03/29 22:39:57 INFO : data/0/38: Copied (new)
2018/03/29 22:39:58 INFO : data/0/1: Copied (replaced existing)
2018/03/29 22:40:00 INFO :
Transferred:   5.874 GBytes (33.413 MBytes/s)
Errors:        0
Checks:        3
Transferred:   23
Elapsed time:  3m0s
Transferring:
 * data/0/33:   0% /500.895M, 0/s, -
 * data/0/34:   0% /501.276M, 0/s, -
 * data/0/35:   0% /346.645M, 0/s, -

2018/03/29 22:40:25 INFO : data/0/35: Copied (new)
2018/03/29 22:40:28 INFO : data/0/33: Copied (new)
2018/03/29 22:40:30 INFO : data/0/34: Copied (new)
2018/03/29 22:40:30 INFO : Waiting for deletions to finish
2018/03/29 22:40:30 INFO : data/0/3: Deleted
2018/03/29 22:40:30 INFO : index.3: Deleted
2018/03/29 22:40:30 INFO : hints.3: Deleted
2018/03/29 22:40:30 INFO :
Transferred:   7.191 GBytes (34.943 MBytes/s)
Errors:        0
Checks:        6
Transferred:   26
Elapsed time:  3m30.7s
Run another sync to show there is nothing left to transfer.
$ /usr/bin/rclone -v sync /mnt/borg/repo1 s3_backups:repo1
2018/03/29 22:43:13 INFO : S3 bucket repo1: Modify window is 1ns
2018/03/29 22:43:13 INFO : S3 bucket repo1: Waiting for checks to finish
2018/03/29 22:43:13 INFO : S3 bucket repo1: Waiting for transfers to finish
2018/03/29 22:43:13 INFO : Waiting for deletions to finish
2018/03/29 22:43:13 INFO :
Transferred:   0 Bytes (0 Bytes/s)
Errors:        0
Checks:        26
Transferred:   0
Elapsed time:  100ms
Test the script and check the log
$ cd scripts/
$ ./s3_backup.sh
$ more ../s3_backups.log
2018/03/29 22:43:56 INFO : S3 bucket repo1: Modify window is 1ns
2018/03/29 22:43:56 INFO : S3 bucket repo1: Waiting for checks to finish
2018/03/29 22:43:56 INFO : S3 bucket repo1: Waiting for transfers to finish
2018/03/29 22:43:56 INFO : Waiting for deletions to finish
2018/03/29 22:43:56 INFO :
Transferred:   0 Bytes (0 Bytes/s)
Errors:        0
Checks:        26
Transferred:   0
Elapsed time:  100ms
Check size used on object storage.
$ rclone size s3_backups:repo1
Total objects: 26
Total size: 7.191 GBytes (7721115523 Bytes)
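Restoring from object storage is the reverse path: pull the repository back down with rclone, then point borg at the local copy. A sketch assuming the remote name from above; the restore directory is my own placeholder, not something created earlier:

```shell
# Pull the repo back from the bucket, then extract with borg as usual.
# RESTORE_DIR is a placeholder path for illustration only.
RESTORE_DIR="/mnt/backups/restore/repo1"

# Only attempt the transfer where rclone and the remote are configured.
if command -v rclone >/dev/null 2>&1; then
    rclone -v copy s3_backups:repo1 "$RESTORE_DIR"
fi

# Once the copy is down, borg treats it like any local repository:
#   borg list "$RESTORE_DIR"
#   borg extract "$RESTORE_DIR::server01-2018-03-29" etc/hosts
```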
APPENDIX: s3_backup.sh crontab and source
$ crontab -l
50 23 * * * /home/borg/scripts/s3_backup.sh

$ cat s3_backup.sh
#!/bin/bash
set -e

#repos=( repo1 repo2 repo3 )
repos=( repo1 )

# Bail if rclone is already running, maybe previous run didn't finish
if pidof -x rclone >/dev/null; then
    echo "Process already running"
    exit
fi

for i in "${repos[@]}"
do
    # Let's see how much space is used by the directory to back up.
    # If the directory is gone, or has gotten small, we will exit.
    space=$(du -s /mnt/backups/$i | awk '{print $1}')
    if (( space < 3450000 )); then
        echo "EXITING - not enough space used in $i"
        exit
    fi
    /usr/bin/rclone -v sync /mnt/backups/$i s3_backups:$i >> /home/borg/s3_backups.log 2>&1
done
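The size guard in that script matters because `rclone sync` deletes on the destination: if the local repo were wiped, a blind sync would empty the bucket too. The check can be factored out and sanity-tested on its own; a small sketch (the 3450000 KB floor is the value from the script above):

```shell
# Decide whether the local repo looks healthy enough to sync. If disk
# usage drops below the floor, refuse to sync so the deletion pass of
# `rclone sync` cannot empty the bucket as well.
should_sync() {
    local used_kb=$1 floor_kb=${2:-3450000}
    if (( used_kb < floor_kb )); then
        echo "EXITING - not enough space used"
        return 1
    fi
    return 0
}

should_sync 7000000 && echo "would sync"   # healthy repo: prints "would sync"
should_sync 12 || true                     # wiped repo: prints the warning
```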