{"id":1194,"date":"2018-03-31T07:56:51","date_gmt":"2018-03-31T12:56:51","guid":{"rendered":"http:\/\/blog.ls-al.com\/?p=1194"},"modified":"2018-04-08T20:43:31","modified_gmt":"2018-04-09T01:43:31","slug":"borg-backup-and-rclone-to-object-storage","status":"publish","type":"post","link":"https:\/\/blog.ls-al.com\/borg-backup-and-rclone-to-object-storage\/","title":{"rendered":"Borg Backup and Rclone to Object Storage"},"content":{"rendered":"

I recently used Borg to protect some critical files, and I am jotting down some notes here.

Borg exists in many distribution repos, so it is easy to install. When it is not in a repo, the project provides pre-compiled binaries that can easily be added to your Linux OS.
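For a client without a borgbackup package, installing the standalone binary looks roughly like this (the version and URL below are only an example; grab the current one from the Borg releases page):

# single-binary install sketch - version/URL are an assumption, check the releases page
curl -LO https://github.com/borgbackup/borg/releases/download/1.1.5/borg-linux64
chmod +x borg-linux64
sudo mv borg-linux64 /usr/local/bin/borg
borg --version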

Pick a server to act as your backup server (repository). Pretty much any Linux server your clients can send their backups to will work. Make sure the backup folder is large enough, of course.

Reference: Using Borg backup across SSH with SSH keys
https://opensource.com/article/17/10/backing-your-machines-borg

# yum install borgbackup
# useradd borg
# passwd borg
# sudo su - borg
$ mkdir /mnt/backups
$ cat /home/borg/.ssh/authorized_keys
ssh-rsa AAAAB3N[..]6N/Yw== root@server01
$ borg init /mnt/backups/repo1 -e none
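Optionally, the client key in authorized_keys can be locked down so it can only run borg serve against this one repository. Something along these lines should work (a sketch I did not use here; adjust the repo path):

command="borg serve --restrict-to-path /mnt/backups/repo1",no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-rsa AAAAB3N[..]6N/Yw== root@server01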

**** CLIENT server01, using the single binary (no borgbackup package in this server's repos)

$ sudo su - root
# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): /root/.ssh/borg_key
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/borg_key.
Your public key has been saved in /root/.ssh/borg_key.pub.

# ./backup.sh
Warning: Attempting to access a previously unknown unencrypted repository!
Do you want to continue? [yN] y
Synchronizing chunks cache...
Archives: 0, w/ cached Idx: 0, w/ outdated Idx: 0, w/o cached Idx: 0.
Done.
------------------------------------------------------------------------------
Archive name: server01-2018-03-29
Archive fingerprint: 79f91d82291db36be7de90c421c082d7ee4333d11ac77cd5d543a4fe568431e3
Time (start): Thu, 2018-03-29 19:32:45
Time (end):   Thu, 2018-03-29 19:32:47
Duration: 1.36 seconds
Number of files: 1069
Utilization of max. archive size: 0%
------------------------------------------------------------------------------
                       Original size      Compressed size    Deduplicated size
This archive:               42.29 MB             15.41 MB             11.84 MB
All archives:               42.29 MB             15.41 MB             11.84 MB

                       Unique chunks         Total chunks
Chunk index:                    1023                 1059
------------------------------------------------------------------------------
Keeping archive: server01-2018-03-29                     Thu, 2018-03-29 19:32:45 [79f91d82291db36be7de90c421c082d7ee4333d11ac77cd5d543a4fe568431e3]
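The output above comes from backup.sh (full source in the appendix below); at its core it runs roughly this against the repo server:

export BORG_RSH='ssh -i /root/.ssh/borg_key'
REPOSITORY="ssh://borg@10.1.1.2/mnt/backups/repo1"
/usr/local/bin/borg create -v --stats \
    $REPOSITORY::'{hostname}-{now:%Y-%m-%d}' \
    /root /etc /u01 /home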

*** RECOVER test. Done on the Borg server directly, but I will also test from the client; that may need the BORG_RSH variable (see the sketch after the listing below).

$ borg list repo1
server01-2018-03-29                     Thu, 2018-03-29 19:32:45 [79f91d82291db36be7de90c421c082d7ee4333d11ac77cd5d543a4fe568431e3]

$ borg list repo1::server01-2018-03-29 | less

$ cd /tmp
$ borg extract /mnt/backups/repo1::server01-2018-03-29 etc/hosts

$ ls -l etc/hosts
-rw-r--r--. 1 borg borg 389 Mar 26 15:50 etc/hosts
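From the client, the same restore should look roughly like this (not run as part of this test; BORG_RSH and the repo URL come from the backup script):

export BORG_RSH='ssh -i /root/.ssh/borg_key'
borg list ssh://borg@10.1.1.2/mnt/backups/repo1
cd /tmp
borg extract ssh://borg@10.1.1.2/mnt/backups/repo1::server01-2018-03-29 etc/hosts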

APPENDIX: client backup.sh cron and source

# crontab -l
0 0 * * * /root/scripts/backup.sh > /dev/null 2>&1

# sudo su - root
# cd scripts/
# cat backup.sh
#!/usr/bin/env bash

##
## Set environment variables
##

## if you don't use the standard SSH key,
## you have to specify the path to the key like this
export BORG_RSH='ssh -i /root/.ssh/borg_key'

## You can save your borg passphrase in an environment
## variable, so you don't need to type it in when using borg
# export BORG_PASSPHRASE="top_secret_passphrase"

##
## Set some variables
##

LOG="/var/log/borg/backup.log"
BACKUP_USER="borg"
REPOSITORY="ssh://${BACKUP_USER}@10.1.1.2/mnt/backups/repo1"

#export BORG_PASSCOMMAND=''

# Bail if borg is already running, maybe a previous run didn't finish
if pidof -x borg >/dev/null; then
    echo "Backup already running"
    exit
fi

##
## Output to a logfile
##

exec > >(tee -i ${LOG})
exec 2>&1

echo "###### Backup started: $(date) ######"

##
## At this place you could perform different tasks
## that will take place before the backup, e.g.
##
## - Create a list of installed software
## - Create a database dump
##

##
## Transfer the files into the repository.
## In this example the folders /root, /etc,
## /u01 and /home will be saved.
## In addition you find a list of excludes that should not
## be in a backup and are excluded by default.
##

echo "Transfer files ..."
/usr/local/bin/borg create -v --stats                   \
    $REPOSITORY::'{hostname}-{now:%Y-%m-%d}'    \
    /root                                \
    /etc                                 \
    /u01                                 \
    /home                                \
    --exclude /dev                       \
    --exclude /proc                      \
    --exclude /sys                       \
    --exclude /var/run                   \
    --exclude /run                       \
    --exclude /lost+found                \
    --exclude /mnt                       \
    --exclude /var/lib/lxcfs


# Use the `prune` subcommand to maintain 7 daily, 4 weekly and 6 monthly
# archives of THIS machine. The '{hostname}-' prefix is very important to
# limit prune's operation to this machine's archives and not apply to
# other machines' archives as well.
/usr/local/bin/borg prune -v --list $REPOSITORY --prefix '{hostname}-' \
    --keep-daily=7 --keep-weekly=4 --keep-monthly=6

echo "###### Backup ended: $(date) ######"
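Before trusting the prune policy, a dry run shows what it would delete without touching anything (a sketch using the same repository and prefix as the script above):

export BORG_RSH='ssh -i /root/.ssh/borg_key'
/usr/local/bin/borg prune -v --list --dry-run \
    ssh://borg@10.1.1.2/mnt/backups/repo1 --prefix '{hostname}-' \
    --keep-daily=7 --keep-weekly=4 --keep-monthly=6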

In addition to using Borg, this test was also about pushing the backups to Oracle OCI object storage, so below are the steps I followed. I had to use the newest rclone because v1.36 had odd issues with the Oracle OCI S3 compatibility interface.

# curl https://rclone.org/install.sh | sudo bash

# df -h | grep borg
/dev/mapper/vg01-vg01--lv01  980G  7.3G  973G   1% /mnt/backups

# sudo su - borg

$ cat ~/.config/rclone/rclone.conf
[s3_backups]
type = s3
env_auth = false
access_key_id = ocid1.credential.oc1..aaaa[snipped]
secret_access_key = KJFevw6s=
region = us-ashburn-1
endpoint = [snipped].compat.objectstorage.us-ashburn-1.oraclecloud.com
location_constraint =
acl = private
server_side_encryption =
storage_class =

$ rclone lsd s3_backups:
          -1 2018-03-27 21:07:11        -1 backups
          -1 2018-03-29 13:39:42        -1 repo1
          -1 2018-03-26 22:23:35        -1 terraform
          -1 2018-03-27 14:34:55        -1 terraform-src
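If the target bucket did not exist yet, rclone could create it, and a quick listing confirms the credentials work (a small sketch using the same remote name):

$ rclone mkdir s3_backups:repo1
$ rclone ls s3_backups:repo1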

Initial sync. Note I am using sync, but be warned: you need to decide whether you want copy or sync. sync makes the destination match the source, so anything deleted (or missing) locally will also be deleted on the target; copy never deletes on the target.
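Given that, a cautious approach is to preview with --dry-run first, or to use copy if you never want deletions on the target (a quick sketch, not part of the original run):

$ /usr/bin/rclone -v --dry-run sync /mnt/borg/repo1 s3_backups:repo1
$ /usr/bin/rclone -v copy /mnt/borg/repo1 s3_backups:repo1

The actual initial sync and its output: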

$ /usr/bin/rclone -v sync /mnt/borg/repo1 s3_backups:repo1
2018/03/29 22:37:00 INFO  : S3 bucket repo1: Modify window is 1ns
2018/03/29 22:37:00 INFO  : README: Copied (replaced existing)
2018/03/29 22:37:00 INFO  : hints.38: Copied (new)
2018/03/29 22:37:00 INFO  : integrity.38: Copied (new)
2018/03/29 22:37:00 INFO  : data/0/17: Copied (new)
2018/03/29 22:37:00 INFO  : config: Copied (replaced existing)
2018/03/29 22:37:00 INFO  : data/0/18: Copied (new)
2018/03/29 22:37:00 INFO  : index.38: Copied (new)
2018/03/29 22:37:59 INFO  : data/0/24: Copied (new)
2018/03/29 22:38:00 INFO  :
Transferred:   1.955 GBytes (33.361 MBytes/s)
Errors:                 0
Checks:                 2
Transferred:            8
Elapsed time:        1m0s
Transferring:
 *                                     data/0/21: 100% /501.284M, 16.383M/s, 0s
 *                                     data/0/22: 98% /500.855M, 18.072M/s, 0s
 *                                     data/0/23: 100% /500.951M, 14.231M/s, 0s
 *                                     data/0/25:  0% /501.379M, 0/s, -

2018/03/29 22:38:00 INFO  : data/0/22: Copied (new)
2018/03/29 22:38:00 INFO  : data/0/23: Copied (new)
2018/03/29 22:38:01 INFO  : data/0/21: Copied (new)
2018/03/29 22:38:57 INFO  : data/0/25: Copied (new)
2018/03/29 22:38:58 INFO  : data/0/27: Copied (new)
2018/03/29 22:38:59 INFO  : data/0/26: Copied (new)
2018/03/29 22:38:59 INFO  : data/0/28: Copied (new)
2018/03/29 22:39:00 INFO  :
Transferred:   3.919 GBytes (33.438 MBytes/s)
Errors:                 0
Checks:                 2
Transferred:           15
Elapsed time:        2m0s
Transferring:
 *                                     data/0/29:  0% /500.335M, 0/s, -
 *                                     data/0/30:  0% /500.294M, 0/s, -
 *                                     data/0/31:  0% /500.393M, 0/s, -
 *                                     data/0/32:  0% /500.264M, 0/s, -

2018/03/29 22:39:45 INFO  : data/0/29: Copied (new)
2018/03/29 22:39:52 INFO  : data/0/30: Copied (new)
2018/03/29 22:39:52 INFO  : S3 bucket repo1: Waiting for checks to finish
2018/03/29 22:39:55 INFO  : data/0/32: Copied (new)
2018/03/29 22:39:55 INFO  : S3 bucket repo1: Waiting for transfers to finish
2018/03/29 22:39:56 INFO  : data/0/31: Copied (new)
2018/03/29 22:39:57 INFO  : data/0/36: Copied (new)
2018/03/29 22:39:57 INFO  : data/0/37: Copied (new)
2018/03/29 22:39:57 INFO  : data/0/38: Copied (new)
2018/03/29 22:39:58 INFO  : data/0/1: Copied (replaced existing)
2018/03/29 22:40:00 INFO  :
Transferred:   5.874 GBytes (33.413 MBytes/s)
Errors:                 0
Checks:                 3
Transferred:           23
Elapsed time:        3m0s
Transferring:
 *                                     data/0/33:  0% /500.895M, 0/s, -
 *                                     data/0/34:  0% /501.276M, 0/s, -
 *                                     data/0/35:  0% /346.645M, 0/s, -

2018/03/29 22:40:25 INFO  : data/0/35: Copied (new)
2018/03/29 22:40:28 INFO  : data/0/33: Copied (new)
2018/03/29 22:40:30 INFO  : data/0/34: Copied (new)
2018/03/29 22:40:30 INFO  : Waiting for deletions to finish
2018/03/29 22:40:30 INFO  : data/0/3: Deleted
2018/03/29 22:40:30 INFO  : index.3: Deleted
2018/03/29 22:40:30 INFO  : hints.3: Deleted
2018/03/29 22:40:30 INFO  :
Transferred:   7.191 GBytes (34.943 MBytes/s)
Errors:                 0
Checks:                 6
Transferred:           26
Elapsed time:     3m30.7s

Run another sync showing nothing to do.

$ /usr/bin/rclone -v sync /mnt/borg/repo1 s3_backups:repo1
2018/03/29 22:43:13 INFO  : S3 bucket repo1: Modify window is 1ns
2018/03/29 22:43:13 INFO  : S3 bucket repo1: Waiting for checks to finish
2018/03/29 22:43:13 INFO  : S3 bucket repo1: Waiting for transfers to finish
2018/03/29 22:43:13 INFO  : Waiting for deletions to finish
2018/03/29 22:43:13 INFO  :
Transferred:      0 Bytes (0 Bytes/s)
Errors:                 0
Checks:                26
Transferred:            0
Elapsed time:       100ms
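To go a step further than the no-op sync and actually compare every object between the local repo and the bucket, rclone check can do that (a sketch, not part of the original run):

$ rclone check /mnt/borg/repo1 s3_backups:repo1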

Test the script and check the log.

$ cd scripts/
$ ./s3_backup.sh
$ more ../s3_backups.log
2018/03/29 22:43:56 INFO  : S3 bucket repo1: Modify window is 1ns
2018/03/29 22:43:56 INFO  : S3 bucket repo1: Waiting for checks to finish
2018/03/29 22:43:56 INFO  : S3 bucket repo1: Waiting for transfers to finish
2018/03/29 22:43:56 INFO  : Waiting for deletions to finish
2018/03/29 22:43:56 INFO  :
Transferred:      0 Bytes (0 Bytes/s)
Errors:                 0
Checks:                26
Transferred:            0
Elapsed time:       100ms

Check the size used on object storage.

$ rclone size s3_backups:repo1
Total objects: 26
Total size: 7.191 GBytes (7721115523 Bytes)
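That lines up with the roughly 7.3G used on /mnt/backups in the df output earlier; a quick way to compare directly would be:

$ du -sh /mnt/backups/repo1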

APPENDIX: s3_backup.sh crontab and source

$ crontab -l
50 23 * * * /home/borg/scripts/s3_backup.sh

$ cat s3_backup.sh
#!/bin/bash
set -e

#repos=( repo1 repo2 repo3 )
repos=( repo1 )

# Bail if rclone is already running, maybe a previous run didn't finish
if pidof -x rclone >/dev/null; then
    echo "Process already running"
    exit
fi

for i in "${repos[@]}"
do
    # See how much space is used by the directory to back up.
    # If the directory is gone, or has gotten small, we exit.
    space=`du -s /mnt/backups/$i | awk '{print $1}'`

    if (( $space < 3450000 )); then
        echo "EXITING - not enough space used in $i"
        exit
    fi

    /usr/bin/rclone -v sync /mnt/backups/$i s3_backups:$i >> /home/borg/s3_backups.log 2>&1
done
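For completeness, the restore direction is just the reverse: pull the repo back down from object storage with rclone, then use borg against it as usual (a sketch, not run as part of this test):

$ rclone -v copy s3_backups:repo1 /mnt/backups/repo1
$ borg list /mnt/backups/repo1
$ borg extract /mnt/backups/repo1::server01-2018-03-29 etc/hosts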
