Category: OCI

Jul 07

OCI Network Load Balancer Source Header Preservation

In my case, running Traefik in Docker, I was not getting real client IP addresses. I fixed this by changing the NLB option Source header (IP, port) preservation to Enabled.

To change this option, you first need to remove the targets from the backend set.

At the same time the console suddenly would not allow me to add the VM.Standard.A1.Flex server back into the backend set; it required me to upgrade to a paid account. That is nonsense, since I had used this server as a target for a long time, and now they are suddenly being sneaky with the free options. At least the CLI did add the IP back in.

❯ ocicli nlb backend create --backend-set-name server01-443 --network-load-balancer-id  --port 443 --ip-address 10.0.10.226
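
For completeness, removing the target first should also be possible from the CLI; a sketch assuming backend names follow the <ip>:<port> convention (NLB OCID omitted, as above):

❯ ocicli nlb backend delete --backend-set-name server01-443 --network-load-balancer-id  --backend-name 10.0.10.226:443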


Mar 16

Bash alias inside a script

If you need to use an alias inside a script, you need this:

shopt -s expand_aliases
source ~/.bash_aliases

I recently started using the OCI client in Docker instead of trying to install it locally; for some reason the local install is just not working. So now I use the Docker image, but as you can see, you don't want to be typing this command every time:

docker run --rm -it -v "$HOME/.oci:/oracle/.oci" oci

So an alias is helpful, but as mentioned it won't just work in your script. Here is an example of how I use the command in a script, and this works. My alias is ocicli.

CREATED_APPLY_JOB_ID=$(ocicli resource-manager job create-apply-job --stack-id $CREATED_STACK_ID --execution-plan-strategy FROM_PLAN_JOB_ID --execution-plan-job-id "$CREATED_PLAN_JOB_ID" --wait-for-state SUCCEEDED --query 'data.id' --raw-output)
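
For reference, the alias itself lives in ~/.bash_aliases and might look like the following (the image name oci is just whatever you tagged your local OCI CLI image as; dropping -t keeps $(ocicli ...) command substitution clean):

alias ocicli='docker run --rm -i -v "$HOME/.oci:/oracle/.oci" oci'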


May 20

Test OCI (Oracle Cloud Infrastructure) Vault Secret

This assumes the OCI CLI is already working.

Test an old CLI script that lists buckets:

$ ./list_buckets.sh

{
      "data": [
        {
          "compartment-id": "*masked*",
          "created-by": "*masked*",
          "defined-tags": null,
          "etag": "*masked*",
          "freeform-tags": null,
          "name": "bucket-20200217-1256",
          "namespace": "*masked*",
          "time-created": "2020-02-17T18:56:07.773000+00:00"
        }
      ]
}

Test an old Python script:

$ python3 show_user.py 
{
      "capabilities": {
        "can_use_api_keys": true,
        "can_use_auth_tokens": true,
        "can_use_console_password": true,
        "can_use_customer_secret_keys": true,
        "can_use_o_auth2_client_credentials": true,
        "can_use_smtp_credentials": true
      },
      "compartment_id": "*masked*",
      "defined_tags": {},
      "description": "*masked*",
      "email": "*masked*",
      "external_identifier": null,
      "freeform_tags": {},
      "id": "*masked*",
      "identity_provider_id": null,
      "inactive_status": null,
      "is_mfa_activated": false,
      "lifecycle_state": "ACTIVE",
      "name": "*masked*",
      "time_created": "2020-02-11T18:24:37.809000+00:00"
}

Create a secret in the console:

  • Security > Vault > testvault
  • Create key rr
  • Create secret rr
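
The same secret could presumably also be created from the CLI with the create-base64 subcommand; a hedged sketch with OCIDs masked as elsewhere in this post:

$ oci vault secret create-base64 --compartment-id *masked* --vault-id *masked* --key-id *masked* --secret-name rr --secret-content-content "$(echo -n blah | base64)"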

Test Python code:

$ python3 check-secret.py *masked*
    Reading value of secret_id *masked*.
    Decoded content of the secret is: blah.
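
check-secret.py itself isn't listed here; a minimal sketch of what it might look like with the OCI Python SDK (the default config file location and profile are assumptions):

import base64
import sys

import oci

# Load ~/.oci/config and build a Secrets client
config = oci.config.from_file()
client = oci.secrets.SecretsClient(config)

secret_id = sys.argv[1]
print(f"Reading value of secret_id {secret_id}.")

# Fetch the current secret bundle; the payload comes back base64-encoded
bundle = client.get_secret_bundle(secret_id)
content = bundle.data.secret_bundle_content.content
print(f"Decoded content of the secret is: {base64.b64decode(content).decode()}.")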

Test the CLI:

$ oci vault secret list --compartment-id *masked*

     "data": [
       {
         "compartment-id": "*masked*",
         "defined-tags": {
           "Oracle-Tags": {
             "CreatedBy": "*masked*",
             "CreatedOn": "2020-05-19T19:13:52.028Z"
           }
         },
         "description": "test",
         "freeform-tags": {},
         "id": "*masked*",
         "key-id": "*masked*",
         "lifecycle-details": null,
         "lifecycle-state": "ACTIVE",
         "secret-name": "rr",
         "time-created": "2020-05-19T19:13:51.804000+00:00",
         "time-of-current-version-expiry": null,
         "time-of-deletion": null,
         "vault-id": "*masked*"
       }
     ]
    }

$ oci vault secret get --secret-id *masked*
    {
      "data": {
        "compartment-id": "*masked*",
        "current-version-number": 1,
        "defined-tags": {
          "Oracle-Tags": {
            "CreatedBy": "*masked*",
            "CreatedOn": "2020-05-19T19:13:52.028Z"
          }
        },
        "description": "test",
        "freeform-tags": {},
        "id": "*masked*",
        "key-id": "*masked*",
        "lifecycle-details": null,
        "lifecycle-state": "ACTIVE",
        "metadata": null,
        "secret-name": "rr",
        "secret-rules": [],
        "time-created": "2020-05-19T19:13:51.804000+00:00",
        "time-of-current-version-expiry": null,
        "time-of-deletion": null,
        "vault-id": "*masked*"
      },
      "etag": "*masked*"
    }

$ oci secrets secret-bundle get --secret-id *masked*
    {
      "data": {
        "metadata": null,
        "secret-bundle-content": {
          "content": "YmxhaA==",
          "content-type": "BASE64"
        },
        "secret-id": "*masked*",
        "stages": [
          "CURRENT",
          "LATEST"
        ],
        "time-created": "2020-05-19T19:13:51.804000+00:00",
        "time-of-deletion": null,
        "time-of-expiry": null,
        "version-name": null,
        "version-number": 1
      },
      "etag": "*masked*--gzip"
    }

$ echo YmxhaA== | base64 --decode
    blah

One-liner:

$ oci secrets secret-bundle get --secret-id ocid1.vaultsecret.oc1.phx.*masked* --query "data .{s:\"secret-bundle-content\"}" | jq -r '.s.content' | base64 --decode
blah
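
The jq step can presumably be dropped by quoting the hyphenated keys in JMESPath and using --raw-output; a hedged equivalent:

$ oci secrets secret-bundle get --secret-id ocid1.vaultsecret.oc1.phx.*masked* --query 'data."secret-bundle-content".content' --raw-output | base64 --decode
blah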


Nov 23

restic option to configure S3 region

If you find yourself relying on restic going through rclone just to talk to non-default regions, you may want to check out the just-released restic version 0.9.6. The region issue appears to be fixed when working with Oracle Cloud Infrastructure (OCI) object storage. Below is a test accessing the Phoenix endpoint with the new -o option.

# restic -r s3:<tenancy_name>.compat.objectstorage.us-phoenix-1.oraclecloud.com/restic-backups snapshots -o s3.region=us-phoenix-1
repository <....> opened successfully, password is correct
ID        Time                 Host                          Tags        Paths
----------------------------------------------------------------------------------------
f23784fd  2019-10-27 05:10:02  host01.domain.com  mytag     /etc


Nov 01

Oracle OCI CLI Query

Some Bash snippets using --query and jq to pull OCI CLI output into Bash variables.

Collect a boot volume's id

SRCBOOTVOLID=$(oci --profile $profile bv boot-volume list --compartment-id "$source_compartment" --availability-domain "$source_ad" --query "data [?\"display-name\" == '$instance_name (Boot Volume)'].{id:id}" | jq -r '.[] | .id')

Collect an instance OCID

INSTANCEID=$(oci --profile $profile compute instance launch --availability-domain $target_ad --compartment-id $sandbox_compartment --shape VM.Standard1.1 --display-name "burner-$instance_name-instance-for-custom-image" --source-boot-volume-id $BOOTVOLID --wait-for-state RUNNING --subnet-id $sandbox_subnetid --query "data .{id:id}" | jq -r '. | .id')

Stop the instance and collect the id (or whatever you need from the JSON)

STOPPEDID=$(oci --profile $profile compute instance action --action STOP --instance-id $INSTANCEID --wait-for-state STOPPED --query "data .{id:id}" | jq -r '. | .id')
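
When only a single scalar is needed, the jq pipe can be replaced by the CLI's own --raw-output (the same flag used in the next snippet), for example:

STOPPEDID=$(oci --profile $profile compute instance action --action STOP --instance-id $INSTANCEID --wait-for-state STOPPED --query "data.id" --raw-output)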

Collect the work-request-id so it can be monitored in a loop after exporting a custom image to object storage. Note that in this query the field I need is NOT in the data section.

WORKREQUESTID=$(oci --profile $profile compute image export to-object --image-id $IMAGEID --namespace faketenancy --bucket-name DR-Images --name $today-$instance_name-custom-image-object --query '"opc-work-request-id"' --raw-output)

while [ "$RESULT" != "SUCCEEDED" ]
do
  RESULT=$(oci --profile myprofile work-requests work-request get --work-request-id $WORKREQUESTID --query "data .{status:status}" | jq -r '. | .status')
  echo "running export job and $RESULT checking every 2 mins"
  sleep 2m
done


Jul 12

Restic scripting plus jq and minio client

I am jotting down some recent work on scripting restic and also using restic's json output with jq and mc (minio client).

NOTE: this is not production code, just examples. Use at your own risk. These were edited by hand from real working scripts, so they probably have typos etc. in them. Again, just examples!

Example backup script, plus uploading the JSON output to an object storage bucket for analysis later.

# cat restic-backup.sh
#!/bin/bash
source /root/.restic-keys
resticprog=/usr/local/bin/restic-custom
#rcloneargs="serve restic --stdio --b2-hard-delete --cache-workers 64 --transfers 64 --retries 21"
region="s3_phx"
rundate=$(date +"%Y-%m-%d-%H%M")
logtop=/reports
logyear=$(date +"%Y")
logmonth=$(date +"%m")
logname=$logtop/$logyear/$logmonth/restic/$rundate-restic-backup
jsonspool=/tmp/restic-fss-jobs

## Backing up some OCI FSS (same as AWS EFS) NFS folders
FSS=(
"fs-oracle-apps|fs-oracle-apps|.snapshot"           ## backup all exclude .snapshot tree
"fs-app1|fs-app1|.snapshot"                         ## backup all exclude .snapshot tree
"fs-sw|fs-sw/oracle_sw,fs-sw/restic_pkg|.snapshot"  ## backup two folders exclude .snapshot tree
"fs-tifs|fs-tifs|.snapshot,.tif"                  ## backup all exclude .snapshot tree and *.tif files
)

## test commands especially before kicking off large backups
function verify_cmds
{
  f=$1
  restic_cmd=$2
  printf "\n$rundate and cmd: $restic_cmd\n"
}

function backup
{
 f=$1
 restic_cmd=$2

 jobstart=$(date +"%Y-%m-%d-%H%M")

 mkdir $jsonspool/$f
 jsonfile=$jsonspool/$f/$jobstart-restic-backup.json
 printf "$jobstart with cmd: $restic_cmd\n"

 mkdir /mnt/$f
 mount -o ro xx.xx.xx.xx:/$f /mnt/$f

 ## TODO: shell issue with passing exclude from variable. verify exclude .snapshot is working
 ## TODO: not passing *.tif exclude fail?  howto pass *?
 $restic_cmd > $jsonfile

 #cat $jsonfile >> $logname-$f.log
 umount /mnt/$f
 rmdir /mnt/$f

## Using rclone to copy to OCI object storage bucket.
## Note the extra level folder so rclone can simulate 
## a server/20190711-restic.log style.
## Very useful with using minio client to analyze logs.
 rclone copy $jsonspool s3_ash:restic-backup-logs

 rm $jsonfile
 rmdir $jsonspool/$f

 jobfinish=$(date +"%Y-%m-%d-%H%M")
 printf "jobfinish $jobfinish\n"
}

for fss in "${FSS[@]}"; do
 arrFSS=(${fss//|/ })

 folders=""
 f=${arrFSS[0]}
 IFS=',' read -ra folderarr <<< ${arrFSS[1]}
 for folder in ${folderarr[@]};do folders+="/mnt/${folder} "; done

 excludearg=""
 IFS=',' read -ra excludearr <<< ${arrFSS[2]}
 for exclude in ${excludearr[@]};do excludearg+=" --exclude ${exclude}"; done

 backup_cmd="$resticprog -r rclone:$region:restic-$f backup ${folders} $excludearg --json"

## play with verify_cmds first before actual backups
 verify_cmds "$f" "$backup_cmd"
 #backup "$f" "$backup_cmd"
done

Since we have JSON logs in object storage, let's check some of them with the minio client.

# cat restic-check-logs.sh
#!/bin/bash

fss=(
 fs-oracle-apps
)

#checkdate="2019-07-11"
checkdate=$(date +"%Y-%m-%d")

for f in ${fss[@]}; do
  echo
  echo
  printf "$f:  "
  name=$(mc find s3-ash/restic-backup-logs/$f -name "*$checkdate*" | head -1)
  if [ -n "$name" ]
  then
    echo $name
    # play with sql --query later
    #mc sql --query "select * from S3Object"  --json-input .message_type=summary s3-ash/restic-backup-logs/$f/2019-07-09-1827-restic-backup.json
    mc cat $name  | jq -r 'select(.message_type=="summary")'
  else
    echo "Fail - no file found"
  fi
done

Example run of the minio client against the JSON logs

# ./restic-check-logs.sh

fs-oracle-apps:  s3-ash/restic-backup-logs/fs-oracle-apps/2019-07-12-0928-restic-backup.json
{
  "message_type": "summary",
  "files_new": 291,
  "files_changed": 1,
  "files_unmodified": 678976,
  "dirs_new": 0,
  "dirs_changed": 1,
  "dirs_unmodified": 0,
  "data_blobs": 171,
  "tree_blobs": 2,
  "data_added": 2244824,
  "total_files_processed": 679268,
  "total_bytes_processed": 38808398197,
  "total_duration": 1708.162522559,
  "snapshot_id": "f3e4dc06"
}
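
Individual fields are easy to pull out of the same summary object; for example, a quick one-line status (field names exactly as in the JSON above):

# mc cat $name | jq -r 'select(.message_type=="summary") | "\(.snapshot_id): \(.files_new) new files, \(.data_added) bytes added"'
f3e4dc06: 291 new files, 2244824 bytes added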

Note all of this was done with Oracle Cloud Infrastructure (OCI) object storage. Here are some observations about the OCI S3-compatible object storage.

  1. restic cannot reach both the us-ashburn-1 and us-phoenix-1 regions natively: s3:<tenant>.compat.objectstorage.us-ashburn-1.oraclecloud.com works but s3:<tenant>.compat.objectstorage.us-phoenix-1.oraclecloud.com does NOT. Since restic can use rclone, I am using rclone to access OCI object storage, and rclone can reach both regions.
  2. rclone can reach both regions.
  3. The minio command line client (mc) has the same issue as restic: it can reach us-ashburn-1 but not us-phoenix-1.
  4. The minio Python API can connect to us-ashburn-1 but shows an empty bucket list.


Nov 10

Restic and Oracle OCI Object Storage

It seems that, after some time went by, the OCI S3-compatible object storage interface can now work with restic directly, with no need to use rclone. Tests a few months ago did not work.

Using S3 directly means we may not hit this issue we see when using restic + rclone:
rclone: 2018/11/02 20:04:16 ERROR : data/fa/fadbb4f1d9172a4ecb591ddf5677b0889c16a8b98e5e3329d63aa152e235602e: Didn't finish writing GET request (wrote 9086/15280 bytes): http2: stream closed

This shows how I set up restic against Oracle OCI object storage (no rclone required).

Current restic env pointing to rclone.conf
##########################################

# more /root/.restic-env 
export RESTIC_REPOSITORY="rclone:s3_servers_ashburn:bucket1"
export RESTIC_PASSWORD="blahblah"

# more /root/.config/rclone/rclone.conf 
[s3_servers_phoenix]
type = s3
env_auth = false
access_key_id =  
secret_access_key =  
region = us-phoenix-1
endpoint = <client-id>.compat.objectstorage.us-phoenix-1.oraclecloud.com
location_constraint = 
acl = private
server_side_encryption = 
storage_class = 
[s3_servers_ashburn]
type = s3
env_auth = false
access_key_id =  
secret_access_key = 
region = us-ashburn-1
endpoint = <client-id>.compat.objectstorage.us-ashburn-1.oraclecloud.com
location_constraint =
acl = private
server_side_encryption =

New restic env pointing to S3 style
###################################

# more /root/.restic-env 
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export RESTIC_REPOSITORY="s3:<client-id>.compat.objectstorage.us-ashburn-1.oraclecloud.com/bucket1"
export RESTIC_PASSWORD="blahblah"

# . /root/.restic-env

# /usr/local/bin/restic snapshots
repository 26e5f447 opened successfully, password is correct
ID        Date                 Host             Tags        Directory
----------------------------------------------------------------------
dc9827fd  2018-08-31 21:20:02  server1                      /etc
cb311517  2018-08-31 21:20:04  server1                      /home
f65a3bb5  2018-08-31 21:20:06  server1                      /var
{...}
----------------------------------------------------------------------
36 snapshots
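
For a brand-new bucket, the repository would presumably be initialized the same way as before, just without rclone in the path:

# . /root/.restic-env
# /usr/local/bin/restic init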


Aug 07

Object Storage with Duplicity and Rclone

At this point I prefer using restic for my object storage backup needs, but since I did a POC for duplicity, and specifically for using rclone with duplicity, I am writing down my notes. A good description of duplicity and restic is here:

https://www.backblaze.com/blog/backing-linux-backblaze-b2-duplicity-restic/
We’re highlighting Duplicity and Restic because they exemplify two different philosophical approaches to data backup: “Old School” (Duplicity) vs “New School” (Restic).

Since I am doing my tests with Oracle Cloud Infrastructure (OCI) Object Storage, and so far its Amazon S3 Compatibility Interface does not work out of the box with most tools except rclone, I am using rclone as a backend. With restic, using rclone as a back-end worked pretty smoothly, but duplicity does not have good rclone support, so I used a Python back-end written by Francesco Magno and hosted here: https://github.com/GilGalaad/duplicity-rclone/blob/master/README.md

I had a couple of issues getting duplicity to work with this back-end, so I will show how to get around them.

First:
1. Make sure rclone is working with your rclone config and can at least "ls" your bucket.
2. Set up a GPG key.
3. Copy rclonebackend.py to the duplicity backends folder, in my case /usr/lib64/python2.7/site-packages/duplicity/backends.

# PASSPHRASE="mypassphrase" duplicity --encrypt-key 094CA414 /tmp rclone://mycompany-POC-phoenix:dr01-duplicity
InvalidBackendURL: Syntax error (port) in: rclone://mycompany-POC-phoenix:dr01-duplicity AFalse BNone Cmycompany-POC-phoenix:dr01-duplicity

Duplicity's URL parser treats the colon in the rclone remote name as a host:port separator, hence the syntax error. The hack below exempts the rclone scheme from that check.

## Hack backend.py

# diff /usr/lib64/python2.7/site-packages/duplicity/backend.py /tmp/backend.py 
303c303
<             if not (self.scheme in ['rsync'] and re.search('::[^:]*$', self.url_string) or (self.scheme in ['rclone']) ):
---
>             if not (self.scheme in ['rsync'] and re.search('::[^:]*$', self.url_string)):

# PASSPHRASE="mypassphrase" duplicity --encrypt-key 094CA414 /tmp rclone://mycompany-POC-phoenix:dr01-duplicity
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none
No signatures found, switching to full backup.
--------------[ Backup Statistics ]--------------
StartTime 1533652997.49 (Tue Aug  7 14:43:17 2018)
EndTime 1533653022.35 (Tue Aug  7 14:43:42 2018)
ElapsedTime 24.86 (24.86 seconds)
SourceFiles 50
SourceFileSize 293736179 (280 MB)
NewFiles 50
NewFileSize 136467418 (130 MB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 50
RawDeltaSize 293723433 (280 MB)
TotalDestinationSizeChange 279406571 (266 MB)
Errors 0
-------------------------------------------------

# rclone ls mycompany-POC-phoenix:dr01-duplicity
  1773668 duplicity-full-signatures.20180807T144317Z.sigtar.gpg
      485 duplicity-full.20180807T144317Z.manifest.gpg
209763240 duplicity-full.20180807T144317Z.vol1.difftar.gpg
 69643331 duplicity-full.20180807T144317Z.vol2.difftar.gpg

# PASSPHRASE="mypassphrase" duplicity --encrypt-key 094CA414 collection-status rclone://mycompany-POC-phoenix:dr01-duplicity
Last full backup date: Tue Aug  7 14:43:17 2018
Collection Status
-----------------
Connecting with backend: BackendWrapper
Archive dir: /root/.cache/duplicity/df529824ba5d10f9e31329e440c5efa6

Found 0 secondary backup chains.

Found primary backup chain with matching signature chain:
-------------------------
Chain start time: Tue Aug  7 14:43:17 2018
Chain end time: Tue Aug  7 14:50:12 2018
Number of contained backup sets: 2
Total number of contained volumes: 3
 Type of backup set:                            Time:      Num volumes:
                Full         Tue Aug  7 14:43:17 2018                 2
         Incremental         Tue Aug  7 14:50:12 2018                 1
-------------------------
No orphaned or incomplete backup sets found.


Aug 03

Object Storage with Restic and Rclone

I have been playing around with some options for utilizing object storage for backups. Since I am working on Oracle Cloud Infrastructure (OCI), I am doing my POC with OCI Object Storage. OCI object storage has both Swift and S3 Compatibility APIs to interface with. Of course, if you want commercial backup software, many products can use object storage as a back-end now, so that would be the correct answer. If your needs do not warrant a commercial backup solution, you can try several things. A few options I played with:

1. Bareos server/client with the object storage droplet. Not working reliably; too experimental with droplet?
2. Rclone, using tar piped into rclone's rcat feature (see the sketch after this list). This works well but is not a backup solution, as in incrementals etc.
3. Duplicati. In my case using rclone as the connection, since the S3 interface on OCI did not work.
4. Duplicity. Could not get this one to work against the S3 interface on OCI.
5. Restic. In my case using rclone as the connection, since the S3 interface on OCI did not work.
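
A minimal sketch of the tar + rcat idea from item 2 (remote and bucket names borrowed from the examples below):

# tar -czf - /etc | rclone rcat s3_servers_phoenix:oci02a/etc-$(date +%Y%m%d).tar.gz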

So far Duplicati was not bad but had some bugs; it is beta software, so problems are probably to be expected. Restic is doing a good job so far, and I show a recipe from my POC below:

Setting up rclone and rclone.conf is out of scope here. Make sure you first test that rclone can access your bucket.

Restic binary

# wget https://github.com/restic/restic/releases/download/v0.9.1/restic_0.9.1_linux_amd64.bz2
2018-08-03 10:25:10 (3.22 MB/s) - ‘restic_0.9.1_linux_amd64.bz2’ saved [3786622/3786622]
# bunzip2 restic_0.9.1_linux_amd64.bz2 
# mv restic_0.9.1_linux_amd64 /usr/local/bin/
# chmod +x /usr/local/bin/restic_0.9.1_linux_amd64 
# mv /usr/local/bin/restic_0.9.1_linux_amd64 /usr/local/bin/restic
# /usr/local/bin/restic version
restic 0.9.1 compiled with go1.10.3 on linux/amd64

Initialize repo

# rclone ls s3_servers_phoenix:oci02a
# export RESTIC_PASSWORD="WRHYEjblahblah0VWq5qM"
# /usr/local/bin/restic -r rclone:s3_servers_phoenix:oci02a init
created restic repository 2bcf4f5864 at rclone:s3_servers_phoenix:oci02a

Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is
irrecoverably lost.

# rclone ls s3_servers_phoenix:oci02a
      155 config
      458 keys/530a67c4674b9abf6dcc9e7b75c6b319187cb8c3ed91e6db992a3e2cb862af63

Run a backup

# time /usr/local/bin/restic -r rclone:s3_servers_phoenix:oci02a backup /opt/applmgr/12.2
repository 2bcf4f58 opened successfully, password is correct

Files:       1200934 new,     0 changed,     0 unmodified
Dirs:            2 new,     0 changed,     0 unmodified
Added:      37.334 GiB

processed 1200934 files, 86.311 GiB in 1:31:40
snapshot af4d5598 saved

real	91m40.824s
user	23m4.072s
sys	7m23.715s

# /usr/local/bin/restic -r rclone:s3_servers_phoenix:oci02a snapshots
repository 2bcf4f58 opened successfully, password is correct
ID        Date                 Host              Tags        Directory
----------------------------------------------------------------------
af4d5598  2018-08-03 10:35:45  oci02a              /opt/applmgr/12.2
----------------------------------------------------------------------
1 snapshots

Run second backup

# /usr/local/bin/restic -r rclone:s3_servers_phoenix:oci02a backup /opt/applmgr/12.2
repository 2bcf4f58 opened successfully, password is correct

Files:           0 new,     0 changed, 1200934 unmodified
Dirs:            0 new,     0 changed,     2 unmodified
Added:      0 B  

processed 1200934 files, 86.311 GiB in 47:46
snapshot a158688a saved

Example cron entry

# crontab -l
05 * * * * /usr/local/bin/restic -r rclone:s3_servers_phoenix:oci02a backup -q /usr; /usr/local/bin/restic -r rclone:s3_servers_phoenix:oci02a forget -q --prune --keep-hourly 2 --keep-daily 7
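
Note that cron does not inherit your interactive environment, so RESTIC_PASSWORD has to come from somewhere. One approach is a small wrapper script and pointing the crontab at it; the path and env file below are assumptions:

#!/bin/bash
# /usr/local/bin/restic-nightly.sh -- hypothetical cron wrapper
# source the repo password (export RESTIC_PASSWORD=...) so restic can run unattended
source /root/.restic-env
/usr/local/bin/restic -r rclone:s3_servers_phoenix:oci02a backup -q /usr
/usr/local/bin/restic -r rclone:s3_servers_phoenix:oci02a forget -q --prune --keep-hourly 2 --keep-daily 7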


Mar 29

Rclone and OCI S3 Interface

I am testing rclone against Oracle Cloud Infrastructure object storage and recording what worked for me.

Note I could not get the Swift interface to work with rclone, duplicity, or swiftclient yet, although a straightforward curl does work against the Swift interface.
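
For reference, the curl that did work looked roughly like this; the swiftobjectstorage endpoint format and auth-token basic auth are from memory, so treat it as an assumption:

# list objects in a bucket via the Swift interface (hypothetical user and token)
curl -u '<username>:<auth-token>' https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/<namespace>/repo1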

rclone configuration, generated with rclone config:

# cat /root/.config/rclone/rclone.conf
[s3_backups]
type = s3
env_auth = false
access_key_id = ocid1.credential.oc1..a<redacted>ta
secret_access_key = K<redacted>6s=
region = us-ashburn-1
endpoint = <tenancy>.compat.objectstorage.us-ashburn-1.oraclecloud.com
location_constraint = 
acl = private
server_side_encryption = 
storage_class = 

Issue with max-keys: this problem, although very difficult to track down, was also preventing copy/sync of folders even though copying a single file worked. rclone v1.36 had been installed from the Ubuntu repos; the issue was resolved with a newer version.

# rclone ls s3_backups:repo1
2018/03/29 08:55:44 Failed to ls: InvalidArgument: The 'max-keys' parameter must be between 1 and 1000 (it was 1024) status code: 400, request id: fa704a55-44a8-1146-1b62-688df0366f63

Update and try again.

# curl https://rclone.org/install.sh | sudo bash
[..]
rclone v1.40 has successfully installed.

# rclone -V
rclone v1.40
- os/arch: linux/amd64
- go version: go1.10

# rclone ls s3_backups:repo1
      655 config
       38 hints.3

# rclone copy /root/backup/repo1 s3_backups:repo1

# rclone sync /root/backup/repo1 s3_backups:repo1

# rclone ls s3_backups:repo1
       26 README
      655 config
       38 hints.3
    82138 index.3
  5245384 data/0/1
  3067202 data/0/3

# rclone lsd s3_backups:
          -1 2018-03-27 21:07:11        -1 backups
          -1 2018-03-29 13:39:42        -1 repo1
          -1 2018-03-26 22:23:35        -1 terraform
          -1 2018-03-27 14:34:55        -1 terraform-src

References:
https://rclone.org/docs/
https://docs.us-phoenix-1.oraclecloud.com/api/#/en/s3objectstorage/20160918/
https://blog.selectel.com/rclone-rsync-cloud-storage/

In a future article I will add my testing around BorgBackup + rclone + OCI objectstorage from this interesting idea: https://opensource.com/article/17/10/backing-your-machines-borg
