restic option to configure S3 region

If you find yourself relying on restic with rclone to talk to non-default regions, you may want to check out the just-released restic version 0.9.6. It appears fixed for me when working with Oracle Cloud Infrastructure (OCI) object storage. Below is a test accessing the Phoenix endpoint with the new -o option.

# restic -r s3:<tenancy_name> snapshots -o s3.region="us-phoenix-1"
repository <....> opened successfully, password is correct
ID        Time                 Host                          Tags        Paths
f23784fd  2019-10-27 05:10:02  mytag     /etc

Oracle OCI CLI Query

Some bash snippets showing the OCI CLI --query option together with jq, and how to capture the results into Bash variables.
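The general pattern can be tried locally with a canned JSON document standing in for real oci output (the OCID below is made up for illustration):

```shell
# Canned JSON standing in for the output of an
# `oci ... --query "data[].{id:id}"` call (hypothetical OCID).
json='[{"id":"ocid1.bootvolume.oc1.phx.example"}]'

# jq -r prints the raw string; command substitution captures it in a variable.
SRCBOOTVOLID=$(echo "$json" | jq -r '.[] | .id')
echo "$SRCBOOTVOLID"
```

The snippets below all follow this shape: oci filters with --query, jq pulls out the bare value, and `$(...)` stores it.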

Collect boot volume’s id

SRCBOOTVOLID=$(oci --profile $profile bv boot-volume list --compartment-id "$source_compartment" --availability-domain "$source_ad" --query "data [?\"display-name\" == '$instance_name (Boot Volume)'].{id:id}" | jq -r '.[] | .id')

Collect instance ocid

INSTANCEID=$(oci --profile $profile compute instance launch --availability-domain $target_ad --compartment-id $sandbox_compartment --shape VM.Standard1.1 --display-name "burner-$instance_name-instance-for-custom-image" --source-boot-volume-id $BOOTVOLID --wait-for-state RUNNING --subnet-id $sandbox_subnetid --query "data .{id:id}" | jq -r '. | .id')

Stop instance and collect the id (or whatever you need from the json)

STOPPEDID=$(oci --profile $profile compute instance action --action STOP --instance-id $INSTANCEID --wait-for-state STOPPED --query "data .{id:id}" | jq -r '. | .id')

Collect the work-request-id to monitor in a loop after exporting a custom image to object storage. Note that in this query the field I need is NOT in the data section.

WORKREQUESTID=$(oci --profile $profile compute image export to-object --image-id $IMAGEID --namespace faketenancy --bucket-name DR-Images --name $today-$instance_name-custom-image-object --query '"opc-work-request-id"' --raw-output)

while [ "$RESULT" != "SUCCEEDED" ]; do
  RESULT=$(oci --profile myprofile work-requests work-request get --work-request-id $WORKREQUESTID --query "data .{status:status}" | jq -r '. | .status')
  echo "running export job and $RESULT checking every 2 mins"
  sleep 2m
done
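The polling pattern itself can be exercised without the OCI CLI by stubbing the status check (the threshold of 3 is only for illustration):

```shell
# Poll until a status reaches SUCCEEDED, as the work-request loop above does,
# but with a fake status source so it runs anywhere.
attempts=0
RESULT=""
while [ "$RESULT" != "SUCCEEDED" ]; do
  attempts=$((attempts + 1))
  # Stand-in for: oci work-requests work-request get ... | jq -r '.status'
  if [ "$attempts" -ge 3 ]; then RESULT="SUCCEEDED"; else RESULT="IN_PROGRESS"; fi
done
echo "finished after $attempts checks with status $RESULT"
```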

Restic scripting plus jq and minio client

I am jotting down some recent work on scripting restic, and on using restic’s json output with jq and mc (the minio client).

NOTE: this is not production code, just examples. Use at your own risk. These are edited by hand from real working scripts, so they will probably contain typos. Again, just examples!

Example backup script, plus uploading the json output to an object storage bucket for later analysis.

# cat
source /root/.restic-keys
#rcloneargs="serve restic --stdio --b2-hard-delete --cache-workers 64 --transfers 64 --retries 21"
rundate=$(date +"%Y-%m-%d-%H%M")
logyear=$(date +"%Y")
logmonth=$(date +"%m")

## Backing up some OCI FSS (same as AWS EFS) NFS folders
FSS=(
"fs-oracle-apps|fs-oracle-apps|.snapshot"           ## backup all, exclude .snapshot tree
"fs-app1|fs-app1|.snapshot"                         ## backup all, exclude .snapshot tree
"fs-sw|fs-sw/oracle_sw,fs-sw/restic_pkg|.snapshot"  ## backup two folders, exclude .snapshot tree
"fs-tifs|fs-tifs|.snapshot,.tif"                    ## backup all, exclude .snapshot tree and *.tif files
)
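Each entry is pipe-delimited: share name, comma-separated folders, comma-separated excludes. The split that the backup loop below performs can be seen in isolation like this:

```shell
# One pipe-delimited entry: share|comma-separated folders|comma-separated excludes
fss="fs-sw|fs-sw/oracle_sw,fs-sw/restic_pkg|.snapshot"

# Replace | with spaces and let word splitting build the array:
# [0]=share name, [1]=folder list, [2]=exclude list
arrFSS=(${fss//|/ })

# Expand the comma-separated folder list into /mnt/... backup paths
folders=""
IFS=',' read -ra folderarr <<< "${arrFSS[1]}"
for folder in "${folderarr[@]}"; do folders+="/mnt/${folder} "; done
echo "$folders"
```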

## test commands especially before kicking off large backups
function verify_cmds {
  f=$1
  restic_cmd=$2
  printf "\n$rundate and cmd: $restic_cmd\n"
}

function backup {

 f=$1
 restic_cmd=$2
 jobstart=$(date +"%Y-%m-%d-%H%M")

 mkdir $jsonspool/$f
 ## file name chosen to match the 2019-07-12-0928-restic-backup.json logs shown later
 jsonfile=$jsonspool/$f/$rundate-restic-backup.json
 printf "$jobstart with cmd: $restic_cmd\n"

 mkdir /mnt/$f
 mount -o ro xx.xx.xx.xx:/$f /mnt/$f

 ## TODO: shell issue with passing exclude from variable. verify exclude .snapshot is working
 ## TODO: not passing *.tif exclude fail?  howto pass *?
 $restic_cmd > $jsonfile

 #cat $jsonfile >> $logname-$f.log
 umount /mnt/$f
 rmdir /mnt/$f

## Using rclone to copy to OCI object storage bucket.
## Note the extra level folder so rclone can simulate
## a server/20190711-restic.log style.
## Very useful when using the minio client to analyze logs.
 rclone copy $jsonspool s3_ash:restic-backup-logs

 rm $jsonfile
 rmdir $jsonspool/$f

 jobfinish=$(date +"%Y-%m-%d-%H%M")
 printf "jobfinish $jobfinish\n"
}

for fss in "${FSS[@]}"; do
 arrFSS=(${fss//|/ })
 f=${arrFSS[0]}
 folders=""
 excludearg=""

 IFS=',' read -ra folderarr <<< ${arrFSS[1]}
 for folder in ${folderarr[@]}; do folders+="/mnt/${folder} "; done

 IFS=',' read -ra excludearr <<< ${arrFSS[2]}
 for exclude in ${excludearr[@]}; do excludearg+=" --exclude ${exclude}"; done

 backup_cmd="$resticprog -r rclone:$region:restic-$f backup ${folders} $excludearg --json"

## play with verify_cmds first before actual backups
 verify_cmds "$f" "$backup_cmd"
 #backup "$f" "$backup_cmd"
done

Since we have json logs in object storage, let’s check some of them with the minio client.

# cat


checkdate=$(date +"%Y-%m-%d")

for f in ${fss[@]}; do
  printf "$f:  "
  name=$(mc find s3-ash/restic-backup-logs/$f -name "*$checkdate*" | head -1)
  if [ -n "$name" ]; then
    echo $name
    # play with sql --query later
    #mc sql --query "select * from S3Object"  --json-input .message_type=summary s3-ash/restic-backup-logs/$f/2019-07-09-1827-restic-backup.json
    mc cat $name | jq -r 'select(.message_type=="summary")'
  else
    echo "Fail - no file found"
  fi
done

Example run of minio client against json

# ./

fs-oracle-apps:  s3-ash/restic-backup-logs/fs-oracle-apps/2019-07-12-0928-restic-backup.json
{
  "message_type": "summary",
  "files_new": 291,
  "files_changed": 1,
  "files_unmodified": 678976,
  "dirs_new": 0,
  "dirs_changed": 1,
  "dirs_unmodified": 0,
  "data_blobs": 171,
  "tree_blobs": 2,
  "data_added": 2244824,
  "total_files_processed": 679268,
  "total_bytes_processed": 38808398197,
  "total_duration": 1708.162522559,
  "snapshot_id": "f3e4dc06"
}

Note all of this was done with Oracle Cloud Infrastructure (OCI) object storage. Here are some observations around the OCI S3 compatible object storage.

  1. restic can not reach both us-ashburn-1 and us-phoenix-1 regions natively: the us-ashburn-1 endpoint (s3:<tenant>) works but the us-phoenix-1 endpoint (s3:<tenant>) does NOT. Since restic can use rclone, I am using rclone to access OCI object storage, and rclone can reach both regions.
  2. rclone can reach both regions.
  3. The minio command line client (mc) has the same issue as restic: it can reach us-ashburn-1 but not us-phoenix-1.
  4. The minio python API can connect to us-ashburn-1 but shows an empty bucket list.

Restic and Oracle OCI Object Storage

It seems that, after some time, the OCI S3-compatible object storage interface can now work with restic directly, with no need for rclone. When I tested a few months ago this did not work.

Using S3 directly means we may not hit this issue we see when using restic + rclone:
rclone: 2018/11/02 20:04:16 ERROR : data/fa/fadbb4f1d9172a4ecb591ddf5677b0889c16a8b98e5e3329d63aa152e235602e: Didn't finish writing GET request (wrote 9086/15280 bytes): http2: stream closed

This shows how I set up restic against Oracle OCI object storage (no rclone required).

Current restic env pointing to rclone.conf

# more /root/.restic-env 
export RESTIC_REPOSITORY="rclone:s3_servers_ashburn:bucket1"
export RESTIC_PASSWORD="blahblah"

# more /root/.config/rclone/rclone.conf 
[s3_servers_phoenix]
type = s3
env_auth = false
access_key_id =  
secret_access_key =  
region = us-phoenix-1
endpoint = <client-id>
location_constraint = 
acl = private
server_side_encryption = 
storage_class = 

[s3_servers_ashburn]
type = s3
env_auth = false
access_key_id =  
secret_access_key = 
region = us-ashburn-1
endpoint = <client-id>
location_constraint =
acl = private
server_side_encryption =

New restic env pointing to S3 style

# more /root/.restic-env 
export RESTIC_REPOSITORY="s3:<client-id>"
export RESTIC_PASSWORD="blahblah"

# . /root/.restic-env

# /usr/local/bin/restic snapshots
repository 26e5f447 opened successfully, password is correct
ID        Date                 Host             Tags        Directory
dc9827fd  2018-08-31 21:20:02  server1                      /etc
cb311517  2018-08-31 21:20:04  server1                      /home
f65a3bb5  2018-08-31 21:20:06  server1                      /var
36 snapshots

Object Storage with Duplicity and Rclone

At this point I prefer restic for my object storage backup needs, but since I did a POC for duplicity, and specifically for using rclone with duplicity, I am writing down my notes. A good description of duplicity and restic here:

Backing Up Linux to Backblaze B2 with Duplicity and Restic

We’re highlighting Duplicity and Restic because they exemplify two different philosophical approaches to data backup: “Old School” (Duplicity) vs “New School” (Restic).

Since I am doing my tests with Oracle Cloud Infrastructure (OCI) Object Storage, and so far its Amazon S3 Compatibility Interface does not work out of the box with most tools except rclone, I am using rclone as a backend. With restic, using rclone as a back-end worked pretty smoothly, but duplicity does not have good rclone support, so I used a python back-end written by Francesco Magno and hosted here:

I had a couple of issues getting duplicity to work with this back-end, so I will show how to get around them.

1. Make sure rclone is working with your rclone config and can at least “ls” your bucket.
2. Set up a gpg key.
3. Copy the back-end file to the duplicity backends folder. In my case /usr/lib64/python2.7/site-packages/duplicity/backends

# PASSPHRASE="mypassphrase" duplicity --encrypt-key 094CA414 /tmp rclone://mycompany-POC-phoenix:dr01-duplicity
InvalidBackendURL: Syntax error (port) in: rclone://mycompany-POC-phoenix:dr01-duplicity AFalse BNone Cmycompany-POC-phoenix:dr01-duplicity

## Hack

# diff /usr/lib64/python2.7/site-packages/duplicity/ /tmp/
<             if not (self.scheme in ['rsync'] and'::[^:]*$', self.url_string) or (self.scheme in ['rclone'])):
---
>             if not (self.scheme in ['rsync'] and'::[^:]*$', self.url_string)):
# PASSPHRASE="mypassphrase" duplicity --encrypt-key 094CA414 /tmp rclone://mycompany-POC-phoenix:dr01-duplicity
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none
No signatures found, switching to full backup.
--------------[ Backup Statistics ]--------------
StartTime 1533652997.49 (Tue Aug  7 14:43:17 2018)
EndTime 1533653022.35 (Tue Aug  7 14:43:42 2018)
ElapsedTime 24.86 (24.86 seconds)
SourceFiles 50
SourceFileSize 293736179 (280 MB)
NewFiles 50
NewFileSize 136467418 (130 MB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 50
RawDeltaSize 293723433 (280 MB)
TotalDestinationSizeChange 279406571 (266 MB)
Errors 0

# rclone ls mycompany-POC-phoenix:dr01-duplicity
  1773668 duplicity-full-signatures.20180807T144317Z.sigtar.gpg
      485 duplicity-full.20180807T144317Z.manifest.gpg
209763240 duplicity-full.20180807T144317Z.vol1.difftar.gpg
 69643331 duplicity-full.20180807T144317Z.vol2.difftar.gpg

# PASSPHRASE="mypassphrase" duplicity --encrypt-key 094CA414 collection-status rclone://mycompany-POC-phoenix:dr01-duplicity
Last full backup date: Tue Aug  7 14:43:17 2018
Collection Status
Connecting with backend: BackendWrapper
Archive dir: /root/.cache/duplicity/df529824ba5d10f9e31329e440c5efa6

Found 0 secondary backup chains.

Found primary backup chain with matching signature chain:
Chain start time: Tue Aug  7 14:43:17 2018
Chain end time: Tue Aug  7 14:50:12 2018
Number of contained backup sets: 2
Total number of contained volumes: 3
 Type of backup set:                            Time:      Num volumes:
                Full         Tue Aug  7 14:43:17 2018                 2
         Incremental         Tue Aug  7 14:50:12 2018                 1
No orphaned or incomplete backup sets found.

Object Storage with Restic and Rclone

I have been playing around with some options to utilize Object Storage for backups. Since I am working on Oracle Cloud Infrastructure (OCI), I am doing my POC using OCI Object Storage. OCI object storage has Swift and S3 Compatibility APIs to interface with. Of course, if you want commercial backup software, many products can use object storage as a back-end now, so that would be the correct answer. If your needs do not warrant a commercial backup solution, you can try several things. A few options I played with:

1. Bareos server/client with the object storage droplet. Not working reliably. Too experimental with droplet?
2. Rclone, using tar to pipe into rclone’s rcat feature. This works well but is not a backup solution as in incrementals etc.
3. Duplicati. In my case using rclone as the connection since the S3 interface on OCI did not work.
4. Duplicity. Could not get this one to work with the S3 interface on OCI.
5. Restic. In my case using rclone as the connection since the S3 interface on OCI did not work.
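Option 2 above, tar piped into rclone rcat, looks like this in practice. The sketch below demonstrates the pipe pattern locally so it runs without a configured remote; the real rclone command is shown as a comment, and the remote/bucket names in it are placeholders:

```shell
# The tar-to-rclone pipe from option 2, demonstrated locally. In real use,
# replace the `cat > "$dest"` stage with something like:
#   rclone rcat s3_backups:adhoc/etc-$(date +%Y%m%d).tar.gz
# (remote and bucket names are placeholders from a hypothetical rclone.conf).
dest=/tmp/rcat-demo.tar.gz
tar czf - /etc/hostname 2>/dev/null | cat > "$dest"

# The archive arrives intact on the other side of the pipe.
tar tzf "$dest"
```

Handy for ad-hoc dumps, but as noted there is no incremental or dedup story here.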

So far duplicati was not bad but had some bugs. It is beta software, so one should probably expect problems. Restic is doing a good job so far, and I show a recipe from my POC below:

Out of scope is setting up rclone and rclone.conf. Make sure you first test that rclone can access your bucket.

Restic binary

# wget
2018-08-03 10:25:10 (3.22 MB/s) - ‘restic_0.9.1_linux_amd64.bz2’ saved [3786622/3786622]
# bunzip2 restic_0.9.1_linux_amd64.bz2 
# mv restic_0.9.1_linux_amd64 /usr/local/bin/
# chmod +x /usr/local/bin/restic_0.9.1_linux_amd64 
# mv /usr/local/bin/restic_0.9.1_linux_amd64 /usr/local/bin/restic
# /usr/local/bin/restic version
restic 0.9.1 compiled with go1.10.3 on linux/amd64

Initialize repo

# rclone ls s3_servers_phoenix:oci02a
# export RESTIC_PASSWORD="WRHYEjblahblah0VWq5qM"
# /usr/local/bin/restic -r rclone:s3_servers_phoenix:oci02a init
created restic repository 2bcf4f5864 at rclone:s3_servers_phoenix:oci02a

Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is
irrecoverably lost.

# rclone ls s3_servers_phoenix:oci02a
      155 config
      458 keys/530a67c4674b9abf6dcc9e7b75c6b319187cb8c3ed91e6db992a3e2cb862af63

Run a backup

# time /usr/local/bin/restic -r rclone:s3_servers_phoenix:oci02a backup /opt/applmgr/12.2
repository 2bcf4f58 opened successfully, password is correct

Files:       1200934 new,     0 changed,     0 unmodified
Dirs:            2 new,     0 changed,     0 unmodified
Added:      37.334 GiB

processed 1200934 files, 86.311 GiB in 1:31:40
snapshot af4d5598 saved

real	91m40.824s
user	23m4.072s
sys	7m23.715s

# /usr/local/bin/restic -r rclone:s3_servers_phoenix:oci02a snapshots
repository 2bcf4f58 opened successfully, password is correct
ID        Date                 Host              Tags        Directory
af4d5598  2018-08-03 10:35:45  oci02a              /opt/applmgr/12.2
1 snapshots

Run second backup

# /usr/local/bin/restic -r rclone:s3_servers_phoenix:oci02a backup /opt/applmgr/12.2
repository 2bcf4f58 opened successfully, password is correct

Files:           0 new,     0 changed, 1200934 unmodified
Dirs:            0 new,     0 changed,     2 unmodified
Added:      0 B  

processed 1200934 files, 86.311 GiB in 47:46
snapshot a158688a saved

Example cron entry

# crontab -l
05 * * * * /usr/local/bin/restic -r rclone:s3_servers_phoenix:oci02a backup -q /usr; /usr/local/bin/restic -r rclone:s3_servers_phoenix:oci02a forget -q --prune --keep-hourly 2 --keep-daily 7

Rclone and OCI S3 Interface

I am testing rclone against Oracle Cloud Infrastructure (OCI) object storage and recording what worked for me.

Note I could not get the swift interface to work with rclone, duplicity or swiftclient yet, although straightforward curl requests do work against the swift interface.

rclone configuration generated with rclone config

# cat /root/.config/rclone/rclone.conf
[s3_backups]
type = s3
env_auth = false
access_key_id = ocid1.credential.oc1..a<redacted>ta
secret_access_key = K<redacted>6s=
region = us-ashburn-1
endpoint = <tenancy>
location_constraint = 
acl = private
server_side_encryption = 
storage_class = 

Issue with max-keys. This problem, although very difficult to track down, was also preventing copy/sync of folders even though a single file worked. rclone v1.36 was installed from the Ubuntu repos; the issue was resolved with a newer version.

# rclone ls s3_backups:repo1
2018/03/29 08:55:44 Failed to ls: InvalidArgument: The 'max-keys' parameter must be between 1 and 1000 (it was 1024) status code: 400, request id: fa704a55-44a8-1146-1b62-688df0366f63

Update and try again.

# curl | sudo bash
rclone v1.40 has successfully installed.

# rclone -V
rclone v1.40
- os/arch: linux/amd64
- go version: go1.10

# rclone ls s3_backups:repo1
      655 config
       38 hints.3

# rclone copy /root/backup/repo1 s3_backups:repo1

# rclone sync /root/backup/repo1 s3_backups:repo1

# rclone ls s3_backups:repo1
       26 README
      655 config
       38 hints.3
    82138 index.3
  5245384 data/0/1
  3067202 data/0/3

# rclone lsd s3_backups:
          -1 2018-03-27 21:07:11        -1 backups
          -1 2018-03-29 13:39:42        -1 repo1
          -1 2018-03-26 22:23:35        -1 terraform
          -1 2018-03-27 14:34:55        -1 terraform-src


Rclone: Rsync for Cloud Storage

In a future article I will add my testing around BorgBackup + rclone + OCI object storage from this interesting idea:

Python3 and pip

I am converting some scripts to python3 and noticed the pip modules in use for python2 need to be added for python3. I am not using virtualenv so below is my fix on Ubuntu 17.10.

Missing module oci.

$ python3 -t ocid1.tenancy.oc1..aa...mn55ca
Traceback (most recent call last):
  File "", line 14, in <module>
    import oci,optparse,os
ModuleNotFoundError: No module named 'oci'

Python2 module is there.

$ pip list --format=columns | grep oci
oci             1.3.14 

Ubuntu has python3-pip

$ sudo apt install python3-pip
$ pip3 install oci
$ pip3 list --format=columns | grep oci
oci                   1.3.14   

Check my converted script.

$ python3 -t ocid1.tenancy.oc1..aaaaaa...5ca
OCI Details: 0.9.7

OCI VPN Server PriTunl for clients

Sometimes you need more than a bastion for reaching your cloud resources. Bastions are great for SSH and RDP tunneling but are really limited to admins and administration. Of course, site-to-site can be solved with OCI CPE and tunnels between colo/client networks.

There are several options for VPN servers, and I use LibreSwan for testing site-to-site OCI tenancy VPN tunnels. LibreSwan could also work when many users need access to cloud resources, but it is not easy to administer users with it.

So this time I tried a product called Pritunl.

You should be able to use normal OpenVPN and, I think, even IPsec clients to connect. Pritunl also provides clients, but ideally you should be able to use anything generic.

Admins can easily add users and send an import file which includes your cert etc. For me this worked well under Linux just using the generic network manager openvpn plugin, but I still need to verify Windows and Macs.

$ sudo -s
# tee -a /etc/yum.repos.d/mongodb-org-3.4.repo << EOF
> [mongodb-org-3.4]
> name=MongoDB Repository
> baseurl=
> gpgcheck=1
> enabled=1
> gpgkey=
> EOF

# tee -a /etc/yum.repos.d/pritunl.repo << EOF
> [pritunl]
> name=Pritunl Repository
> baseurl=
> gpgcheck=1
> enabled=1
> EOF

# yum -y install epel-release

# grep disabled /etc/selinux/config 
#     disabled - No SELinux policy is loaded.

# gpg --keyserver hkp:// --recv-keys 7568D9BB55FF9E5287D586017AE645C0CF8E292A
# gpg --armor --export 7568D9BB55FF9E5287D586017AE645C0CF8E292A > key.tmp; sudo rpm --import key.tmp; rm -f key.tmp
# yum -y install pritunl mongodb-org

# systemctl start mongod pritunl
# systemctl enable mongod pritunl
Created symlink from /etc/systemd/system/ to /etc/systemd/system/pritunl.service.

Connect to web interface…

# firewall-cmd --zone=public --permanent --add-port=12991/udp
# systemctl restart firewalld

On the VPN server: removed the route and added the new one.
Installed network-manager-openvpn on my Linux desktop and imported the file exported on the VPN server.
Connected to the VPN server.

# ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=63 time=46.4 ms

$ ssh -i /media/ssh-keys/OBMCS opc@
Last login: Fri Dec 15 16:50:24 2017

OCI (OBMCS) and Libreswan

Recently I wanted to test the Oracle Cloud Infrastructure (OCI) CPE (Customer Premises Equipment) networking, using an IPsec VPN tunnel.  The online documentation covers quite a few popular vendors like Check Point, Cisco, Fortigate, Juniper, and Palo Alto.  Since I did not have quick access to any off-the-shelf VPN appliance, I used the popular open source software Libreswan.

In addition I wanted to make this work to an OCI tenancy and not just a public VPN server.  It may not necessarily apply to any real world use cases but I wanted to test it.

Link of OCI CPE/IPsec documentation:

Below are notes on getting the Libreswan config configured to match what the OCI tunnel requires.  Note that once the VPN link is established you may still need to work on security lists, route tables, routes, DRG’s to pass traffic behind the VPN endpoints.

Endpoint A: OCI tenancy using the CPE/IPsec setup
Endpoint B: OCI tenancy using a Libreswan server in a public subnet.  Typically this would be a customer endpoint VPN server on premises or in a colo.  Also note that an instance on OCI with a public address is not a true public server: it sits behind a firewall, and the instance has a non-routable address in the operating system with no public interface.  So the Libreswan side follows a kind of NAT setup, as you can see from the right side being a 10.x address.

Start off by setting up the CPE (public IP address), DRG and IPsec tunnel from the OCI console.  In this case the public IP address for the CPE will be the Libreswan Linux server, endpoint B.  The OCI IPsec tunnel will provide you three IP addresses and shared secrets.  We will just use one of the three for our test.

Install from standard repo: 

[root@vpn01 opc]# yum install openswan lsof

Set some required kernel settings and firewall rules: 

[root@vpn01 opc]# for s in /proc/sys/net/ipv4/conf/*; do echo 0 > $s/send_redirects; echo 0 > $s/accept_redirects; done
[root@vpn01 opc]# cat /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.ens3.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.all.rp_filter = 0 
net.ipv4.conf.ip_vti0.rp_filter = 0
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.conf.default.log_martians = 0

[root@vpn01 opc]# sysctl -p
[root@vpn01 opc]# firewall-cmd --zone=public --add-port=500/udp --permanent
[root@vpn01 opc]# firewall-cmd --zone=public --add-port=4500/tcp --permanent
[root@vpn01 opc]# firewall-cmd --zone=public --add-port=4500/udp --permanent
[root@vpn01 opc]# firewall-cmd --permanent --direct --passthrough ipv4 -t nat -I POSTROUTING -o eth0 -j MASQUERADE -s

Test a reachable host on a private network behind endpoint B:

[root@vpn01 opc]# ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.164 ms

Per Oracle documentation, the IPsec tunnel requirements are as follows:

ISAKMP Policy Options
ISAKMP Protocol version 1
Exchange type: Main mode
Authentication method: pre-shared-keys
Encryption: AES-128-cbc, AES-192-cbc, AES-256-cbc
Authentication algorithm: SHA-256, SHA-384
Diffie-Hellman group: group 1, group 2, group 5
IKE session key lifetime: 28800 seconds (8 hours)

IPSec Policy Options
IPSec protocol: ESP, tunnel-mode
Encryption: AES-128-cbc, AES-192-cbc, AES-256-cbc
Authentication algorithm: HMAC-SHA1-96
IPSec session key lifetime: 3600 seconds (1 hour)
Perfect Forward Secrecy (PFS): enabled, group 5
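As a sketch, those policy options map onto a Libreswan conn roughly like this. Addresses and subnets are placeholders, and this is an illustration of the option names rather than the author's exact U.conf; note that per the pluto log further down, the proposal that actually negotiated in my test was AES-256/SHA1/MODP1024 (DH group 2):

```
conn V-Testing
    authby=secret
    auto=start
    # Phase 1 (ISAKMP): AES-256-cbc, SHA1, DH group 2, 8 hour lifetime
    ike=aes256-sha1;modp1024
    ikelifetime=28800s
    # Phase 2 (IPsec): ESP tunnel mode, AES-256-cbc, HMAC-SHA1-96, 1 hour lifetime
    phase2=esp
    phase2alg=aes256-sha1;modp1024
    salifetime=3600s
    # PFS enabled, group taken from phase2alg above
    pfs=yes
    # left = the Libreswan server (private address behind the OCI NAT)
    left=%defaultroute
    leftid=<CPE public IP>
    leftsubnet=<local private CIDR>
    # right = the OCI-provided IPsec tunnel endpoint
    right=<OCI tunnel IP>
    rightsubnet=<remote VCN CIDR>
```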

Setup a new conf and secrets file:

[root@vpn01 opc]# cat /etc/ipsec.d/U.conf
conn V-Testing

[root@vpn01 opc]# cat /etc/ipsec.d/U.secrets
 : PSK "place_your_shared_key_here"

[root@vpn01 opc]# systemctl start ipsec
[root@vpn01 opc]# systemctl enable ipsec
[root@vpn01 opc]# ipsec verify
Verifying installed system and configuration files

For reference some initial pluto.log entries used during debugging to get the options matched to OCI. Plus reference links:

initiating Quick Mode PSK+ENCRYPT+TUNNEL+PFS+UP+IKEV1_ALLOW+IKEV2_ALLOW+SAREF_TRACK+IKE_FRAG_ALLOW+ESN_NO {using isakmp#15 msgid:08137451 proposal=AES(12)_128-SHA1(2) pfsgroup=MODP1024}


000 "V-Testing":   conn_prio: 30,30; interface: ens3; metric: 0; mtu: unset; sa_prio:auto; sa_tfc:none;
000 "V-Testing":   nflog-group: unset; mark: unset; vti-iface:unset; vti-routing:no; vti-shared:no;
000 "V-Testing":   dpd: action:hold; delay:3; timeout:10; nat-t: encaps:auto; nat_keepalive:yes; ikev1_natt:drafts
000 "V-Testing":   newest ISAKMP SA: #0; newest IPsec SA: #0;
000 "V-Testing":   IKE algorithms wanted: AES_CBC(7)_256-SHA1(2)-MODP1024(2)
000 "V-Testing":   IKE algorithms found:  AES_CBC(7)_256-SHA1(2)-MODP1024(2)
000 "V-Testing":   ESP algorithms wanted: AES(12)_256-SHA1(2); pfsgroup=MODP1024(2)
000 "V-Testing":   ESP algorithms loaded: AES(12)_256-SHA1(2)

000 Total IPsec connections: loaded 1, active 1
000 State Information: DDoS cookies not required, Accepting new IKE connections
000 IKE SAs: total(1), half-open(0), open(0), authenticated(1), anonymous(0)
000 IPsec SAs: total(1), authenticated(1), anonymous(0)

For reference some pluto.log entries used during debugging:

[root@vpn01 opc]# tail -f /var/log/pluto.log 
Nov  4 18:41:17: | setup callback for interface lo:500 fd 19
Nov  4 18:41:17: | setup callback for interface ens3:4500 fd 18
Nov  4 18:41:17: | setup callback for interface ens3:500 fd 17
Nov  4 18:41:17: loading secrets from "/etc/ipsec.secrets"
Nov  4 18:41:17: loading secrets from "/etc/ipsec.d/U.secrets"
Nov  4 18:41:17: "V-Testing": route-client output: /usr/libexec/ipsec/_updown.netkey: doroute "ip route replace via dev ens3  src" failed (RTNETLINK answers: Network is unreachable)
Nov  4 18:41:17: "V-Testing" #1: initiating Main Mode
Nov  4 18:41:18: assign_holdpass() delete_bare_shunt() failed
Nov  4 18:41:18: initiate_ondemand_body() failed to install negotiation_shunt,
Nov  4 18:41:18: initiate on demand from to proto=1 because: acquire

Not sure if this route was necessary or not, but showing it for reference.  Pretty sure it is not needed:

[root@vpn01 opc]# route add -net gw
[root@vpn01 opc]# ip route
default via dev ens3
 dev ens3 proto kernel scope link src
 via dev ens3
 dev ens3 proto static scope link
 dev ens3 scope link metric 1002

Some ping tests for reference showing passing traffic:

[root@vpn01 opc]# ifconfig ens3
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        inet  netmask  broadcast

[root@vpn01 opc]# ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.460 ms

[root@gw01 opc]# ifconfig ens3
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000

[root@gw01 opc]# ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.424 ms

After tuning security lists, route tables, DRG’s, routes etc some ping tests for reference showing passing traffic on private subnets behind endpoints:

[root@client01 opc]# ifconfig ens3
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        inet  netmask  broadcast

[root@client01 opc]# ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=63 time=0.566 ms

[root@gw01 opc]# ifconfig ens3
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        inet  netmask  broadcast

[root@gw01 opc]# ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=63 time=0.638 ms