restic option to configure S3 region

If you have been relying on restic's rclone backend to reach non-default regions, you may want to check out the just-released restic version 0.9.6. For me the region issue appears fixed when working with Oracle Cloud Infrastructure (OCI) object storage. Below is a test accessing the Phoenix endpoint with the new -o option.

# restic -r s3:<tenancy_name>.compat.objectstorage.us-phoenix-1.oraclecloud.com/restic-backups snapshots -o s3.region="us-phoenix-1"
repository <....> opened successfully, password is correct
ID        Time                 Host                          Tags        Paths
----------------------------------------------------------------------------------------
f23784fd  2019-10-27 05:10:02  host01.domain.com  mytag     /etc
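
The same -o option should work for other commands like backup, not just snapshots. A quick sketch, assuming the same repository and the usual S3-style credentials already exported in the environment:

# restic -r s3:<tenancy_name>.compat.objectstorage.us-phoenix-1.oraclecloud.com/restic-backups backup /etc --tag mytag -o s3.region="us-phoenix-1"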

OCI VPN Server: Pritunl for clients

Sometimes you need more than a bastion for reaching your cloud resources. Bastions are great for SSH and RDP tunneling but are really limited to admins and administration. Site-to-site connectivity, of course, can be solved with OCI CPE and IPsec tunnels between colo/client networks.

There are several options for VPN servers; I use Libreswan for testing site-to-site OCI tenancy VPN tunnels. Libreswan could also work when many users need access to cloud resources, but administering users with it is not easy.

So this time I tried a product called Pritunl (https://pritunl.com/).

You should be able to use normal OpenVPN and, I think, even IPsec clients to connect. Pritunl also provides its own clients, but ideally you should just be able to use anything generic.

An admin can easily add users and send an import file which includes the user's certificate and so on. For me this worked well under Linux just using the generic NetworkManager OpenVPN plugin, but I still need to verify Windows and Macs as well.

https://docs.pritunl.com/docs/installation

$ sudo -s
# tee -a /etc/yum.repos.d/mongodb-org-3.4.repo << EOF
> [mongodb-org-3.4]
> name=MongoDB Repository
> baseurl=https://repo.mongodb.org/yum/redhat/7/mongodb-org/3.4/x86_64/
> gpgcheck=1
> enabled=1
> gpgkey=https://www.mongodb.org/static/pgp/server-3.4.asc
> EOF
[mongodb-org-3.4]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/7/mongodb-org/3.4/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.4.asc

# tee -a /etc/yum.repos.d/pritunl.repo << EOF
> [pritunl]
> name=Pritunl Repository
> baseurl=https://repo.pritunl.com/stable/yum/centos/7/
> gpgcheck=1
> enabled=1
> EOF
[pritunl]
name=Pritunl Repository
baseurl=https://repo.pritunl.com/stable/yum/centos/7/
gpgcheck=1
enabled=1

# yum -y install epel-release
[snip]
Complete!

# grep disabled /etc/selinux/config 
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
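
Editing /etc/selinux/config only takes effect at the next boot; to drop enforcement immediately on a running system (standard SELinux tooling, not captured in my original notes):

# setenforce 0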

# gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys 7568D9BB55FF9E5287D586017AE645C0CF8E292A
# gpg --armor --export 7568D9BB55FF9E5287D586017AE645C0CF8E292A > key.tmp; sudo rpm --import key.tmp; rm -f key.tmp
# yum -y install pritunl mongodb-org

# systemctl start mongod pritunl
# systemctl enable mongod pritunl
Created symlink from /etc/systemd/system/multi-user.target.wants/pritunl.service to /etc/systemd/system/pritunl.service.

Connect to web interface…
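
Per the Pritunl docs, the first visit to the web interface asks for a setup key and then for default credentials; hedging slightly on versions, both should be retrievable on the server with:

# pritunl setup-key
# pritunl default-password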

# firewall-cmd --zone=public --permanent --add-port=12991/udp
success
# systemctl restart firewalld

On the VPN server I removed the 0.0.0.0/0 route and added 10.1.0.0/16, so only cloud traffic is pushed through the tunnel. I then installed network-manager-openvpn on my Linux desktop, imported the file exported from the VPN server, and connected to the VPN server.

# ping 10.1.1.7
PING 10.1.1.7 (10.1.1.7) 56(84) bytes of data.
64 bytes from 10.1.1.7: icmp_seq=1 ttl=63 time=46.4 ms

$ ssh -i /media/ssh-keys/OBMCS opc@10.1.1.7
Last login: Fri Dec 15 16:50:24 2017

OCI (OBMCS) and Libreswan

Recently I wanted to test Oracle Cloud Infrastructure (OCI) CPE (Customer Premises Equipment) networking using an IPsec VPN tunnel. The online documentation covers quite a few popular vendors like Check Point, Cisco, FortiGate, Juniper, and Palo Alto. Since I did not have quick access to any off-the-shelf VPN appliance, I used the popular open source software Libreswan.

In addition, I wanted to make this work with an OCI tenancy on both ends, not just against a public VPN server. It may not necessarily apply to any real-world use cases, but I wanted to test it.

Link to the OCI CPE/IPsec documentation: https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/configuringCPE.htm?Highlight=ipsec

Below are notes on getting the Libreswan config to match what the OCI tunnel requires. Note that once the VPN link is established you may still need to work on security lists, route tables, routes, and DRGs to pass traffic behind the VPN endpoints.

Endpoint A: OCI tenancy using the CPE/IPsec setup
Endpoint B: OCI tenancy using a Libreswan server in a public subnet. Typically this would instead be a customer VPN server on premises or in a colo. Also note that an OCI instance with a public address is not a true public server: it hides behind a firewall, and inside the operating system the instance has only a non-routable address with no public interface. So Libreswan is effectively following a kind of NAT setup, which is why the right side of the config below uses a 10. address.

Start off by setting up the CPE (public IP address), DRG, and IPsec tunnel from the OCI console. In this case the public IP address for the CPE will be the Libreswan Linux server at endpoint B. The OCI IPsec tunnel will provide you with three IP addresses and shared secrets; we will use just one of the three for our test.

Install from the standard repo (on EL7 the openswan package name pulls in libreswan, which obsoletes it):

[root@vpn01 opc]# yum install openswan lsof

Set some required kernel settings and firewall rules: 

[root@vpn01 opc]# for s in /proc/sys/net/ipv4/conf/*; do echo 0 > $s/send_redirects; echo 0 > $s/accept_redirects; done
[root@vpn01 opc]# cat /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.ens3.rp_filter = 0
#IPSec
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.all.rp_filter = 0 
net.ipv4.conf.ip_vti0.rp_filter = 0
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.conf.default.log_martians = 0

[root@vpn01 opc]# sysctl -p
[root@vpn01 opc]# firewall-cmd --zone=public --add-port=500/udp --permanent
success
[root@vpn01 opc]# firewall-cmd --zone=public --add-port=4500/tcp --permanent
success
[root@vpn01 opc]# firewall-cmd --zone=public --add-port=4500/udp --permanent
success
[root@vpn01 opc]# firewall-cmd --permanent --direct --passthrough ipv4 -t nat -I POSTROUTING -o eth0 -j MASQUERADE -s 10.0.0.0/16
success
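
Since all of the above rules were added with --permanent, they only take effect in the running firewall after a reload (not captured in my notes):

[root@vpn01 opc]# firewall-cmd --reload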

Test a reachable host on a private network behind endpoint B:

[root@vpn01 opc]# ping 10.0.5.7
PING 10.0.5.7 (10.0.5.7) 56(84) bytes of data.
64 bytes from 10.0.5.7: icmp_seq=1 ttl=64 time=0.164 ms

Per the Oracle documentation, the IPsec tunnel requirements are as follows:

ISAKMP Policy Options
ISAKMP Protocol version 1
Exchange type: Main mode
Authentication method: pre-shared-keys
Encryption: AES-128-cbc, AES-192-cbc, AES-256-cbc
Authentication algorithm: SHA-256, SHA-384
Diffie-Hellman group: group 1, group 2, group 5
IKE session key lifetime: 28800 seconds (8 hours)

IPSec Policy Options
IPSec protocol: ESP, tunnel-mode
Encryption: AES-128-cbc, AES-192-cbc, AES-256-cbc
Authentication algorithm: HMAC-SHA1-96
IPSec session key lifetime: 3600 seconds (1 hour)
Perfect Forward Secrecy (PFS): enabled, group 5
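
My reading of how these requirements map onto the Libreswan options used in the conn below, per the ipsec.conf man page (modp1536 is Diffie-Hellman group 5 per RFC 3526):

ike=aes_cbc256-sha1;modp1536          # ISAKMP encryption/auth/DH group
ikelifetime=28800s                    # IKE session key lifetime (8 hours)
phase2alg=aes_cbc256-sha1;modp1536    # ESP encryption/auth and PFS group
pfs=yes                               # Perfect Forward Secrecy enabled
salifetime=3600s                      # IPsec session key lifetime (1 hour)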

Set up a new conf and secrets file. As I read the conn below, 1.1.1.1 stands in for the Oracle-provided IPsec tunnel endpoint (left), 2.2.2.2 for the CPE public IP assigned to the Libreswan server, and 10.0.4.3 is that server's private address behind OCI's NAT (right):

[root@vpn01 opc]# cat /etc/ipsec.d/U.conf
conn V-Testing
  authby=secret
  keyexchange=ike
  ike=aes_cbc256-sha1;modp1536
  ikelifetime=28800s
  #ike-frag=no
  ikev2=no
  #nat-ikev1-method=drafts
  phase2=esp
  phase2alg=aes_cbc256-sha1;modp1536
  pfs=yes
  salifetime=3600s
  sareftrack=no
  #dpdtimeout=10
  #dpddelay=3
  left=1.1.1.1
  leftid=1.1.1.1
  right=10.0.4.3
  rightid=2.2.2.2
  rightnexthop=2.2.2.2
  rightsourceip=10.0.4.3
  leftsubnet=10.60.0.0/16
  rightsubnet=10.0.0.0/16
  auto=start

[root@vpn01 opc]# cat /etc/ipsec.d/U.secrets
1.1.1.1 2.2.2.2 : PSK "place_your_shared_key_here"

[root@vpn01 opc]# systemctl start ipsec
[root@vpn01 opc]# systemctl enable ipsec
[root@vpn01 opc]# ipsec verify
Verifying installed system and configuration files
...
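
To bring the tunnel up by hand and inspect the connection state (the 000-prefixed output below comes from the status command), something like:

[root@vpn01 opc]# ipsec auto --up V-Testing
[root@vpn01 opc]# ipsec status | grep V-Testing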

For reference, some initial pluto.log and status entries captured while getting the options matched to OCI, plus reference links:
https://libreswan.org/man/ipsec.conf.5.html
https://tools.ietf.org/html/rfc3526

initiating Quick Mode PSK+ENCRYPT+TUNNEL+PFS+UP+IKEV1_ALLOW+IKEV2_ALLOW+SAREF_TRACK+IKE_FRAG_ALLOW+ESN_NO {using isakmp#15 msgid:08137451 proposal=AES(12)_128-SHA1(2) pfsgroup=MODP1024}

000 "v6neighbor-hole-out":   policy: PFS+IKEV1_ALLOW+IKEV2_ALLOW+SAREF_TRACK+IKE_FRAG_ALLOW+ESN_NO+PASS+NEVER_NEGOTIATE;

000 "V-Testing":   policy: PSK+ENCRYPT+TUNNEL+PFS+IKEV1_ALLOW+ESN_NO;
000 "V-Testing":   conn_prio: 30,30; interface: ens3; metric: 0; mtu: unset; sa_prio:auto; sa_tfc:none;
000 "V-Testing":   nflog-group: unset; mark: unset; vti-iface:unset; vti-routing:no; vti-shared:no;
000 "V-Testing":   dpd: action:hold; delay:3; timeout:10; nat-t: encaps:auto; nat_keepalive:yes; ikev1_natt:drafts
000 "V-Testing":   newest ISAKMP SA: #0; newest IPsec SA: #0;
000 "V-Testing":   IKE algorithms wanted: AES_CBC(7)_256-SHA1(2)-MODP1024(2)
000 "V-Testing":   IKE algorithms found:  AES_CBC(7)_256-SHA1(2)-MODP1024(2)
000 "V-Testing":   ESP algorithms wanted: AES(12)_256-SHA1(2); pfsgroup=MODP1024(2)
000 "V-Testing":   ESP algorithms loaded: AES(12)_256-SHA1(2)

000 Total IPsec connections: loaded 1, active 1
000  
000 State Information: DDoS cookies not required, Accepting new IKE connections
000 IKE SAs: total(1), half-open(0), open(0), authenticated(1), anonymous(0)
000 IPsec SAs: total(1), authenticated(1), anonymous(0)

For reference, some pluto.log entries from debugging:

[root@vpn01 opc]# tail -f /var/log/pluto.log 
Nov  4 18:41:17: | setup callback for interface lo:500 fd 19
Nov  4 18:41:17: | setup callback for interface ens3:4500 fd 18
Nov  4 18:41:17: | setup callback for interface ens3:500 fd 17
Nov  4 18:41:17: loading secrets from "/etc/ipsec.secrets"
Nov  4 18:41:17: loading secrets from "/etc/ipsec.d/U.secrets"
Nov  4 18:41:17: "V-Testing": route-client output: /usr/libexec/ipsec/_updown.netkey: doroute "ip route replace 10.60.0.0/16 via 2.2.2.2 dev ens3  src 10.0.4.3" failed (RTNETLINK answers: Network is unreachable)
Nov  4 18:41:17: "V-Testing" #1: initiating Main Mode
Nov  4 18:41:18: assign_holdpass() delete_bare_shunt() failed
Nov  4 18:41:18: initiate_ondemand_body() failed to install negotiation_shunt,
Nov  4 18:41:18: initiate on demand from 10.0.4.3:8 to 10.60.1.2:0 proto=1 because: acquire

I am not sure whether this route was necessary, but I am showing it for reference; I am fairly sure it is not needed:

[root@vpn01 opc]# route add -net 10.60.0.0/16 gw 10.0.4.1
[root@vpn01 opc]# ip route
default via 10.0.4.1 dev ens3 
10.0.4.0/24 dev ens3 proto kernel scope link src 10.0.4.3 
10.60.0.0/16 via 10.0.4.1 dev ens3 
169.254.0.0/16 dev ens3 proto static scope link 
169.254.0.0/16 dev ens3 scope link metric 1002 

Some ping tests for reference showing passing traffic:

[root@vpn01 opc]# ifconfig ens3
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        inet 10.0.4.3  netmask 255.255.255.0  broadcast 10.0.4.255

[root@vpn01 opc]# ping 10.60.1.2
PING 10.60.1.2 (10.60.1.2) 56(84) bytes of data.
64 bytes from 10.60.1.2: icmp_seq=1 ttl=64 time=0.460 ms

[root@gw01 opc]# ifconfig ens3
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000

[root@gw01 opc]# ping 10.0.4.3
PING 10.0.4.3 (10.0.4.3) 56(84) bytes of data.
64 bytes from 10.0.4.3: icmp_seq=1 ttl=64 time=0.424 ms

After tuning security lists, route tables, DRGs, routes, etc., some ping tests for reference showing traffic passing between the private subnets behind the endpoints:

[root@client01 opc]# ifconfig ens3
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        inet 10.0.5.12  netmask 255.255.255.0  broadcast 10.0.5.255

[root@client01 opc]# ping 10.60.1.2
PING 10.60.1.2 (10.60.1.2) 56(84) bytes of data.
64 bytes from 10.60.1.2: icmp_seq=1 ttl=63 time=0.566 ms

[root@gw01 opc]# ifconfig ens3
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        inet 10.60.1.2  netmask 255.255.255.0  broadcast 10.60.1.255

[root@gw01 opc]# ping 10.0.5.12
PING 10.0.5.12 (10.0.5.12) 56(84) bytes of data.
64 bytes from 10.0.5.12: icmp_seq=1 ttl=63 time=0.638 ms

Compute Instances in OCI using Terraform

Begin Update 9/26/17.

It is possible to reference the subnet as follows:

subnet_id = "${oci_core_subnet.PrivSubnetAD1.id}"

My original problem, and the reason for the workaround, is that I am using modules. I would prefer modules so I can organize and subdivide the code better, but they caused the reference above not to work. Also, subdividing the work may cause more issues around losing access to variables/references.
End Update 9/26/17.
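
If you do stick with modules, one way around the lost reference should be to export the subnet id from the networking module and consume it as a module output. A sketch with hypothetical module and file names:

$ cat networking/outputs.tf
output "priv_subnet_ad1_id" {
  value = "${oci_core_subnet.PrivSubnetAD1.id}"
}

Then in the compute configuration:

subnet_id = "${module.networking.priv_subnet_ad1_id}"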

Most likely there is a better way to do this, but since I spent some time on it I am jotting down my notes. When creating compute instances with Terraform in Oracle Cloud Infrastructure (formerly Oracle Bare Metal Cloud Services) you have to specify the subnet_id. The id, or OCID as it is called in OCI, is a long unique string.

So if you are looking at automating the Terraform build, you may struggle with not knowing the subnet_id when creating a compute instance. As I said, there may be better ways to do this, and maybe the AWS provider for Terraform handles this already. I did this with the OCI provider and came up with the script below, using some custom API calls to generate Terraform variables for the subnets. I am showing just the bash script to give an idea of the flow and how it glues together; the Terraform source and API calls are not shown here.

#!/bin/bash

build_root="/home/rrosso/.terraform.d/MY-PROTO"
api_snippets_location="/home/rrosso/oraclebmc"
terraform_bin="/home/rrosso/.terraform.d/terraform"

function pause(){
   read -p "$*"
}

## Check if environment variables are set. Make very sure correct tenancy, compartment etc
## Or if hard coding these look in main.tf files for correct tenancy, compartment etc
env | grep TF_

pause "Press [Enter] key if Variables look ok and if ready to proceed with networking build"

cd "$build_root/networking"
$terraform_bin apply

pause "Press [Enter] key if networking build went ok and ready to generate a list of subnet ocid's"

cd "$api_snippets_location"
python get_subnet_ocids.py -t ocid1.tenancy.oc1..<cut_long_number_here> -c MYPROTO -v DEV >> $build_root/compute/webservers/variables.tf 
python get_subnet_ocids.py -t ocid1.tenancy.oc1..<cut_long_number_here> -c MYPROTO -v DEV >> $build_root/compute/bastions/variables.tf 

pause "Press [Enter] key if variables.tf looks ok with new subnet ocid's and ready to proceed building compute instances"

cd "$build_root/compute"
$terraform_bin apply
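
For context, the idea is that get_subnet_ocids.py appends a Terraform variable per subnet OCID to variables.tf. The generated format is not shown in my notes, but it would look something like this (hypothetical variable name, OCID cut):

variable "PrivSubnetAD1_id" {
  default = "ocid1.subnet.oc1.phx.<cut_long_number_here>"
}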