Author Archive

Jan 01

Gnome Desktop Shortcut Exec

Executing commands or scripts from a GNOME desktop shortcut

I have a few scripts that I only run periodically, for example a backup to a USB drive that is not always plugged in, or to a computer that is not always online. For convenience I made shortcuts for them on my GNOME desktop. I can easily run these in a terminal of course, but sometimes I just want to quickly click an icon and be done.

There are a few idiosyncrasies around executing commands this way. In general you may have already run into issues with passing variables to processes and redirecting output; the Exec key inside a desktop shortcut adds a few more. Read the Desktop Entry Specification for the details.

For my purposes here are example entries from .desktop files:

Rsync to USB

Exec=gnome-terminal --profile=job-output -e 'sudo /home/rrosso/scripts/rsync-TANKs-2TBUSB.sh'

Update IP address in a remote firewall

Exec=gnome-terminal --profile=job-output -- /bin/sh -c 'cd /TANK/DATA/MySrc ; python3 ip-add-iqonda-aws.py ; sleep 60'

Executing using bash with date in log file:

Exec=bash -c "sudo /root/scripts/zfs-replication.sh -w 1 -t 192.168.1.79 | tee /TANK/backups/logs/`/bin/date +%%Y-%%m-%%d`-desktop-zfs-replicate-192.168.1.79.log"

I also prefer using gnome-terminal so I can format the output on the screen better:

Exec=gnome-terminal --profile=job-output -- bash -c "sudo /root/scripts/zfs-replication.sh -w 1 -t 192.168.1.79 | tee /TANK/backups/logs/`/bin/date +%%Y-%%m-%%d`-desktop-zfs-replicate-192.168.1.79.log ; sleep 60"
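
For completeness, a full .desktop file wrapping one of the commands above could look like this minimal sketch (the Name and Icon are just examples; such a file would typically go in ~/.local/share/applications/ or on the desktop with the executable bit set and marked as trusted):

[Desktop Entry]
Type=Application
Name=ZFS Replicate to 192.168.1.79
Comment=Run the ZFS replication script in a gnome-terminal window
Icon=utilities-terminal
Terminal=false
Exec=gnome-terminal --profile=job-output -- bash -c "sudo /root/scripts/zfs-replication.sh -w 1 -t 192.168.1.79 ; sleep 60"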


Dec 22

ZFS Send To Encrypted Volume

Replication from unencrypted to encrypted set

This is a POC of ZFS replication from one server (unencrypted zvols) to another server (encrypted zvols), using an old laptop as the target with the encrypted zvols.

On the target I first seeded the data by replicating some existing large datasets, left over from an earlier test, into an encrypted zpool.

WARNING:

  • saving the encryption key on the file system is not safe
  • losing your encryption key means losing your data permanently

create encrypted zvol on target

# zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt TANK/ENCRYPTED
Enter passphrase: 
Re-enter passphrase: 

Seed one snapshot source DATA zvol as a test

using 4.57G only

# zfs send -v TANK/DATA@2020-12-19_06.45.01--2w | zfs recv -x encryption TANK/ENCRYPTED/DATA
full send of TANK/DATA@2020-12-19_06.45.01--2w  estimated size is 4.52G
total estimated size is 4.52G
TIME        SENT   SNAPSHOT     TANK/DATA@2020-12-19_06.45.01--2w
08:39:06   34.4M   TANK/DATA@2020-12-19_06.45.01--2w
08:39:07    115M   TANK/DATA@2020-12-19_06.45.01--2w
08:39:08    279M   TANK/DATA@2020-12-19_06.45.01--2w
...
08:40:49   4.52G   TANK/DATA@2020-12-19_06.45.01--2w
08:40:50   4.54G   TANK/DATA@2020-12-19_06.45.01--2w

# zfs list TANK/ENCRYPTED/DATA
NAME                  USED  AVAIL     REFER  MOUNTPOINT
TANK/ENCRYPTED/DATA  4.59G  1017G     4.57G     /TANK/ENCRYPTED/DATA

# zfs list -t snapshot TANK/ENCRYPTED/DATA
NAME                                          USED  AVAIL     REFER     MOUNTPOINT
TANK/ENCRYPTED/DATA@2020-12-19_06.45.01--2w  17.4M      -     4.57G  -

Seed all snapshots source DATA zvol

ends up using 22G

# zfs destroy TANK/ENCRYPTED/DATA
cannot destroy 'TANK/ENCRYPTED/DATA': filesystem has children
use '-r' to destroy the following datasets:
TANK/ENCRYPTED/DATA@2020-12-19_06.45.01--2w

# zfs destroy -r TANK/ENCRYPTED/DATA

# zfs send -R TANK/DATA@2020-12-19_06.45.01--2w | zfs recv -x encryption TANK/ENCRYPTED/DATA

# zfs list TANK/ENCRYPTED/DATA
NAME                  USED  AVAIL     REFER  MOUNTPOINT
TANK/ENCRYPTED/DATA  22.9G   999G     4.57G  /TANK/ENCRYPTED/DATA

# zfs list -t snapshot TANK/ENCRYPTED/DATA | tail -2
TANK/ENCRYPTED/DATA@2020-12-17_06.45.01--2w  11.2M      -     4.57G  -
TANK/ENCRYPTED/DATA@2020-12-19_06.45.01--2w  11.3M      -     4.57G  -

Create ARCHIVE zvol

# zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt TANK/ENCRYPTED/ARCHIVE
Enter passphrase: 
Re-enter passphrase: 

Seed ARCHIVE/MyDocuments

# zfs send -R TANK/ARCHIVE/MyDocuments@2020-12-18_20.15.01--2w | zfs recv -x encryption TANK/ENCRYPTED/ARCHIVE/MyDocuments

Test sending src zvol from source to target (via ssh)

NOTE: Loading the key manually. Will try automatically later.

on target:
# zfs destroy TANK/ENCRYPTED/ARCHIVE/src@2020-12-19_20.15.01--2w

on source:
# zfs send -i TANK/ARCHIVE/src@2020-12-18_20.15.01--2w TANK/ARCHIVE/src@2020-12-19_20.15.01--2w | ssh rrosso@192.168.1.79 sudo zfs recv -x encryption TANK/ENCRYPTED/ARCHIVE/src
cannot receive incremental stream: inherited key must be loaded

on target:
# zfs load-key -r TANK/ENCRYPTED
Enter passphrase for 'TANK/ENCRYPTED': 
Enter passphrase for 'TANK/ENCRYPTED/ARCHIVE': 
2 / 2 key(s) successfully loaded

# zfs rollback TANK/ENCRYPTED/ARCHIVE/src@2020-12-18_20.15.01--2w

on source:
# zfs send -i TANK/ARCHIVE/src@2020-12-18_20.15.01--2w TANK/ARCHIVE/src@2020-12-19_20.15.01--2w | ssh rrosso@192.168.1.79 sudo zfs recv -x encryption TANK/ENCRYPTED/ARCHIVE/src

on target:
# zfs list -t snapshot TANK/ENCRYPTED/ARCHIVE/src | tail -2
TANK/ENCRYPTED/ARCHIVE/src@2020-12-18_20.15.01--2w  1.87M      -      238M  -
TANK/ENCRYPTED/ARCHIVE/src@2020-12-19_20.15.01--2w     0B      -      238M  -

Test using key from a file

NOTE: Do this at your own risk. Key loading should probably be done from a remote KMS or something safer.

on target:
# ls -l .zfs-key 
-rw-r--r-- 1 root root 9 Dec 21 12:49 .zfs-key

on source:
# ssh rrosso@192.168.1.79 sudo zfs load-key -L file:///root/.zfs-key TANK/ENCRYPTED
# ssh rrosso@192.168.1.79 sudo zfs load-key -L file:///root/.zfs-key TANK/ENCRYPTED/ARCHIVE

on target:
# zfs get all TANK/ENCRYPTED | egrep "encryption|keylocation|keyformat|encryptionroot|keystatus"
TANK/ENCRYPTED  encryption            aes-256-gcm            -
TANK/ENCRYPTED  keylocation           prompt                 local
TANK/ENCRYPTED  keyformat             passphrase             -
TANK/ENCRYPTED  encryptionroot        TANK/ENCRYPTED         -
TANK/ENCRYPTED  keystatus             available              -

# zfs get all TANK/ENCRYPTED/ARCHIVE | egrep "encryption|keylocation|keyformat|encryptionroot|keystatus"
TANK/ENCRYPTED/ARCHIVE  encryption            aes-256-gcm              -
TANK/ENCRYPTED/ARCHIVE  keylocation           prompt                   local
TANK/ENCRYPTED/ARCHIVE  keyformat             passphrase               -
TANK/ENCRYPTED/ARCHIVE  encryptionroot        TANK/ENCRYPTED/ARCHIVE   -
TANK/ENCRYPTED/ARCHIVE  keystatus             available                -
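
Since keylocation is still prompt on both encryption roots, another option (same caveat as above about keeping the key on disk) would be to point keylocation at the key file so a plain zfs load-key works without passing -L every time. A sketch, assuming the key stays in /root/.zfs-key:

# zfs set keylocation=file:///root/.zfs-key TANK/ENCRYPTED
# zfs set keylocation=file:///root/.zfs-key TANK/ENCRYPTED/ARCHIVE
# zfs load-key -r TANK/ENCRYPTED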

** now test with my replication (send/recv) script


Nov 24

Linux WakeOnLAN Issue

Wake On LAN Issue

I had a strange issue where my ZFS and restic backups to an Ubuntu backup server stopped working. The server had an interesting problem that was totally unrelated to WOL: it would boot on manual power-on but start shutting down a couple of minutes later. That was fixed after I removed microk8s.

The WOL issue turned out to have nothing to do with the backup server; instead, the source server (desktop01) was not sending the magic packet at all. I figured it out by sniffing the backup server's ingress traffic with tcpdump: I could see the traffic come in when sending WOL from my ASUS router.

I suspect that on the desktop01 server, which has multiple virtual interfaces, the wakeonlan utility gets confused about which interface to send on. I ended up using etherwake instead of wakeonlan on the source server, since etherwake can specify the interface (-i interface) to send out on.
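
For what it's worth, wakeonlan has no interface option, but it can at least be pointed at the subnet broadcast address, which might have worked too (untested here; broadcast address assumed for this 192.168.1.0/24 network):

# wakeonlan -i 192.168.1.255 f4:b5:20:07:60:e0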

tcpdump when sending WOL magic packet from ASUS router

# tcpdump -i enp1s0 'ether proto 0x0842 or udp port 9'
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp1s0, link-type EN10MB (Ethernet), capture size 262144 bytes
15:22:00.549500 08:62:66:96:e8:e0 (oui Unknown) > f4:b5:20:07:60:e0 (oui Unknown), ethertype Unknown (0x0842), length 116: 
    0x0000:  ffff ffff ffff f4b5 2007 60e0 f4b5 2007  ..........`.....
    0x0010:  60e0 f4b5 2007 60e0 f4b5 2007 60e0 f4b5  ..........`...
    0x0020:  2007 60e0 f4b5 2007 60e0 f4b5 2007 60e0  ............`.
    0x0030:  f4b5 2007 60e0 f4b5 2007 60e0 f4b5 2007  ..............
    0x0040:  60e0 f4b5 2007 60e0 f4b5 2007 60e0 f4b5  ..........`...
    0x0050:  2007 60e0 f4b5 2007 60e0 f4b5 2007 60e0  ............`.
    0x0060:  f4b5 2007 60e0                           ....`.

use etherwake on desktop01

# apt install etherwake

# etherwake -i eno1 f4:b5:20:07:60:e0

tcpdump when using etherwake from desktop01

# tcpdump -i enp1s0 'ether proto 0x0842 or udp port 9'
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp1s0, link-type EN10MB (Ethernet), capture size 262144 bytes
...
15:45:29.422859 30:5a:3a:57:63:83 (oui Unknown) > f4:b5:20:07:60:e0 (oui Unknown), ethertype Unknown (0x0842), length 116: 
    0x0000:  ffff ffff ffff f4b5 2007 60e0 f4b5 2007  ..........`.....
    0x0010:  60e0 f4b5 2007 60e0 f4b5 2007 60e0 f4b5  ..........`...
    0x0020:  2007 60e0 f4b5 2007 60e0 f4b5 2007 60e0  ............`.
    0x0030:  f4b5 2007 60e0 f4b5 2007 60e0 f4b5 2007  ..............
    0x0040:  60e0 f4b5 2007 60e0 f4b5 2007 60e0 f4b5  ..........`...
    0x0050:  2007 60e0 f4b5 2007 60e0 f4b5 2007 60e0  ............`.
    0x0060:  f4b5 2007 60e0                           ....`.

NOTE:

  • shutdown the backup server using shutdown now
  • sent wake up from desktop01 and it worked


Oct 29

Linux Broadcom Wireless Issue 5.x kernel

Broadcom Wireless Issue

A recent update caused wireless to stop working. It seems PopOS (the 20.04 flavor, and likely other distros) does not have a new enough bcmwl-kernel-source for the 5.6 or 5.8 kernels.

LINKS:

NOTE: I tried to install a different kernel to see if that would work; it at least showed me the issue.

root cause

# apt install linux-image-5.8.0-23-generic
...
make -j4 KERNELRELEASE=5.8.0-23-generic -C /lib/modules/5.8.0-23-generic/build M=/var/lib/dkms/bcmwl/6.30.223.271+bdcom/build....(bad exit status: 2)
ERROR (dkms apport): kernel package linux-headers-5.8.0-23-generic is not supported
Error! Bad return status for module build on kernel: 5.8.0-23-generic (x86_64)
Consult /var/lib/dkms/bcmwl/6.30.223.271+bdcom/build/make.log for more information.

hold a kernel that works just for safety

# dpkg -l | grep linux-image-
ii  linux-image-5.4.0-7642-generic                   5.4.0-7642.46~1598628707~20.04~040157c               amd64        Linux kernel image for version 5.4.0 on 64 bit x86 SMP
ii  linux-image-5.8.0-23-generic                     5.8.0-23.24~20.04.1                                  amd64        Signed kernel image generic
ii  linux-image-5.8.0-7625-generic                   5.8.0-7625.26~1603389471~20.04~f6b125f               amd64        Linux kernel image for version 5.8.0 on 64 bit x86 SMP
ii  linux-image-generic                              5.8.0.7625.26~1603389471~20.04~f6b125f               amd64        Generic Linux kernel image

# echo linux-image-5.4.0-7642-generic hold | dpkg --set-selections

# dpkg -l | grep linux-image-
hi  linux-image-5.4.0-7642-generic                   5.4.0-7642.46~1598628707~20.04~040157c               amd64        Linux kernel image for version 5.4.0 on 64 bit x86 SMP
ii  linux-image-5.8.0-23-generic                     5.8.0-23.24~20.04.1                                  amd64        Signed kernel image generic
ii  linux-image-5.8.0-7625-generic                   5.8.0-7625.26~1603389471~20.04~f6b125f               amd64        Linux kernel image for version 5.8.0 on 64 bit x86 SMP
ii  linux-image-generic                              5.8.0.7625.26~1603389471~20.04~f6b125f               amd64        Generic Linux kernel image

NOTE: set grub with longer timeout, show the boot menu and save last booted item

patches

Looking at the patches, it appears we may need something like a 0028 patch for kernels newer than 5.1.

# ls /usr/src/bcmwl-6.30.223.271+bdcom/patches/
0001-MODULE_LICENSE.patch                  0008-add-support-for-linux-3.9.0.patch                           0015-add-support-for-Linux-3.18.patch                       0022-add-support-for-Linux-4.8.patch
0002-Makefile.patch                        0009-add-support-for-linux-3.10.0.patch                          0016-repair-make-warnings.patch                             0023-add-support-for-Linux-4.11.patch
0003-Make-up-for-missing-init_MUTEX.patch  0010-change-the-network-interface-name-from-eth-to-wlan.patch    0017-add-support-for-Linux-4.0.patch                        0024-add-support-for-Linux-4.12.patch
0004-Add-support-for-Linux-3.2.patch       0011-do-not-define-__devinit-as-__init-in-linux-3.8-as-__.patch  0018-cfg80211_disconnected.patch                            0025-add-support-for-Linux-4.14.patch
0005-add-support-for-linux-3.4.0.patch     0012-add-support-for-Linux-3.15.patch                            0019-broadcom-sta-6.30.223.248-3.18-null-pointer-fix.patch  0026-add-support-for-Linux-4.15.patch
0006-add-support-for-linux-3.8.0.patch     0013-gcc.patch                                                   0020-add-support-for-linux-4.3.patch                        0027-add-support-for-linux-5.1.patch
0007-nl80211-move-scan-API-to-wdev.patch   0014-add-support-for-Linux-3.17.patch                            0021-add-support-for-Linux-4.7.patch

Install Ubuntu 20.10 (groovy) package

Looking at the file list in the newer Ubuntu 20.10 source package, I see at least a 5.6 patch, although I need 5.8.

# wget http://mirrors.kernel.org/ubuntu/pool/restricted/b/bcmwl/bcmwl-kernel-source_6.30.223.271+bdcom-0ubuntu7_amd64.deb
...
2020-10-29 08:14:10 (656 KB/s) - ‘bcmwl-kernel-source_6.30.223.271+bdcom-0ubuntu7_amd64.deb’ saved [1545816/1545816]

# dpkg -i bcmwl-kernel-source_6.30.223.271+bdcom-0ubuntu7_amd64.deb 
(Reading database ... 283701 files and directories currently installed.)
Preparing to unpack bcmwl-kernel-source_6.30.223.271+bdcom-0ubuntu7_amd64.deb ...
Removing all DKMS Modules
Done.
Unpacking bcmwl-kernel-source (6.30.223.271+bdcom-0ubuntu7) over (6.30.223.271+bdcom-0ubuntu5) ...
Setting up bcmwl-kernel-source (6.30.223.271+bdcom-0ubuntu7) ...
Loading new bcmwl-6.30.223.271+bdcom DKMS files...
Building for 5.4.0-7642-generic 5.8.0-7625-generic
Building for architecture x86_64
Building initial module for 5.4.0-7642-generic
Done.

wl.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/5.4.0-7642-generic/updates/

depmod...

DKMS: install completed.
Building initial module for 5.8.0-7625-generic
Done.

wl.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/5.8.0-7625-generic/updates/

depmod........

DKMS: install completed.
update-initramfs: deferring update (trigger activated)
Processing triggers for initramfs-tools (0.136ubuntu6.3) ...
update-initramfs: Generating /boot/initrd.img-5.8.0-7625-generic
cryptsetup: WARNING: Resume target cryptswap uses a key file

looks like the wl.ko rebuild went ok

# ls /lib/modules/5.8.0-7625-generic/updates/
dkms  wl.ko

# find /lib/modules/5.4.0-7642-generic/ -name wl.ko
/lib/modules/5.4.0-7642-generic/updates/wl.ko

# find /lib/modules/5.8.0- -name wl.ko
5.8.0-23-generic/   5.8.0-7625-generic/ 

# find /lib/modules/5.8.0-23-generic/ -name wl.ko

# find /lib/modules/5.8.0-7625-generic/ -name wl.ko
/lib/modules/5.8.0-7625-generic/updates/wl.ko

cleanup the 5.8.0-23 kernel I tried

# apt purge linux-image-5.8.0-23-generic
...
rmdir: failed to remove '/lib/modules/5.8.0-23-generic': Directory not empty

NOTE: PopOS may not be cleaning up /lib/modules because of the additional module. 
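
Before removing anything by hand, a quick check like this sketch could confirm which /lib/modules directories no longer belong to an installed kernel package (assumes Debian/Ubuntu package naming):

# for d in /lib/modules/*/; do k=$(basename "$d"); dpkg -l "linux-image-$k" 2>/dev/null | grep -q '^ii' || echo "orphaned: $d"; done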

# rm -rf /lib/modules/5.8.0-23-generic

# apt purge linux-headers-5.8.0-23-generic
# apt purge linux-modules-5.8.0-23-generic

# ls /boot
config-5.4.0-7642-generic  grub        initrd.img-5.4.0-7642-generic  initrd.img.old                 System.map-5.8.0-7625-generic  vmlinuz-5.4.0-7642-generic  vmlinuz.old
config-5.8.0-7625-generic  initrd.img  initrd.img-5.8.0-7625-generic  System.map-5.4.0-7642-generic  vmlinuz                        vmlinuz-5.8.0-7625-generic

check

Rebooted with 5.8 kernel and it works

# dkms status
bcmwl, 6.30.223.271+bdcom, 5.4.0-7642-generic, x86_64: installed
bcmwl, 6.30.223.271+bdcom, 5.8.0-7625-generic, x86_64: installed
nvidia-340, 340.108, 5.4.0-7642-generic, x86_64: installed
system76, 1.0.9~1597073326~20.04~5b01933, 5.4.0-7642-generic, x86_64: installed
system76, 1.0.9~1597073326~20.04~5b01933, 5.8.0-7625-generic, x86_64: installed


Oct 28

Wireguard VPN between Azure and OCI hosts

Wireguard test between Azure and Oracle OCI hosts

REF: https://www.wireguard.com/

Azure VM setup

Ubuntu 18.04.5 LTS

root@wireguard-az:~# dig +short myip.opendns.com @resolver1.opendns.com
*IPAddress*
root@wireguard-az:~# apt install wireguard

root@wireguard-az:~# wg version
wireguard-tools v1.0.20200513 - https://git.zx2c4.com/wireguard-tools/

root@wireguard-az:~# umask 077
root@wireguard-az:~# wg genkey > privatekey
root@wireguard-az:~# wg pubkey < privatekey > publickey
root@wireguard-az:~# ip link add wg0 type wireguard
root@wireguard-az:~# ip addr add 10.0.0.1/24 dev wg0
root@wireguard-az:~# wg set wg0 private-key ./privatekey
root@wireguard-az:~# ip link set wg0 up

root@wireguard-az:~# ip addr
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0d:3a:5d:89:a7 brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.4/24 brd 10.1.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20d:3aff:fe5d:89a7/64 scope link 
       valid_lft forever preferred_lft forever
3: wg0:  mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none 
    inet 10.0.0.1/24 scope global wg0
       valid_lft forever preferred_lft forever

root@wireguard-az:~# wg show
interface: wg0
  public key: *redacted*
  private key: (hidden)
  listening port: 43971

root@wireguard-az:~# wg set wg0 peer *redacted* allowed-ips 10.0.0.2/32 endpoint *IPAddress*:40181

root@wireguard-az:~# wg show
interface: wg0
  public key: *redacted*
  private key: (hidden)
  listening port: 43971

peer: *redacted*
  endpoint: *IPAddress*:40181
  allowed ips: 10.0.0.2/32
  transfer: 0 B received, 3.32 KiB sent

NOTE: iptables on this server needs no adjustment; it is already open

root@wireguard-az:~# ping 10.0.0.2 -c 1
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=10 ttl=64 time=31.7 ms

NOTE: open an Azure Security Rule for the port we are running on:
priority 310 | name wg | port 43971 | protocol Any | source IPAddress/32 | destination Any

Oracle OCI

Ubuntu 20.04.1 LTS

root@usph-vmli-do01:~# dig +short myip.opendns.com @resolver1.opendns.com
*IPAddress*
root@usph-vmli-do01:~# apt install wireguard

root@usph-vmli-do01:~# wg version
wireguard-tools v1.0.20200513 - https://git.zx2c4.com/wireguard-tools/
  • open a Security Rule for the port we are running on:
    Stateless No | Source IPAddress/32 | IP Protocol TCP | Source Port Range All | Destination Port Range 40181 | Allows: TCP traffic for ports: 40181
root@usph-vmli-do01:~# umask 077
root@usph-vmli-do01:~# wg genkey > privatekey
root@usph-vmli-do01:~# wg pubkey < privatekey > publickey
root@usph-vmli-do01:~# ip link add wg0 type wireguard
root@usph-vmli-do01:~# ip addr add 10.0.0.2/24 dev wg0
root@usph-vmli-do01:~# wg set wg0 private-key ./privatekey
root@usph-vmli-do01:~# ip link set wg0 up

root@usph-vmli-do01:~# ip addr
2: ens3:  mtu 9000 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:00:17:02:8f:09 brd ff:ff:ff:ff:ff:ff
    inet 10.3.1.8/24 brd 10.3.1.255 scope global ens3
       valid_lft forever preferred_lft forever
    inet6 fe80::200:17ff:fe02:8f09/64 scope link 
       valid_lft forever preferred_lft forever
...
20: wg0:  mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none 
    inet 10.0.0.2/24 scope global wg0
       valid_lft forever preferred_lft forever

root@usph-vmli-do01:~# wg show
interface: wg0
  public key: *redacted*
  private key: (hidden)
  listening port: 40181

root@usph-vmli-do01:~# wg set wg0 peer *redacted* allowed-ips 10.0.0.1/32 endpoint *IPAddress*:43971

root@usph-vmli-do01:~# wg show
interface: wg0
  public key: *redacted*
  private key: (hidden)
  listening port: 40181

peer: *redacted*
  endpoint: *IPAddress*:43971
  allowed ips: 10.0.0.1/32

NOTE: iptables needs adjustment; the port is not open

root@usph-vmli-do01:~# iptables -L --line-numbers
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination         
1    ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
2    ACCEPT     icmp --  anywhere             anywhere            
3    ACCEPT     all  --  anywhere             anywhere            
4    ACCEPT     udp  --  anywhere             anywhere             udp spt:ntp
5    ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:http-alt state NEW,ESTABLISHED
6    ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:https state NEW,ESTABLISHED
7    ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:http state NEW,ESTABLISHED
8    ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:ssh
9    REJECT     all  --  anywhere             anywhere             reject-with icmp-host-prohibited
...

root@usph-vmli-do01:~# iptables -I INPUT 5 -p tcp -m tcp --dport 40181 -m state --state NEW,ESTABLISHED -j ACCEPT

root@usph-vmli-do01:~# iptables -L --line-numbers
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination         
1    ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
2    ACCEPT     icmp --  anywhere             anywhere            
3    ACCEPT     all  --  anywhere             anywhere            
4    ACCEPT     udp  --  anywhere             anywhere             udp spt:ntp
5    ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:40181 state NEW,ESTABLISHED
6    ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:http-alt state NEW,ESTABLISHED
7    ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:https state NEW,ESTABLISHED
8    ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:http state NEW,ESTABLISHED
9    ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:ssh
10   REJECT     all  --  anywhere             anywhere             reject-with icmp-host-prohibited

root@usph-vmli-do01:~# ping 10.0.0.1 -c 1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=31.9 ms

ubuntu@usph-vmli-do01:~/.ssh$ ssh ubuntu@10.0.0.1
...
Welcome to Ubuntu 18.04.5 LTS (GNU/Linux 5.4.0-1031-azure x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Wed Oct 28 17:35:39 UTC 2020
...

Permanent steps

For routing/NAT of hosts behind these peers, creating /etc/wireguard/ config files, starting via systemd, and so on, read more here: https://linuxize.com/post/how-to-set-up-wireguard-vpn-on-ubuntu-20-04/
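
As a starting point for the permanent setup, a minimal /etc/wireguard/wg0.conf for the OCI side might look like the sketch below (private key, peer public key and endpoint are placeholders; PersistentKeepalive is optional but useful behind NAT). The manually created wg0 should be removed first (ip link del wg0) so wg-quick can manage the interface via systemd.

# cat /etc/wireguard/wg0.conf
[Interface]
Address = 10.0.0.2/24
ListenPort = 40181
PrivateKey = <contents of ./privatekey>

[Peer]
PublicKey = <Azure peer public key>
Endpoint = <Azure public IP>:43971
AllowedIPs = 10.0.0.1/32
PersistentKeepalive = 25

# systemctl enable --now wg-quick@wg0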


Oct 28

Logger Socket Issue

Logger Socket Issue

I use logger to inject test messages for an rsyslog relay setup. After I split my configuration out of rsyslog.conf into a file in /etc/rsyslog.d, I started seeing this logger issue writing to /dev/log. This was not happening when everything was contained in /etc/rsyslog.conf.

error

[root@ip-10-200-2-41 ~]# logger foobar
logger: socket /dev/log: No such file or directory

links

fix

[root@ip-10-200-2-41 ~]# ls -l /dev/log
ls: cannot access /dev/log: No such file or directory

[root@ip-10-200-2-41 ~]# systemctl restart systemd-journald.socket

[root@ip-10-200-2-41 ~]# ls -l /dev/log
srw-rw-rw- 1 root root 0 Oct 28 08:07 /dev/log

[root@ip-10-200-2-41 ~]# logger foo
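
To confirm the relay is actually picking messages up again, a quick test like this should do it (assuming the default /var/log/messages destination on this box):

[root@ip-10-200-2-41 ~]# logger -t relaytest "rsyslog relay check"
[root@ip-10-200-2-41 ~]# grep relaytest /var/log/messages | tail -1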

NOTE: Not sure if the above fixes it permanently. It could have been a one-off race condition that is now resolved, but reading the reference material it is unclear what exactly the root cause is.


Jun 23

AWS VPN to Libreswan

AWS VPN to Azure VM with Libreswan

NOTE: As of this article, the AWS Site-to-Site VPN gateway can generate an Openswan configuration but not a Libreswan one. This is a test of using Libreswan.

I am using an Azure virtual machine on the left and the AWS VPN gateway on the right, but of course you could also use the Azure VPN service.

For reference, see my OCI to Libreswan article from a while back.

Setup right side in AWS Console

  • Create Customer Gateway > azure-gw01, using Static Routing, and specify the Azure VM IP address
  • Create Virtual Private Gateway az-vpg01 with the Amazon default ASN
  • Attach the VPG to the VPC
    For the Site-to-Site VPN:
  • Create VPN Connection > iqonda-aws-azure; pick the VPG and CG, Routing Static; leave all defaults for now and no Static IP Prefixes for the moment
  • Record the Tunnel1 IP Address

Setup left side in Azure

Create a Centos VM in Azure

  • Virtual machines > Add
    | test01 | CentOS-based 8.1 | Standard_B1ls 1 vcpu, 0.5 GiB memory ($3.80/month) | AzureUser
    * I used a password for AzureUser and sorted out SSH keys after logging in.

  • I used | Standard HDD | myVnet | mySubnet(10.0.0.0/24)

  • record public IP

  • Network: add inbound rules for IPsec. I allowed all traffic from the AWS endpoint IP address, but you will want to be more specific about the IPsec ports.

software

# cat /etc/centos-release
CentOS Linux release 8.1.1911 (Core) 

# yum install libreswan

# echo "net.ipv4.ip_forward=1" > /usr/lib/sysctl.d/60-ipsec.conf
# sysctl -p /usr/lib/sysctl.d/60-ipsec.conf
net.ipv4.ip_forward = 1

# for s in /proc/sys/net/ipv4/conf/*; do echo 0 > $s/send_redirects; echo 0 > $s/accept_redirects; done

# echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter

# ipsec verify
Verifying installed system and configuration files

Version check and ipsec on-path                     [OK]
Libreswan 3.29 (netkey) on 4.18.0-147.8.1.el8_1.x86_64
Checking for IPsec support in kernel                [OK]
 NETKEY: Testing XFRM related proc values
         ICMP default/send_redirects                [OK]
         ICMP default/accept_redirects              [OK]
         XFRM larval drop                           [OK]
Pluto ipsec.conf syntax                             [OK]
Checking rp_filter                                  [OK]
Checking that pluto is running                      [OK]
 Pluto listening for IKE on udp 500                 [OK]
 Pluto listening for IKE/NAT-T on udp 4500          [OK]
 Pluto ipsec.secret syntax                          [OK]
Checking 'ip' command                               [OK]
Checking 'iptables' command                         [OK]
Checking 'prelink' command does not interfere with FIPS [OK]
Checking for obsolete ipsec.conf options            [OK]

NOTE: skipping firewalld and its rules; this instance did not have firewalld enabled and iptables -L shows it is open.

Download openswan config in AWS console to see the PSK

I had issues bringing the tunnel up, but after a reboot it works.
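
If you hit the same thing, bringing a tunnel up manually for debugging looks roughly like this (same connection names as in the config below; these are the same ipsec auto commands used in the switchover script further down):

[root@test01 ~]# ipsec auto --add Tunnel1
[root@test01 ~]# ipsec auto --up Tunnel1
[root@test01 ~]# ipsec status | grep Tunnel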

post tunnel UP

  • add static route(s) to VPN
  • check route table for subnet
  • enable subnet association to route table
  • enable route propagation

ping test both ways works...

source

[root@test01 ipsec.d]# cat aws-az-vpn.conf 
conn Tunnel1
        authby=secret
        auto=start
        encapsulation=yes
        left=%defaultroute
        leftid=[Azure VM IP]
        right=[AWS VPN Tunnel 1 IP]
        type=tunnel
        phase2alg=aes128-sha1;modp1024
        ike=aes128-sha1;modp1024
        leftsubnet=10.0.1.0/16
        rightsubnet=172.31.0.0/16

conn Tunnel2
        authby=secret
        auto=add
        encapsulation=yes
        left=%defaultroute
        leftid=[Azure VM IP]
        right=[AWS VPN Tunnel 2 IP]
        type=tunnel
        phase2alg=aes128-sha1;modp1024
        ike=aes128-sha1;modp1024
        leftsubnet=10.0.1.0/16
        rightsubnet=172.31.0.0/16

[root@test01 ipsec.d]# cat aws-az-vpn.secrets 
52.188.118.56 18.214.218.99: PSK "Qgn...............mn"
52.188.118.56 52.3.140.122: PSK "cWu..................87"

Tunnel switch

Libreswan can't manage two tunnels to the same right side without something like Quagga, but I did at least write a very quick and dirty switchover script. It works, with very few pings missed.

[root@test01 ~]# cat switch-aws-tunnel.sh 
#!/bin/bash
echo "Current Tunnel Status"
ipsec status | grep routed

active=$(ipsec status | grep erouted | cut -d \" -f2)
inactive=$(ipsec status | grep unrouted | cut -d \" -f2)

echo "Showing active and inactive in tunnels"
echo "active: $active"
echo "inactive: $inactive"

echo "down tunnels...."
ipsec auto --down $active
ipsec auto --down $inactive

echo "adding tunnels...."
ipsec auto --add Tunnel1
ipsec auto --add Tunnel2

echo "up the tunnel that was inactive before...."
ipsec auto --up $inactive

echo "Current Tunnel Status"
ipsec status | grep routed


May 27

Kubernetes Development with MicroK8s

Using Ubuntu's MicroK8s Kubernetes environment to test an Nginx container with a NodePort, and also an Ingress, so we can access it from another machine.

install

$ sudo snap install microk8s --classic
microk8s v1.18.2 from Canonical✓ installed

$ sudo usermod -a -G microk8s rrosso

$ microk8s.kubectl get all --all-namespaces
NAMESPACE   NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.152.183.1           443/TCP   2m29s

$ microk8s.kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
server1   Ready       3m    v1.18.2-41+b5cdb79a4060a3

$ microk8s.enable dns dashboard
...

$ watch microk8s.kubectl get all --all-namespaces

NOTE: alias the command

$ sudo snap alias microk8s.kubectl kubectl
Added:
  - microk8s.kubectl as kubectl

nginx first attempt

$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

$ kubectl get deployments
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           9s

$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE

nginx-f89759699-jnlng   1/1     Running   0          15s

$ kubectl get all --all-namespaces

NAMESPACE     NAME                                                  READY   STATUS    RESTARTS   AGE
default       pod/nginx-f89759699-jnlng                             1/1     Running   0          31s
...
NAMESPACE     NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes                  ClusterIP   10.152.183.1             443/TCP                  94m
...
NAMESPACE     NAME                                             READY   UP-TO-DATE   AVAILABLE   AGE
default       deployment.apps/nginx                            1/1     1            1           31s
kube-system   deployment.apps/coredns                          1/1     1            1           90m
...

NAMESPACE     NAME                                                        DESIRED   CURRENT   READY   AGE
default       replicaset.apps/nginx-f89759699                             1         1         1       31s
kube-system   replicaset.apps/coredns-588fd544bf                          1         1         1       90m
...

$ kubectl get all --all-namespaces
NAMESPACE     NAME                                                  READY   STATUS    RESTARTS   AGE
default       pod/nginx-f89759699-jnlng                             1/1     Running   0          2m38s
...
NAMESPACE     NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
...
NAMESPACE     NAME                                             READY   UP-TO-DATE   AVAILABLE   AGE
default       deployment.apps/nginx                            1/1     1            1           2m38s

NAMESPACE     NAME                                                        DESIRED   CURRENT   READY   AGE
default       replicaset.apps/nginx-f89759699                             1         1         1       2m38s
...

$ wget 10.152.183.151
--2020-05-25 14:26:14--  http://10.152.183.151/
Connecting to 10.152.183.151:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2020-05-25 14:26:14 ERROR 404: Not Found.

$ kubectl get deployments
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           3m40s

$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-f89759699-jnlng   1/1     Running   0          3m46s

$ microk8s.kubectl expose deployment nginx --port 80 --target-port 80 --type ClusterIP --selector=run=nginx --name nginx
service/nginx exposed

$ microk8s.kubectl get all
NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-f89759699-jnlng   1/1     Running   0          9m29s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.152.183.1            443/TCP   103m
service/nginx        ClusterIP   10.152.183.55           80/TCP    3m55s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           9m29s

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-f89759699   1         1         1       9m29s

$ wget 10.152.183.55
--2020-05-25 14:33:02--  http://10.152.183.55/
Connecting to 10.152.183.55:80... failed: Connection refused.

NOTE: Kubernetes itself does not provide a load balancer; load balancers are assumed to be an external component. MicroK8s does not ship one either, and even if it did, there is only one node here, so there would be nothing to balance load over. With no external LB there is no way to assign an (LB-provided) external IP to a service; to expose a service on a single node, use the NodePort service type to map it to a port on the host.

nginx attempt 2

$ kubectl delete services nginx-service
service "nginx-service" deleted

$ kubectl delete deployment nginx
deployment.apps "nginx" deleted

$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

$ kubectl expose deployment nginx --type NodePort --port=80 --name nginx-service
service/nginx-service exposed

$ kubectl get all
NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-f89759699-jr4gz   1/1     Running   0          23s

NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes      ClusterIP   10.152.183.1             443/TCP        19h
service/nginx-service   NodePort    10.152.183.229           80:30856/TCP   10s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           23s

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-f89759699   1         1         1       23s

$ wget 10.152.183.229
Connecting to 10.152.183.229:80... connected.
HTTP request sent, awaiting response... 200 OK
2020-05-26 08:05:22 (150 MB/s) - ‘index.html’ saved [612/612]
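
Since this is a NodePort service, it should also be reachable from another machine on the LAN via the node's own IP and the mapped port (30856 in this run), assuming 192.168.1.112 is the MicroK8s host:

$ wget http://192.168.1.112:30856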

ingress

$ cat ingress-nginx.yaml 
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: http-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx-service
          servicePort: 80

$ kubectl apply -f ingress-nginx.yaml 
ingress.networking.k8s.io/http-ingress created
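
NOTE: if the ingress controller addon is not already enabled, it may also need to be turned on first:

$ microk8s.enable ingress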

NOTE: https://192.168.1.112/ pulls up Nginx homepage

after reboot:

next

  • persistent storage test


May 20

Systemctl With Docker and ZFS

I previously wrote about Ubuntu 20.04 with a ZFS rpool (boot volume) on OCI (Oracle Cloud Infrastructure). If you are using a ZFS rpool you probably won't have the silly race condition I am writing about here.

For this POC I was using Docker with an iSCSI-mounted disk for the Docker root folder. Unfortunately there are a couple of issues. The first is not related to Docker: on boot the zpool is not imported; Fix A is for that. The second issue is that Docker may not wait for the zpool to be ready before it starts, and will just automatically lay down the docker folder you specified in daemon.json; of course ZFS will then not mount, even if the pool was imported with Fix A.

Fix A

If you don't know this yet: please create your zpool with the by-id device name, not with, for example, /dev/sdb. If the zpool was already created you can fix this after the fact by exporting and importing it and updating the cache.
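
The after-the-fact fix is roughly this sketch (pool name tank01 as used below; stop Docker and anything else using the pool first):

# zpool export tank01
# zpool import -d /dev/disk/by-id tank01
# zpool status tank01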

You can look at systemctl status zfs-import-cache.service to see what happened to this zpool at boot. There are many opinions on how to fix this; suffice it to say this is what I used, and it has worked reliably for me so far.

Create service

# cat /etc/systemd/system/tank01-pool.service
[Unit]
Description=Zpool start service
After=dev-disk-by\x2did-wwn\x2d0x6081a22b818449d287b13b59a47bc407.device

[Service]
Type=simple
ExecStart=/usr/sbin/zpool import tank01
ExecStartPost=/usr/bin/logger "started ZFS pool tank01"

[Install]
WantedBy=dev-disk-by\x2did-wwn\x2d0x6081a22b818449d287b13b59a47bc407.device

# systemctl daemon-reload
# systemctl enable tank01-pool.service

# systemctl status tank01-pool.service
● tank01-pool.service - Zpool start service
     Loaded: loaded (/etc/systemd/system/tank01-pool.service; enabled; vendor preset: enabled)
     Active: inactive (dead) since Tue 2020-05-19 02:18:05 UTC; 5min ago
   Main PID: 1018 (code=exited, status=0/SUCCESS)

May 19 02:18:01 usph-vmli-do01 systemd[1]: Starting Zpool start service...
May 19 02:18:01 usph-vmli-do01 root[1019]: started ZFS pool tank01
May 19 02:18:01 usph-vmli-do01 systemd[1]: Started Zpool start service.
May 19 02:18:05 usph-vmli-do01 systemd[1]: tank01-pool.service: Succeeded.

To find your exact device

# systemctl list-units --all --full | grep disk | grep tank01
      dev-disk-by\x2did-scsi\x2d36081a22b818449d287b13b59a47bc407\x2dpart1.device                                                                                       loaded    active   plugged   BlockVolume tank01                                                           
      dev-disk-by\x2did-wwn\x2d0x6081a22b818449d287b13b59a47bc407\x2dpart1.device                                                                                       loaded    active   plugged   BlockVolume tank01                                                           
      dev-disk-by\x2dlabel-tank01.device                                                                                                                                loaded    active   plugged   BlockVolume tank01                                                           
      dev-disk-by\x2dpartlabel-zfs\x2d9eb05ecca4da97f6.device                                                                                                           loaded    active   plugged   BlockVolume tank01                                                           
      dev-disk-by\x2dpartuuid-d7d69ee0\x2d4e45\x2d3148\x2daa7a\x2d7cf375782813.device                                                                                   loaded    active   plugged   BlockVolume tank01                                                           
      dev-disk-by\x2dpath-ip\x2d169.254.2.2:3260\x2discsi\x2diqn.2015\x2d12.com.oracleiaas:16bca793\x2dc861\x2d49e8\x2da903\x2dd6b3809fe694\x2dlun\x2d1\x2dpart1.device loaded    active   plugged   BlockVolume tank01                                                           
      dev-disk-by\x2duuid-9554707573611221628.device                                                                                                                    loaded    active   plugged   BlockVolume tank01                                                           

# ls -l /dev/disk/by-id/ | grep sdb
    lrwxrwxrwx 1 root root  9 May 18 22:32 scsi-36081a22b818449d287b13b59a47bc407 -> ../../sdb
    lrwxrwxrwx 1 root root 10 May 18 22:32 scsi-36081a22b818449d287b13b59a47bc407-part1 -> ../../sdb1
    lrwxrwxrwx 1 root root 10 May 18 22:33 scsi-36081a22b818449d287b13b59a47bc407-part9 -> ../../sdb9
    lrwxrwxrwx 1 root root  9 May 18 22:32 wwn-0x6081a22b818449d287b13b59a47bc407 -> ../../sdb
    lrwxrwxrwx 1 root root 10 May 18 22:32 wwn-0x6081a22b818449d287b13b59a47bc407-part1 -> ../../sdb1
    lrwxrwxrwx 1 root root 10 May 18 22:33 wwn-0x6081a22b818449d287b13b59a47bc407-part9 -> ../../sdb9

Fix B

This was done earlier; I am just showing for reference how you enable the Docker ZFS storage driver.

# cat /etc/docker/daemon.json
{ 
  "storage-driver": "zfs",
  "data-root": "/tank01/docker"
}

For the timing issue you have many options in systemd, and probably better ones than this. For me, simply delaying a little until the iSCSI attach and the zpool import/mount are done works OK (a drop-in alternative is sketched at the end of this section).

# grep sleep /etc/systemd/system/multi-user.target.wants/docker.service 
ExecStartPre=/bin/sleep 60

# systemctl daemon-reload


May 20

Test OCI (Oracle Cloud Infrastructure) Vault Secret

assume oci cli working

test an old cli script to list buckets

$ ./list_buckets.sh

{
      "data": [
        {
          "compartment-id": "*masked*",
          "created-by": "*masked*",
          "defined-tags": null,
          "etag": "*masked*",
          "freeform-tags": null,
          "name": "bucket-20200217-1256",
          "namespace": "*masked*",
          "time-created": "2020-02-17T18:56:07.773000+00:00"
        }
      ]
}

test old python script

$ python3 show_user.py 
{
      "capabilities": {
        "can_use_api_keys": true,
        "can_use_auth_tokens": true,
        "can_use_console_password": true,
        "can_use_customer_secret_keys": true,
        "can_use_o_auth2_client_credentials": true,
        "can_use_smtp_credentials": true
      },
      "compartment_id": "*masked*",
      "defined_tags": {},
      "description": "*masked*",
      "email": "*masked*",
      "external_identifier": null,
      "freeform_tags": {},
      "id": "*masked*",
      "identity_provider_id": null,
      "inactive_status": null,
      "is_mfa_activated": false,
      "lifecycle_state": "ACTIVE",
      "name": "*masked*",
      "time_created": "2020-02-11T18:24:37.809000+00:00"
}

create secret in console

  • Security > Vault > testvault
  • Create key rr
  • Create secret rr

test python code

$ python3 check-secret.py *masked*
    Reading vaule of secret_id *masked*.
    Decoded content of the secret is: blah.

test cli

$ oci vault secret list --compartment-id *masked*

     "data": [
       {
         "compartment-id": "*masked*",
         "defined-tags": {
           "Oracle-Tags": {
             "CreatedBy": "*masked*",
             "CreatedOn": "2020-05-19T19:13:52.028Z"
           }
         },
         "description": "test",
         "freeform-tags": {},
         "id": "*masked*",
         "key-id": "*masked*",
         "lifecycle-details": null,
         "lifecycle-state": "ACTIVE",
         "secret-name": "rr",
         "time-created": "2020-05-19T19:13:51.804000+00:00",
         "time-of-current-version-expiry": null,
         "time-of-deletion": null,
         "vault-id": "*masked*"
       }
     ]
    }

$ oci vault secret get --secret-id *masked*
    {
      "data": {
        "compartment-id": "*masked*",
        "current-version-number": 1,
        "defined-tags": {
          "Oracle-Tags": {
            "CreatedBy": "*masked*",
            "CreatedOn": "2020-05-19T19:13:52.028Z"
          }
        },
        "description": "test",
        "freeform-tags": {},
        "id": "*masked*",
        "key-id": "*masked*",
        "lifecycle-details": null,
        "lifecycle-state": "ACTIVE",
        "metadata": null,
        "secret-name": "rr",
        "secret-rules": [],
        "time-created": "2020-05-19T19:13:51.804000+00:00",
        "time-of-current-version-expiry": null,
        "time-of-deletion": null,
        "vault-id": "*masked*"
      },
      "etag": "*masked*"
    }

$ oci secrets secret-bundle get --secret-id *masked*
    {
      "data": {
        "metadata": null,
        "secret-bundle-content": {
          "content": "YmxhaA==",
          "content-type": "BASE64"
        },
        "secret-id": "*masked*",
        "stages": [
          "CURRENT",
          "LATEST"
        ],
        "time-created": "2020-05-19T19:13:51.804000+00:00",
        "time-of-deletion": null,
        "time-of-expiry": null,
        "version-name": null,
        "version-number": 1
      },
      "etag": "*masked*--gzip"
    }

$ echo YmxhaA== | base64 --decode
    blah

one liner

$ oci secrets secret-bundle get --secret-id ocid1.vaultsecret.oc1.phx.*masked* --query "data .{s:\"secret-bundle-content\"}" | jq -r '.s.content' | base64 --decode
blah
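
A variant of the same one-liner (same secret OCID assumed) that drops the jq dependency by using --raw-output with a quoted JMESPath key:

$ oci secrets secret-bundle get --secret-id ocid1.vaultsecret.oc1.phx.*masked* --raw-output --query 'data."secret-bundle-content".content' | base64 --decode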
