OCI CLI Query

If you want to manipulate the output of Oracle Cloud Infrastructure CLI commands, you can pipe the output through jq (I have jq examples elsewhere). You can also use the --query option as follows.

$ oci network vcn list --compartment-id <> --config-file <> --profile <> --cli-rc-file <> --output table --query 'data[*].{"display-name":"display-name", "vcn-domain-name":"vcn-domain-name", "cidr-block":"cidr-block", "lifecycle-state":"lifecycle-state"}'
+--------------+-----------------+-----------------+-----------------------------+
| cidr-block   | display-name    | lifecycle-state | vcn-domain-name             |
+--------------+-----------------+-----------------+-----------------------------+
| 10.35.0.0/17 | My Primary VCN  | AVAILABLE       | myprimaryvcn.oraclevcn.com  |
+--------------+-----------------+-----------------+-----------------------------+

And for good measure, also a jq example, plus a CSV filter.

$ oci os object list --config-file /root/.oci/config --profile oci-backup --bucket-name "commvault-backup" | jq -r '.data[] | [.name,.size] | @csv'
"SILTFS_04.23.2019_19.21/CV_MAGNETIC/_DIRECTORY_HOLDER_",0
"SILTFS_04.23.2019_19.21/_DIRECTORY_HOLDER_",0

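The --query option uses JMESPath. If an expression gets hairy, it can be handy to prototype it in Python against saved CLI output before putting it on the command line. A minimal sketch, assuming the third-party jmespath package (pip install jmespath) and a file vcns.json holding saved output from oci network vcn list:

# Prototype the table's --query expression against saved JSON output.
import json
import jmespath

with open('vcns.json') as f:   # assumed file: saved `oci network vcn list` output
    doc = json.load(f)

# Same JMESPath multiselect hash used with --query above.
expr = ('data[*].{"display-name": "display-name", '
        '"vcn-domain-name": "vcn-domain-name", '
        '"cidr-block": "cidr-block", '
        '"lifecycle-state": "lifecycle-state"}')

for row in jmespath.search(expr, doc):
    print(row)
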
Azure AD SSO Login to AWS CLI

Note that setting up the services themselves is out of scope here. This article is about using a Node application to log in to Azure on a client and then being able to use the AWS CLI. Specifically, this information applies to a Linux desktop.

Setting up the services is documented here: https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/amazon-web-service-tutorial

We are following this tutorial: https://github.com/dtjohnson/aws-azure-login. The focus is on one account having an administrative role, then switching to other accounts so that the original role can administer resources in them.

Linux Lite OS Setup

# cat /etc/issue
Linux Lite 4.2 LTS \n \l
# apt install nodejs npm
# npm install -g aws-azure-login --unsafe-perm
# chmod -R go+rx $(npm root -g)
# apt install awscli 

Configure Named Profile (First Time)

$ aws-azure-login --profile awsaccount1 --configure
Configuring profile 'awsaccount1'
? Azure Tenant ID: domain1.com
? Azure App ID URI: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
? Default Username: myaccount@domain1.com
? Default Role ARN (if multiple): 
arn:aws:iam::xxxxxxxxxxxx:role/awsaccount1-Admin-Role
? Default Session Duration Hours (up to 12): 12
Profile saved.

Login with Named Profile

$ aws-azure-login --profile awsaccount1
Logging in with profile 'awsaccount1'...
? Username: myaccount1@mydomain1.com
? Password: [hidden]
We texted your phone +X XXXXXXXXXX. Please enter the code to sign in.
? Verification Code: 213194
? Role: arn:aws:iam::xxxxxxxxxxxx:role/awsaccount1-Admin-Role
? Session Duration Hours (up to 12): 12
Assuming role arn:aws:iam::xxxxxxxxxxxx:role/awsaccount1-Admin-Role

Update Credentials File For Different Accounts to Switch Roles To

$ cat .aws/credentials 
[awsaccount2]
region=us-east-1
role_arn=arn:aws:iam::xxxxxxxxxxxx:role/awsaccount1-Admin
source_profile=awsaccount1

[awsaccount3]
region=us-east-1
role_arn=arn:aws:iam::xxxxxxxxxxxx:role/awsaccount1-Admin
source_profile=awsaccount1

[awsaccount1]
aws_access_key_id=XXXXXXXXXXXXXXXXXXXX
aws_secret_access_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
aws_session_token="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=="
aws_session_expiration=2019-04-18T10:22:06.000Z

Test Access

$ aws iam list-account-aliases --profile awsaccount2
{
    "AccountAliases": [
        "awsaccount2"
    ]
}
$ aws iam list-account-aliases --profile awsaccount3
{
    "AccountAliases": [
        "awsaccount3"
    ]
}
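If you prefer scripting the same check, here is a minimal boto3 sketch (pip install boto3). The profile names are the ones from the credentials file above; boto3 performs the same source_profile role chaining the CLI does:

# Verify each profile resolves to the expected account, using boto3.
import boto3

for profile in ('awsaccount1', 'awsaccount2', 'awsaccount3'):
    session = boto3.Session(profile_name=profile)
    identity = session.client('sts').get_caller_identity()
    print('%s -> account %s as %s' % (profile, identity['Account'], identity['Arn']))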

So next time just log in with the named profile awsaccount1 and you have AWS CLI access to the other accounts. Note you will need to make sure ARNs, roles, etc. are 100% accurate. It gets a bit confusing.

Also, this is informational only; you carry your own risk of accessing the wrong account.

Powerline Font Issue

I am using Powerline in my terminals and had an issue with the font. After messing with it I realized I was using a font that had not been patched for Powerline. I changed the gnome-terminal font from “Fira Mono Regular”
to “DejaVu Sans Mono Book” and it worked.

My steps on Pop!_OS, which is an Ubuntu 18.10 flavor.

# apt install powerline
$  rrosso  ~  tail .bashrc
# Powerline
if [ -f /usr/share/powerline/bindings/bash/powerline.sh ]; then
    source /usr/share/powerline/bindings/bash/powerline.sh
fi

Note you may need to regenerate the font cache or restart X.

Quick Backup and Purge

I highly recommend using restic instead of what I am talking about here.

Mostly I am just documenting this for my own reference and this is not a great backup solution by any means. Also note:

  1. This script creates backups locally; of course, the idea would be to adapt the script to use NFS or, even better, object storage.
  2. This is just a starting point, for example if you would like to back up very small datasets (like /etc) and also purge older backups.
  3. Adapt for your own policies; I have used roughly a gold policy here (7 daily, 4 weekly, 12 monthly, 5 yearly).
  4. Purging should perhaps rather be done by actual file dates and not by counting; see the sketch after the cron example below.
#!/usr/bin/python
#
#: Script Name  : tarBak.py
#: Author       : Riaan Rossouw
#: Date Created : March 13, 2019
#: Date Updated : March 13, 2019
#: Description  : Python Script to manage tar backups
#: Examples     : tarBak.py -t target -f folders -c
#:              : tarBak.py --target <backup folder> --folders <folders> --create

import optparse, os, glob, sys, re, datetime
import tarfile
import socket

__version__ = '0.9.1'
optdesc = 'This script is used to manage tar backups of files'

parser = optparse.OptionParser(description=optdesc,version=os.path.basename(__file__) + ' ' + __version__)
parser.formatter.max_help_position = 50
parser.add_option('-t', '--target', help='Specify Target', dest='target', action='append')
parser.add_option('-f', '--folders', help='Specify Folders', dest='folders', action='append')
parser.add_option('-c', '--create', help='Create a new backup', dest='create', action='store_true',default=False)
parser.add_option('-p', '--purge', help='Purge older backups per policy', dest='purge', action='store_true',default=False)
parser.add_option('-g', '--group', help='Policy group', dest='group', action='append')
parser.add_option('-l', '--list', help='List backups', dest='listall', action='store_true',default=False)
opts, args = parser.parse_args()

def make_tarfile(output_filename, source_dirs):
  with tarfile.open(output_filename, "w:gz") as tar:
    for source_dir in source_dirs:
      tar.add(source_dir, arcname=os.path.basename(source_dir))

def getBackupType(backup_time_created):
  utc,mt = str(backup_time_created).split('.')
  d = datetime.datetime.strptime(utc, '%Y-%m-%d %H:%M:%S').date()
  dt = d.strftime('%a %d %B %Y')

  # Check the most specific period first: Jan 1 is YEARLY, any other first
  # of the month is MONTHLY, Sundays are WEEKLY, everything else DAILY.
  if (d.day == 1) and (d.month == 1):
    backup_t = 'YEARLY'
  elif d.day == 1:
    backup_t = 'MONTHLY'
  elif d.weekday() == 6:
    backup_t = 'WEEKLY'
  else:
    backup_t = 'DAILY'

  return (backup_t,dt)

def listBackups(target):
  print ("Listing backup files..")

  files = glob.glob(target + "*DAILY*")
  files.sort(key=os.path.getmtime, reverse=True)

  for file in files:
    print(file)
  
def purgeBackups(target, group):
  print ("Purging backup files..this needs testing and more logic for SILVER and BRONZE policies?")

  files = glob.glob(target + "*.tgz*")
  files.sort(key=os.path.getmtime, reverse=True)
  daily = 0
  weekly = 0
  monthly = 0
  yearly = 0
 
  for file in files:
    comment = ""
    if ( ("DAILY" in file) or ("WEEKLY" in file) or ("MONTHLY" in file) or ("YEARLY" in file) ):
      # Extract the backup type from the filename; match any 4-digit year
      # rather than hardcoding 2019.
      sub = re.search(r'files-(.+?)-\d{4}', file)
      t = sub.group(1)
    else:
      t = "MANUAL"

    if t == "DAILY":
      comment = "DAILY"
      daily = daily + 1
      if daily > 7:
        comment = comment + " this one is more than 7 deleting"
        os.remove(file)
    elif t == "WEEKLY":
      comment = "Sun"
      weekly = weekly + 1
      if weekly > 4:
        comment = comment + " this one is more than 4 deleting"
        os.remove(file)
    elif t  == "MONTHLY":
      comment = "01"
      monthly = monthly + 1
      if monthly > 12:
       comment = comment + " this one is more than 12 deleting"
       os.remove(file)
    elif t  == "YEARLY":
      comment = "01"
      yearly = yearly + 1
      if yearly > 5:
       comment = comment + " this one is more than 5 deleting"
       os.remove(file)
    else:
      comment = " manual snapshot not purging"
      
    if  "this one " in comment:
      print ('DELETE: {:25}: {:25}'.format(file, comment) )

def createBackup(target, folders, group):
  print ("creating backup of " + str(folders))
  hostname = socket.gethostname()
  creationDate = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S.0")
  t,ds = getBackupType(creationDate)
  BackupName = target + "/" + hostname + '-files-' + t + "-" + datetime.datetime.now().strftime("%Y%m%d-%H%MCST") + '.tgz'

  # Only snapshot when the backup type is covered by the policy group:
  # GOLD takes everything, SILVER weekly and up, BRONZE monthly and up.
  proceed = "SNAPSHOT NOT NEEDED AT THIS TIME PER THE POLICY"
  if (group == "BRONZE") and (t in ("MONTHLY", "YEARLY")):
    proceed = "CLEAR TO SNAP"
  elif (group == "SILVER") and (t in ("WEEKLY", "MONTHLY", "YEARLY")):
    proceed = "CLEAR TO SNAP"
  elif group == "GOLD":
    proceed = "CLEAR TO SNAP"

  if proceed == "CLEAR TO SNAP":
    make_tarfile(BackupName, folders)
  else:
    print (proceed)

def main():
  if opts.target:
    target = opts.target[0]
  else:
    print ("\n\n must specify target folder")
    exit(0)

  group = opts.group[0] if opts.group else "GOLD"  # default policy when -g is omitted

  if opts.listall:
    listBackups(target)
  else:
    if opts.create:
      if opts.folders:
        folders = opts.folders[0].split(',')
      else:
        print ("\n\n must specify folders")
        exit(0)
      createBackup(target, folders, group)

    if opts.purge:
      purgeBackups(target, group)

if __name__ == '__main__':
  main()

Example cron entry. Use root if you need to back up files that are only accessible as root.

$ crontab -l | tail -1
0 5 * * * cd /Src/tarBak/ ; python tarBak.py -t /tmp/MyBackups/ -f '/home/rrosso,/var/spool/syslog' -c 2>&1
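As mentioned in note 4 above, purging by actual file dates is more robust than counting. A minimal, untested sketch of that idea for the DAILY backups (extend the cutoffs for weekly/monthly/yearly per your own policy; the path matches the cron example):

# Date-based purge sketch: remove DAILY backups older than 7 days.
import datetime
import glob
import os

CUTOFF_DAYS = 7
now = datetime.datetime.now()

for path in glob.glob('/tmp/MyBackups/*DAILY*.tgz'):
    # Compare the file's modification time against the cutoff.
    mtime = datetime.datetime.fromtimestamp(os.path.getmtime(path))
    if (now - mtime).days > CUTOFF_DAYS:
        print('DELETE (older than %d days): %s' % (CUTOFF_DAYS, path))
        os.remove(path)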

VirtualBox Guest Additions Shared Folders

This may help someone, so I am jotting down an issue I had. On some Linux flavors you may see the VirtualBox Guest Additions shared folders not auto-mounting on your desktop. I have seen this before, but recently it happened to me on Linux Lite.

The issue is that systemd is not starting the vboxadd-service. I am not sure if this is the best fix, but for now I removed systemd-timesyncd.service from the Conflicts section in the unit file, as shown below.

Also note that I completely purged the v5.x Guest Additions that came standard and installed v6 from the Guest Additions ISO.

# pwd
/lib/systemd/system
# diff vboxadd-service.service /tmp/vboxadd-service.service 
6c6
< Conflicts=shutdown.target
---
> Conflicts=shutdown.target systemd-timesyncd.service

Update 3/29/19:

The above option means the unit file change will be reversed when you update the Guest Additions. So another option for now is to disable systemd-timesyncd.service. I am not sure if that breaks the guest’s time sync, but it sounds like the VirtualBox Guest Additions sync time with the host anyhow.

# systemctl disable systemd-timesyncd.service
Removed /etc/systemd/system/sysinit.target.wants/systemd-timesyncd.service.

ZFS on Linux SMB Sharing

Having worked on and liked ZFS for a long time, I am now using ZFS on my main Linux desktop. I thought it would be nice if I could just turn on SMB sharing using ZFS, but after playing with this for a while I gave up. One person on the Internet said it best: let ZFS take care of the file system and let Samba take care of SMB sharing. I came to the same conclusion. I am recording some of my notes and commands for my own reference; maybe someone else will find them useful.

Unmount the old ext4 partition and create a pool. Of course, don’t create a pool on a disk you have DATA on!

# umount /DATA 
# fdisk -l | grep sd
# zpool create -m /DATA DATA /dev/sdb1
# zpool create -f -m /DATA DATA /dev/sdb1

Turn sharing on the ZFS way.

# apt install samba
# zfs set sharesmb=on DATA
# pdbedit -a rrosso

I got a “parameter is incorrect” error from a Windows client, so I gave up on this and shared using smb.conf instead.

# zfs set sharesmb=off DATA
# zfs get sharesmb DATA
NAME  PROPERTY  VALUE     SOURCE
DATA  sharesmb  off       local

# tail -10 /etc/samba/smb.conf 
[DATA]
path = /DATA
public = yes
writable = yes
create mask = 0775
directory mask = 0775

# systemctl restart smbd
# net usershare list

# testparm 
{..}
[DATA]
	create mask = 0775
	directory mask = 0775
	guest ok = Yes
	path = /DATA
	read only = No

Some commands and locations for troubleshooting:

# smbstatus 
# testparm
# cat /etc/dfs/sharetab 
# net usershare list
# ls /var/lib/samba/usershares/
# cat /var/lib/samba/usershares/data 
# pdbedit -L

AWS Cognito and S3 Useful Commands

While I am delving into AWS Cognito and learning how it interacts with other services, for example S3 object storage, I am jotting down some of the more useful CLI commands. This can be quite daunting to learn, so it is very helpful to retain the commands for future reference. Of course this can all be done in the console as well if that is your preference. I like the CLI (or, even better, Terraform or CloudFormation).

The examples may be useful when creating the authentication and authorization bits for a JavaScript SDK or JavaScript framework (like Angular) application to upload files into an S3 bucket after being authenticated by the application. Note I use jq to filter output in many cases.

S3 Bucket

$ aws s3api create-bucket --bucket vault.mydomain.com --region us-east-1
$ aws s3api put-bucket-cors --bucket vault.mydomain.com --cors-configuration file://vault.mydomain.com-cors-policy.json
$ aws iam create-policy --policy-name vault.mydomain.com-admin-policy --policy-document file://vault.mydomain.com-admin-policy.json

Cognito User Pool

$ aws cognito-idp create-user-pool --pool-name mydomain-vault-user-pool   
$ aws cognito-idp list-user-pools --max-results 10 | jq -r '.UserPools[] | [.Id,.Name] | @csv' | grep vault  # get user-pool-id for create-user-pool-client step
$ aws cognito-idp create-user-pool-client --user-pool-id <your-userPoolId> --client-name mydomain-vault

Cognito Create an Admin User in the User Pool and do password reset flow

$ aws cognito-idp list-user-pool-clients --user-pool-id <your-userPoolId> --max-results 10 | jq -r '.UserPoolClients[] | [.ClientId,.ClientName] | @csv'  ## get client-id for next step
$ aws cognito-idp update-user-pool-client --user-pool-id <your-userPoolId> --client-id <your-clientId> --explicit-auth-flows ADMIN_NO_SRP_AUTH
$ aws cognito-idp admin-create-user --user-pool-id <your-userPoolId> --username admin --desired-delivery-mediums EMAIL --user-attributes Name=email,Value=admin@mydomain.com
## Check the above email address for temp password
$ aws cognito-idp admin-initiate-auth --user-pool-id <your-userPoolId> --client-id <your-clientId> --auth-flow ADMIN_NO_SRP_AUTH --auth-parameters USERNAME=admin,PASSWORD="<temp password from email>"
## Use the very long session string from the above output to respond, and set the admin user password to <new password complying with the password policy>
$ aws cognito-idp admin-respond-to-auth-challenge --user-pool-id <your-userPoolId> --client-id <your-clientId> --challenge-name NEW_PASSWORD_REQUIRED --challenge-responses NEW_PASSWORD="<your-new-password>",USERNAME=admin --session "<very long session string>"
$ aws cognito-idp update-user-pool-client --user-pool-id <your-userPoolId> --client-id <your-clientId> --explicit-auth-flows
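The same password-reset flow can also be scripted with boto3 if the CLI quoting gets tedious. A sketch, with the pool id, client id, and passwords as placeholders:

# Sketch of the ADMIN_NO_SRP_AUTH password-reset flow with boto3.
# <your-userPoolId>, <your-clientId> and the passwords are placeholders.
import boto3

idp = boto3.client('cognito-idp', region_name='us-east-1')

resp = idp.admin_initiate_auth(
    UserPoolId='<your-userPoolId>',
    ClientId='<your-clientId>',
    AuthFlow='ADMIN_NO_SRP_AUTH',
    AuthParameters={'USERNAME': 'admin', 'PASSWORD': '<temp password from email>'},
)

# The first login comes back with a NEW_PASSWORD_REQUIRED challenge.
if resp.get('ChallengeName') == 'NEW_PASSWORD_REQUIRED':
    idp.admin_respond_to_auth_challenge(
        UserPoolId='<your-userPoolId>',
        ClientId='<your-clientId>',
        ChallengeName='NEW_PASSWORD_REQUIRED',
        ChallengeResponses={'USERNAME': 'admin', 'NEW_PASSWORD': '<your-new-password>'},
        Session=resp['Session'],
    )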

Cognito Create Identity Pool

$ aws cognito-idp describe-user-pool --user-pool-id <your-userPoolId> | jq -r '.[] | [.Name,.Arn] | @csv' 	## get UserPool Arn
$ aws cognito-identity create-identity-pool --identity-pool-name "mydomain vault identity pool" --allow-unauthenticated-identities --cognito-identity-providers ProviderName="cognito-idp.us-east-1.amazonaws.com/<your-userPoolId>",ClientId="<your-clientId>"
$ aws iam create-role --role-name vault.mydomain.com-admin-role --assume-role-policy-document file://vault.mydomain.com-admin-trust-role.json
$ aws cognito-identity list-identity-pools --max-results 3 | jq -r '.IdentityPools[] | [.IdentityPoolId,.IdentityPoolName] | @csv' | grep vault ## get identity pool id
## use our new role for the authenticated role. For unauthenticated I used an old one, since I don't plan unauthenticated access here. If you do need unauthenticated access, create a role and use it below.
$ aws cognito-identity set-identity-pool-roles --identity-pool-id <your-identityPoolId> --roles authenticated="<your-arn-authenticated-role>",unauthenticated="<your-arn-unauthenticated-role>"

In the console, change Authenticated role selection to “Choose role from token” and Role resolution to “Use default Authenticated role”. I still need to see if this can be done from the CLI.

IAM Attach Role to Policy

## get role names just for your verification
$ aws cognito-identity get-identity-pool-roles --identity-pool-id <your-identityPoolId> | jq -r '[.IdentityPoolId,.Roles.authenticated,.Roles.unauthenticated] | @csv' 
$ aws iam list-policies | jq -r '.Policies[] | [.PolicyName,.Arn] | @csv' | grep vault	## get policy Arn
$ aws iam attach-role-policy --policy-arn arn:aws:iam::660032875792:policy/vault.mydomain.com-admin-policy --role-name vault.mydomain.com-admin-role

Application

The application needs to use the correct UserPoolId, App ClientId, IdentityPoolId, S3 bucket name, and region. It is very important to understand “Integrating a User Pool with an Identity Pool”. See: https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-integrating-user-pools-with-identity-pools.html
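A rough Python/boto3 sketch of what the application does behind the scenes: authenticate against the user pool, trade the ID token for identity pool credentials, then hit S3. The JavaScript SDK flow is equivalent; all <...> values are the placeholders used above, and the bucket and region match the earlier commands:

# Sketch: exchange a user pool ID token for AWS credentials via the identity pool.
import boto3

idp = boto3.client('cognito-idp', region_name='us-east-1')
auth = idp.admin_initiate_auth(
    UserPoolId='<your-userPoolId>',
    ClientId='<your-clientId>',
    AuthFlow='ADMIN_NO_SRP_AUTH',
    AuthParameters={'USERNAME': 'admin', 'PASSWORD': '<your-password>'},
)
id_token = auth['AuthenticationResult']['IdToken']

# The Logins key is the user pool provider name configured on the identity pool.
provider = 'cognito-idp.us-east-1.amazonaws.com/<your-userPoolId>'
ci = boto3.client('cognito-identity', region_name='us-east-1')
identity = ci.get_id(IdentityPoolId='<your-identityPoolId>', Logins={provider: id_token})
creds = ci.get_credentials_for_identity(
    IdentityId=identity['IdentityId'], Logins={provider: id_token},
)['Credentials']

# Use the temporary credentials from the identity pool to upload to the bucket.
s3 = boto3.client(
    's3',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretKey'],
    aws_session_token=creds['SessionToken'],
)
s3.put_object(Bucket='vault.mydomain.com', Key='hello.txt', Body=b'hello')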

Appendix A: JSON Source used in above commands

$ cat mydomain-vault-s3-upload/vault.mydomain.com-admin-policy.json 
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::vault.mydomain.com",
        "arn:aws:s3:::vault.mydomain.com/*"
      ]
    }
  ]
}
$ cat mydomain-vault-s3-upload/vault.mydomain.com-admin-trust-role.json 
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": {
      "Federated": "cognito-identity.amazonaws.com"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "cognito-identity.amazonaws.com:aud": "<your-identityPoolId>"
      }
    }
  }
}
$ cat mydomain-vault-s3-upload/vault.mydomain.com-cors-policy.json 
{
  "CORSRules": [
    {
      "AllowedOrigins": ["*"],
      "AllowedHeaders": ["*"],
      "AllowedMethods": ["PUT", "GET", "POST", "DELETE"],
      "MaxAgeSeconds": 3000,
      "ExposeHeaders": ["ETag"]
    }
  ]
}

Ping with timestamp

Since I am always looking for my notes, I am adding this snippet here for reference. It is handy for checking how long a reboot takes, for example.

$ ping server1 | xargs -L 1 -I '{}' date '+%Y-%m-%d %H:%M:%S: {}'
2017-06-08 07:13:21: PING server1 (10.1.10.31) 56(84) bytes of data.
2017-06-08 07:13:21: 64 bytes from 10.1.10.31 (10.1.10.31): icmp_seq=1 ttl=246 time=113 ms
2017-06-08 07:13:22: 64 bytes from 10.1.10.31 (10.1.10.31): icmp_seq=2 ttl=246 time=112 ms
2017-06-08 07:13:23: 64 bytes from 10.1.10.31 (10.1.10.31): icmp_seq=3 ttl=246 time=112 ms
2017-06-08 07:13:24: 64 bytes from 10.1.10.31 (10.1.10.31): icmp_seq=4 ttl=246 time=111 ms
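A small Python filter does the same job if you would rather not remember the xargs incantation. A sketch (save it as ts.py, a hypothetical name, and pipe ping through it):

# ts.py - prefix each line read from stdin with a timestamp.
# Usage: ping server1 | python ts.py
import datetime
import sys

for line in iter(sys.stdin.readline, ''):
    print('%s: %s' % (datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'),
                      line.rstrip('\n')))
    sys.stdout.flush()  # flush per line so timestamps appear in real time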

Save WeChat Video Clip

Saving a video to your computer is not as easy in WeChat as one might think. In WhatsApp this is easier, since files are also stored locally.

In WeChat I did the following:

1. Open web.wechat.com. I used Firefox 47.0.1.
2. Scan the QR code from WeChat on the iPhone to log in.
3. Forward the video to your own ID so it shows up in WeChat Web.
4. Right-click and play the video.
5. Right-click and save the video.

** For me, IE 11 and Chrome (on several operating systems) did not work. Most of them saved 0-byte files.

Papyros Shell

Since I want to capture what I did to take a look at the initial stages of the Papyros shell, I am jotting down what worked for me. It sounds like the developers will soon have working downloadable images to try, so it is worth waiting for those. Getting Arch Linux going in VirtualBox is out of scope.

At this point, though, I just wanted to see what it looks like. It sounds like the best option should be the powerpack route: https://github.com/papyros/powerpack
Unfortunately that has a bug right now: https://github.com/papyros/powerpack/issues/6

I tried some of the other options also but ultimately below is the only one that worked for me.

Get yaourt going: https://www.digitalocean.com/community/tutorials/how-to-use-yaourt-to-easily-download-arch-linux-community-packages

yaourt -S qt5-base
yaourt -S qt5-wayland-dev-git
yaourt -S qt5-declarative-git
yaourt -S qt-settings-bzr

mkdir Papyros
cd Papyros

git clone https://github.com/papyros/qml-extras
cd qml-extras
qmake
make
sudo make install
cd ../

git clone https://github.com/papyros/qml-material
cd qml-material
qmake
make
sudo make install
cd ../

git clone https://github.com/papyros/qml-desktop
cd qml-desktop
qmake
make
sudo make install
cd ../

git clone https://github.com/papyros/papyros-shell
cd papyros-shell

qmake

** Had to fix multiple file-not-found references here: ~/Papyros/papyros-shell/papyros-shell.qrc
** Just find the correct location under the Papyros tree and update the references.
** Plus comment out InteractiveNotification.qml with <!-- .. -->

make
./papyros-shell -platform xcb
