Author Archive

May 08

Boto3 DynamoDB Create Table BillingMode

If you have issues creating a DynamoDB table in On-Demand mode, you probably need to upgrade boto3. I was using the apt repository version of python3-boto3 (1.4.2) and getting the message below.

Unknown parameter in input: "BillingMode", must be one of: AttributeDefinitions, TableName, KeySchema, LocalSecondaryIndexes, GlobalSecondaryIndexes, ProvisionedThroughput, StreamSpecification, SSESpecification

I ended up removing the apt repo version and installing boto3 with pip3, which resolved the issue.

# apt remove python3-boto3
# pip3 search boto3
boto3 (1.9.139)                           - The AWS SDK for Python
# pip3 install boto3
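
With a current boto3, a create_table call like this minimal sketch should accept BillingMode (the table and attribute names here are made-up placeholders):

import boto3

dynamodb = boto3.client('dynamodb', region_name='us-east-1')

# On-Demand capacity: BillingMode replaces ProvisionedThroughput
dynamodb.create_table(
    TableName='my-test-table',
    AttributeDefinitions=[{'AttributeName': 'id', 'AttributeType': 'S'}],
    KeySchema=[{'AttributeName': 'id', 'KeyType': 'HASH'}],
    BillingMode='PAY_PER_REQUEST'
)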


Apr 24

OCI CLI Query

If you want to manipulate the output of Oracle Cloud Infrastructure CLI commands, you can pipe the output through jq; I have examples of jq elsewhere. You can also use the --query option, as follows.

$ oci network vcn list --compartment-id <> --config-file <> --profile <> --cli-rc-file <> --output table --query 'data[*].{"display-name":"display-name", "vcn-domain-name":"vcn-domain-name", "cidr-block":"cidr-block", "lifecycle-state":"lifecycle-state"}'
+--------------+-----------------+-----------------+-----------------------------+
| cidr-block   | display-name    | lifecycle-state | vcn-domain-name             |
+--------------+-----------------+-----------------+-----------------------------+
| 10.35.0.0/17 | My Primary VCN  | AVAILABLE       | myprimaryvcn.oraclevcn.com  |
+--------------+-----------------+-----------------+-----------------------------+

And for good measure, here is a jq example as well, piping through the @csv filter.

$ oci os object list --config-file /root/.oci/config --profile oci-backup --bucket-name "commvault-backup" | jq -r '.data[] | [.name,.size] | @csv'
"SILTFS_04.23.2019_19.21/CV_MAGNETIC/_DIRECTORY_HOLDER_",0
"SILTFS_04.23.2019_19.21/_DIRECTORY_HOLDER_",0


Apr 18

Azure AD SSO Login to AWS CLI

Note that setting up the services themselves is out of scope here. This article is about using a Node application to log in to Azure on a client and then being able to use the AWS CLI. Specifically, this information applies to a Linux desktop.

Setting up the services is documented here: https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/amazon-web-service-tutorial

We are following this tutorial, https://github.com/dtjohnson/aws-azure-login, focusing on one account having an administrative role and then switching to other accounts that allow the original role to administer resources.

Linux Lite OS Setup

# cat /etc/issue
Linux Lite 4.2 LTS \n \l
# apt install nodejs npm
# npm install -g aws-azure-login --unsafe-perm
# chmod -R go+rx $(npm root -g)
# apt install awscli 

Configure Named Profile (First Time)

$ aws-azure-login --profile awsaccount1 --configure
Configuring profile ‘awsaccount1’
? Azure Tenant ID: domain1.com
? Azure App ID URI: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
? Default Username: myaccount@domain1.com
? Default Role ARN (if multiple): 
arn:aws:iam::xxxxxxxxxxxx:role/awsaccount1-Admin-Role
? Default Session Duration Hours (up to 12): 12
Profile saved.

Login with Named Profile

$ aws-azure-login --profile awsaccount1
Logging in with profile ‘awsaccount1’...
? Username: myaccount@domain1.com
? Password: [hidden]
We texted your phone +X XXXXXXXXXX. Please enter the code to sign in.
? Verification Code: 213194
? Role: arn:aws:iam::xxxxxxxxxxxx:role/awsaccount1-Admin-Role
? Session Duration Hours (up to 12): 12
Assuming role arn:aws:iam::xxxxxxxxxxxx:role/awsaccount1-Admin-Role

Update the Credentials File for the Accounts You Will Switch Roles To

$ cat .aws/credentials 
[awsaccount2]
region=us-east-1
role_arn=arn:aws:iam::xxxxxxxxxxxx:role/awsaccount1-Admin
source_profile=awsaccount1

[awsaccount3]
region=us-east-1
role_arn=arn:aws:iam::xxxxxxxxxxxx:role/awsaccount1-Admin
source_profile=awsaccount1

[awsaccount1]
aws_access_key_id=XXXXXXXXXXXXXXXXXXXX
aws_secret_access_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
aws_session_token="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=="
aws_session_expiration=2019-04-18T10:22:06.000Z

Test Access

$ aws iam list-account-aliases --profile awsaccount2
{
    "AccountAliases": [
        "awsaccount2"
    ]
}
$ aws iam list-account-aliases --profile awsaccount3
{
    "AccountAliases": [
        "awsaccount3"
    ]
}
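
You can run the same check from Python as well; a minimal boto3 sketch using the switched profile from above:

import boto3

# resolves role_arn/source_profile from ~/.aws/credentials
session = boto3.Session(profile_name='awsaccount2')
print(session.client('sts').get_caller_identity()['Arn'])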

So next time, just log in with the named profile awsaccount1 and you have AWS CLI access to the other accounts. Note that you will need to make sure the ARNs, roles, etc. are 100% accurate; it gets a bit confusing.

Also, this is informational only; you carry your own risk of accessing the wrong account.


Mar 25

Bash History Plus Comment

If you like using Control-R in bash to find previous commands, here is a useful tip: you can add a comment to a command and later find it in a Control-R search by typing the comment. For example, I use it for apt updates. Run the command including your comment (the shell ignores the comment, of course). Then, when searching with Control-R, type the string you used in the comment.

 # apt update ; apt upgrade #quickupdate

Now hit Control-R and type "quick" to search.

(reverse-i-search)`quick': apt update ; apt upgrade #quickupdate


Mar 21

Powerline Font Issue

I am using Powerline in my terminals and had an issue with the font. After messing with it, I realized I was using a font that had not been patched for Powerline. I changed the gnome-terminal font from "Fira Mono Regular" to "DejaVu Sans Mono Book" and it worked.

My steps on Pop!_OS, which is an Ubuntu 18.10 flavor.

# apt install powerline
$  rrosso  ~  tail .bashrc
# Powerline
if [ -f /usr/share/powerline/bindings/bash/powerline.sh ]; then
    source /usr/share/powerline/bindings/bash/powerline.sh
fi

Note that you may need to regenerate the font cache (e.g. with fc-cache -f) or restart X.


Mar 19

Quick Backup and Purge

I highly recommend using restic instead of what I am talking about here.

Mostly I am just documenting this for my own reference; this is not a great backup solution by any means. Also note:

  1. This script creates backups locally; the idea would be to adapt it to use NFS or, even better, object storage.
  2. This is just a starting point, for example if you would like to back up very small datasets (like /etc) and also purge older backups.
  3. Adapt it to your own policies; I have used a gold policy here (7 daily, 4 weekly, 12 monthly, 5 yearly).
  4. Purging should perhaps be done by actual file dates rather than by counting; see the sketch after this list.
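
For note 4, a date-based purge might look like this minimal sketch (the path and the 7-day DAILY retention are assumptions taken from the gold policy above):

import glob, os, time

# delete DAILY backups whose file mtime is older than 7 days
cutoff = time.time() - 7 * 86400
for f in glob.glob('/tmp/MyBackups/*DAILY*.tgz'):
    if os.path.getmtime(f) < cutoff:
        print('DELETE: ' + f)
        os.remove(f)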

The script itself is the same tarBak.py listed in full in the Python Tar Backup and Purge post below.

Example cron entry. Use root if you need to back up files only accessible as root.

$ crontab -l | tail -1
0 5 * * * cd /Src/tarBak/ ; python tarBak.py -t /tmp/MyBackups/ -f '/home/rrosso,/var/spool/syslog' -c 2>&1


Mar 19

VirtualBox Guest Additions Shared Folders

This may help someone, so I am jotting down an issue I had. On some Linux flavors you may see the VirtualBox Guest Additions shared folders not auto-mounting on your desktop. I have seen this before, but most recently it was on Linux Lite.

The issue is that systemd is not starting the vboxadd-service. I am not sure if this is the best fix, but for now I removed systemd-timesyncd.service from the Conflicts section in the unit file, as shown below.

Also note that I completely purged the v5.x Guest Additions that came preinstalled and installed v6 from the Guest Additions ISO.

# pwd
/lib/systemd/system
# diff vboxadd-service.service /tmp/vboxadd-service.service 
6c6
< Conflicts=shutdown.target
---
> Conflicts=shutdown.target systemd-timesyncd.service

Update 3/29/19:

The above approach means the unit file change will be reversed whenever you update the Guest Additions. So another option for now is to disable systemd-timesyncd.service. I am not sure whether that breaks the guest's time sync, but it sounds like the VirtualBox Guest Additions sync time with the host anyhow.

# systemctl disable systemd-timesyncd.service
Removed /etc/systemd/system/sysinit.target.wants/systemd-timesyncd.service.


Mar 13

Python Tar Backup and Purge

While I was working on a related project using Python to write to cloud object storage, along with the logic around purging, I jotted down some quick and dirty code here for my reference to build on. Normally I would recommend the excellent restic program, but in this case I am forced to use native APIs.

This serves as a reminder only; it is a very elementary tar-plus-gzip daily backup with subsequent purging of old backups. Just a test.

#!/usr/bin/python
#
#: Script Name  : tarBak.py
#: Author       : Riaan Rossouw
#: Date Created : March 13, 2019
#: Date Updated : March 13, 2019
#: Description  : Python Script to manage tar backups
#: Examples     : tarBak.py -t target -f 'folder1,folder2' -c -g GOLD
#:              : tarBak.py --target <backup folder> --folders <folders> --create

import optparse, os, glob, sys, re, datetime
import tarfile
import socket

__version__ = '0.9.1'
optdesc = 'This script is used to manage tar backups of files'

parser = optparse.OptionParser(description=optdesc,version=os.path.basename(__file__) + ' ' + __version__)
parser.formatter.max_help_position = 50
parser.add_option('-t', '--target', help='Specify Target', dest='target', action='append')
parser.add_option('-f', '--folders', help='Specify Folders', dest='folders', action='append')
parser.add_option('-c', '--create', help='Create a new backup', dest='create', action='store_true',default=False)
parser.add_option('-p', '--purge', help='Purge older backups per policy', dest='purge', action='store_true',default=False)
parser.add_option('-g', '--group', help='Policy group', dest='group', action='append')
parser.add_option('-l', '--list', help='List backups', dest='listall', action='store_true',default=False)
opts, args = parser.parse_args()

def make_tarfile(output_filename, source_dirs):
  with tarfile.open(output_filename, "w:gz") as tar:
    for source_dir in source_dirs:
      tar.add(source_dir, arcname=os.path.basename(source_dir))

def getBackupType(backup_time_created):
  utc,mt = str(backup_time_created).split('.')
  d = datetime.datetime.strptime(utc, '%Y-%m-%d %H:%M:%S').date()
  dt = d.strftime('%a %d %B %Y')

  # check YEARLY before MONTHLY so January 1 is not claimed by the
  # plain day-of-month test first (d.mon was also a typo for d.month)
  if (d.day == 1) and (d.month == 1):
    backup_t = 'YEARLY'
  elif d.day == 1:
    backup_t = 'MONTHLY'
  elif d.weekday() == 6:
    backup_t = 'WEEKLY'
  else:
    backup_t = 'DAILY'

  return (backup_t,dt)

def listBackups(target):
  print ("Listing backup files..")

  files = glob.glob(target + "*DAILY*")
  files.sort(key=os.path.getmtime, reverse=True)

  for file in files:
    print(file)
  
def purgeBackups(target, group):
  print ("Purging backup files..this needs testing and more logic for SILVER and BRONZE policies?")

  files = glob.glob(target + "*.tgz*")
  files.sort(key=os.path.getmtime, reverse=True)
  daily = 0
  weekly = 0
  monthly = 0
  yearly = 0
 
  for file in files:
    comment = ""
    if ( ("DAILY" in file) or ("WEEKLY" in file) or ("MONTHLY" in file) or ("YEARLY" in file) ):
      # extract the backup type from the file name, matching on the
      # 8-digit date stamp rather than a hardcoded year
      sub = re.search(r'-files-(.+?)-\d{8}-', file)
      t = sub.group(1)
    else:
      t = "MANUAL"

    if t == "DAILY":
      comment = "DAILY"
      daily = daily + 1
      if daily > 7:
        comment = comment + " this one is more than 7 deleting"
        os.remove(file)
    elif t == "WEEKLY":
      comment = "Sun"
      weekly = weekly + 1
      if weekly > 4:
        comment = comment + " this one is more than 4 deleting"
        os.remove(file)
    elif t  == "MONTHLY":
      comment = "01"
      monthly = monthly + 1
      if monthly > 12:
       comment = comment + " this one is more than 12 deleting"
       os.remove(file)
    elif t  == "YEARLY":
      comment = "01"
      yearly = yearly + 1
      if yearly > 5:
       comment = comment + " this one is more than 5 deleting"
       os.remove(file)
    else:
      comment = " manual snapshot not purging"
      
    if  "this one " in comment:
      print ('DELETE: {:25}: {:25}'.format(file, comment) )

def createBackup(target, folders, group):
  print ("creating backup of " + str(folders))
  hostname = socket.gethostname()
  creationDate = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S.0")
  t,ds = getBackupType(creationDate)
  BackupName = target + "/" + hostname + '-files-' + t + "-" + datetime.datetime.now().strftime("%Y%m%d-%H%MCST") + '.tgz'

  proceed = "SNAPSHOT NOT NEEDED AT THIS TIME PER THE POLICY"
  if ( group == "BRONZE") and ( (t == "MONTHLY") or (t == "YEARLY") ):
    proceed = "CLEAR TO SNAP" 
  elif ( group == "SILVER" and (t == "WEEKLY") or (t == "MONTHLY" ) or (t == "YEARLY") ):
    proceed = "CLEAR TO SNAP" 
  elif group == "GOLD":
    proceed = "CLEAR TO SNAP" 
  else:
    result = proceed
  
  make_tarfile(BackupName, folders)

def main():
  if opts.target:
    target = opts.target[0]
  else:
    print ("\n\n must specify target folder")
    exit(0)

  # default the policy group so a missing -g does not crash on opts.group[0]
  group = opts.group[0] if opts.group else 'GOLD'

  if opts.listall:
    listBackups(target)
  else:
    if opts.create:
      if opts.folders:
        folders = opts.folders[0].split(',')
      else:
        print ("\n\n must specify folders")
        exit(0)
      createBackup(target, folders, group)

    if opts.purge:
      purgeBackups(target, group)

if __name__ == '__main__':
  main()

Running it looks like this:

$ python tarBak.py -t /tmp/MyBackups/ -f '/home/rrosso,/var/log/syslog' -g GOLD -c
creating backup of ['/home/rrosso', '/var/log/syslog']

$ python tarBak.py -t /tmp/MyBackups/ -p -g GOLD
Purging backup files..this needs testing and more logic for SILVER and BRONZE policies?
DELETE: /tmp/MyBackups/xubuntu32-files-DAILY-20190313-1420CST.tgz: DAILY this one is more than 7 deleting
$ crontab -l | tail -1
0 5 * * * cd /Src/tarBak/ ; python tarBak.py -t /MyBackups/ -f '/home/rrosso,/var/spool/syslog' -c -p -g GOLD 2>&1


Mar 05

ZFS on Linux SMB Sharing

Having worked with and liked ZFS for a long time, I am now using ZFS on my main Linux desktop. I thought it would be nice to just turn on SMB sharing through ZFS, but after playing with it for a while I gave up. One person on the Internet said it best: let ZFS take care of the file system and let Samba take care of SMB sharing. I came to the same conclusion. I am recording some of my notes and commands here for my own reference; maybe someone else will find them useful.

Unmount the old ext4 partition and create a pool. Of course, don't create a pool on a disk that still has data you need!

# umount /DATA 
# fdisk -l | grep sd
# zpool create -m /DATA DATA /dev/sdb1
# zpool create -f -m /DATA DATA /dev/sdb1

Turn sharing on the ZFS way.

# apt install samba
# zfs set sharesmb=on DATA
# pdbedit -a rrosso

I got "The parameter is incorrect" from a Windows client, so I gave up on this and shared using smb.conf instead.

# zfs set sharesmb=off DATA
# zfs get sharesmb DATA
NAME  PROPERTY  VALUE     SOURCE
DATA  sharesmb  off       local

# tail -10 /etc/samba/smb.conf 
[DATA]
path = /DATA
public = yes
writable = yes
create mask = 0775
directory mask = 0775

# systemctl restart smbd
# net usershare list

# testparm 
{..}
[DATA]
	create mask = 0775
	directory mask = 0775
	guest ok = Yes
	path = /DATA
	read only = No

Some commands and locations to note for troubleshooting:

# smbstatus 
# testparm
# cat /etc/dfs/sharetab 
# net usershare list
# ls /var/lib/samba/usershares/
# cat /var/lib/samba/usershares/data 
# pdbedit -L


Jan 18

AWS Storage Gateway Test

I recently wanted to take a quick look at the File Gateway. It is described as "Store files as objects in Amazon S3, with a local cache for low-latency access to your most recently used data." I tried it on VirtualBox using the VMware ESXi image they offer.

Steps:

  • Download the VMware ESXi image.
  • In VirtualBox, import the OVA AWS-Appliance-2018-12-11-1544560738.ova.
  • Adjust memory from 16 GB down to 10 GB. Try not to do this if possible, but in my case I was short on memory on the host.
  • Change to bridged networking instead of NAT.
  • Add a SAS controller and thick-provision a disk. I used type VDI and 8 GB for my test.
  • Use the SAS disk attached to the VirtualBox VM as cache in the AWS Storage Gateway console.
  • Share files over NFS (for SMB you will need MS-AD).

Some useful CLI commands:

$ aws storagegateway list-gateways
{
    "Gateways": [
        {
            "GatewayId": "sgw-<...>",
            "GatewayARN": "arn:aws:storagegateway:us-east-1:<...>:gateway/sgw-<...>",
            "GatewayType": "FILE_S3",
            "GatewayOperationalState": "ACTIVE",
            "GatewayName": "iq-st01"
        }
    ]
}

$ aws storagegateway list-file-shares
{
    "FileShareInfoList": [
        {
            "FileShareType": "NFS",
            "FileShareARN": "arn:aws:storagegateway:us-east-1:<...>:share/share-<...>",
            "FileShareId": "share-<...>",
            "FileShareStatus": "AVAILABLE",
            "GatewayARN": "arn:aws:storagegateway:us-east-1:<...>:gateway/sgw-<...>"
        }
    ]
}

$ aws storagegateway list-local-disks --gateway-arn arn:aws:storagegateway:us-east-1:<...>:gateway/sgw-<...>
{
    "GatewayARN": "arn:aws:storagegateway:us-east-1:<...>:gateway/sgw-<...>",
    "Disks": [
        {
            "DiskId": "pci-0000:00:16.0-sas-0x00060504030201a0-lun-0",
            "DiskPath": "/dev/sda",
            "DiskNode": "SCSI (0:0)",
            "DiskStatus": "present",
            "DiskSizeInBytes": 8589934592,
            "DiskAllocationType": "CACHE STORAGE"
        }
    ]
}
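
The same queries can be scripted with boto3; a minimal sketch (the region and gateway ARN are placeholders, as above):

import boto3

sgw = boto3.client('storagegateway', region_name='us-east-1')

# equivalent of: aws storagegateway list-gateways
for gw in sgw.list_gateways()['Gateways']:
    print('{} {}'.format(gw['GatewayName'], gw['GatewayOperationalState']))

# equivalent of: aws storagegateway list-local-disks --gateway-arn ...
resp = sgw.list_local_disks(GatewayARN='arn:aws:storagegateway:us-east-1:<...>:gateway/sgw-<...>')
for disk in resp['Disks']:
    print('{} {}'.format(disk['DiskPath'], disk['DiskAllocationType']))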

Mount test

# mount -t nfs -o nolock,hard 192.168.1.25:/st01.iqonda.com  /mnt/st01
# nfsstat -m
/mnt/st01 from 192.168.1.25:/st01.iqonda.com
 Flags:	rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.2.15,local_lock=none,addr=192.168.1.25
