Category: Python

Dec 04

Python Append Key

Building a dict and grouping items by key is sometimes very useful.

The following code shows the if .. in option and the defaultdict option for checking a key while loading the dictionary. Although people warn that has_key or if .. in style checks slow things down a lot, my timings were fairly similar. I left some commented code in for my own reference.

source

from collections import defaultdict
from time import time

sample_size = 10000000

dct1 = defaultdict(list)
st_time = time()

for i in range(1, sample_size):
    s = str(i)
    key = s[0]
    name = 'server' + s
    dct1[key].append({'name': name, 'status': 'RUNNING'})  # missing keys get a fresh list

print (f"\ndct1 defaultdict option: {time() - st_time}")

#print (dct1)  
# get one key
#one_key = dct1.get('2')
#print (one_key)
#for v in one_key:
#  print (v)

# print by key
#for k,v in dct1.items():
#  print("\nkey == {}".format(k))
#  print (v)
#  #for i in v:
#  #  print("  {} {}".format(i["name"], i["status"]))

dct2 = {}
st_time = time()

for i in range(1, sample_size):
    s = str(i)
    key = s[0:1]
    name = 'server' + s
    if key in dct2:
        dct2[key].append({'name': name, 'status': 'STOPPED'})
    else:
        dct2[key] = [{'name': name, 'status': 'STOPPED'}]

print (f"\ndct2 if .. in option: {time() - st_time}")

#print (dct2)
#one_key = dct1.get('1')
#print (one_key)
#for v in one_key:
#  print (v)

# print by key
#for k,v in dct2.items():
#  print("\nkey == {}".format(k))
#  print (v)
#  #for i in v:
#  #  print("  {} {}".format(i["name"], i["status"]))

test

py-assoc-arr$ python3 py-keyed-dict-timing.py 

dct1 defaultdict option: 6.392352342605591

dct2 if .. in option: 6.472132921218872
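A third idiom worth knowing is dict.setdefault, which needs no import and no membership test; a minimal sketch (sample names are mine):

```python
# Grouping with dict.setdefault: setdefault returns the list already
# stored under key (or the new empty one it just inserted), so append
# can chain directly onto it.
groups = {}
for name in ['server1', 'server12', 'server2']:
    key = name[6]  # first digit after 'server'
    groups.setdefault(key, []).append(name)

# groups == {'1': ['server1', 'server12'], '2': ['server2']}
```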


Jul 24

AWS SNS to http subscription receiving in python3 http server and Flask

I wanted an easier way to test and manipulate notifications arriving via an AWS SNS subscription. Mostly I do a quick SMTP subscription to the topic, but I wanted quicker, more direct feedback and the ability to manipulate the incoming notification. I used this as reference: https://gist.github.com/iMilnb/bf27da3f38272a76c801

NOTE: the code will detect if it is a subscription or notification request and route appropriately.
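The snsread.py source isn't shown here, but the routing idea can be sketched as a plain function (names are my own, not from the gist): SNS sends the message type both as an x-amz-sns-message-type header and as the Type field of the JSON body.

```python
import json

def classify_sns(headers, body):
    """Decide how to route an incoming SNS POST.

    SNS sends the message type both as an HTTP header and as the
    Type field of the JSON body; fall back to the body if needed.
    """
    h = {k.lower(): v for k, v in headers.items()}
    msg_type = h.get('x-amz-sns-message-type') or json.loads(body).get('Type')
    if msg_type == 'SubscriptionConfirmation':
        # a real handler would confirm by GET-ing
        # json.loads(body)['SubscribeURL'], e.g. with requests.get()
        return 'subscribe'
    return 'notify'
```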

run a server on port 5000

$ mkdir snsread
$ cd snsread/
$ vi snsread.py
$ python3 -m venv venv
$ . venv/bin/activate

(venv) [ec2-user@ip-172-31-6-74 snsread]$ pip3 install Flask
...

(venv) [ec2-user@ip-172-31-6-74 snsread]$ pip3 install requests
...

(venv) [ec2-user@ip-172-31-6-74 snsread]$ python3 snsread.py 
 * Serving Flask app 'snsread' (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: on
 * Running on all addresses.
   WARNING: This is a development server. Do not use it in a production deployment.
 * Running on http://172.31.6.74:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 729-981-989

test using curl from public client

$ curl -I ec2-54-189-23-28.us-west-2.compute.amazonaws.com:5000
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 3
Server: Werkzeug/2.0.1 Python/3.7.10
Date: Fri, 23 Jul 2021 21:38:20 GMT

when testing curl request python server shows

99.122.138.75 - - [23/Jul/2021 21:38:20] "HEAD / HTTP/1.1" 200 -

when testing publish direct from SNS topic the python server shows

205.251.234.35 - - [23/Jul/2021 21:41:26] "POST / HTTP/1.1" 200 -

Add subscription in topic rr-events-02 as http://ec2-54-189-23-28.us-west-2.compute.amazonaws.com:5000

server shows during subscription

205.251.234.35 - - [23/Jul/2021 21:41:26] "POST / HTTP/1.1" 200 -

topic > publish message

server shows

raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

NOTE: the reference code did not exactly match my specific notifications, so it needed some tweaking as well as some json.loads(json.dumps()) love.
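The root cause, as far as I can tell: SNS posts with Content-Type: text/plain, so Flask's request.get_json() returns None and you have to parse the raw body yourself; on top of that, a CloudWatch alarm arrives as a JSON string nested inside the Message field, hence the double parse (sample payload below is a trimmed stand-in):

```python
import json

# SNS delivers text/plain, so parse the raw request body directly;
# the CloudWatch alarm is itself a JSON string inside "Message"
raw = b'{"Type": "Notification", "Message": "{\\"AlarmName\\": \\"linux-server-errors\\"}"}'
envelope = json.loads(raw.decode('utf-8'))
alarm = json.loads(envelope['Message'])  # second parse for the nested JSON
```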

initial successful notification from an actual CloudWatch alarm sent to SNS

(venv) [ec2-user@ip-172-31-6-74 snsread]$ python3 snsread.py 
 * Serving Flask app 'snsread' (lazy loading)
...
incoming...
headers: 
json payload: 
js: {"AlarmName":"linux-server-errors","AlarmDescription":"ERROR in /var/log/messages","AWSAccountId":"310843369992","NewStateValue":"ALARM","NewStateReason":"Threshold Crossed: 1 out of the last 1 datapoints [1.0 (24/07/21 00:29:00)] was greater than the threshold (0.0) (minimum 1 datapoint for OK -> ALARM transition).","StateChangeTime":"2021-07-24T00:34:13.630+0000","Region":"US West (Oregon)","AlarmArn":"arn:aws:cloudwatch:us-west-2:310843369992:alarm:linux-server-errors","OldStateValue":"OK","Trigger":{"MetricName":"messages-errors","Namespace":"messages","StatisticType":"Statistic","Statistic":"MAXIMUM","Unit":null,"Dimensions":[],"Period":300,"EvaluationPeriods":1,"ComparisonOperator":"GreaterThanThreshold","Threshold":0.0,"TreatMissingData":"- TreatMissingData:                    notBreaching","EvaluateLowSampleCountPercentile":""}}
205.251.233.161 - - [24/Jul/2021 00:34:13] "POST / HTTP/1.1" 200 -

Publish message directly from topic in console

(venv) [ec2-user@ip-172-31-6-74 snsread]$ python3 snsread.py 
 * Serving Flask app 'snsread' (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: on
 * Running on all addresses.
   WARNING: This is a development server. Do not use it in a production deployment.
 * Running on http://172.31.6.74:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 729-981-989

incoming traffic...
*********************
data
----
{
    "Message": "my raw msg...",
    "MessageId": "149d65b0-9a32-524a-95ac-3cc5ea934e04",
    "Signature": "JZp1x1mOXw2PjwhIFfA4QmNc74pzai5G3kbXyYvnNW1a5YkexGKSCpmYLT/LEFqxfJy6VFYDGmb/+Ty2aQO0qQlO2wd5D+SkZOHjNAs0u+lCuw+cOBYCtyRAWJI3c5zGR928WE4PuWEoNgg8NQnW9RBRkCEqcEgQChjgbZlxs2ehvl1LZ/1rkcWzG3+/p5wZL0czhkRA2dx5JeM7d2zCuFisp+2rQN6aRfRObV0YcBqBVFwUmL2C7uxgPt6TTf4nfpgFqDKrV6S/BfOJqWTNKDkUKvUQCk5inxOOOpFmDs2V6LhkV1kRGgXAx5moQTWTTAc/CC+1N8ylXyUdES4fAA==",
    "SignatureVersion": "1",
    "SigningCertURL": "https://sns.us-west-2.amazonaws.com/SimpleNotificationService-010a507c1833636cd94bdb98bd93083a.pem",
    "Subject": "my msg",
    "Timestamp": "2021-07-24T01:35:29.357Z",
    "TopicArn": "arn:aws:sns:us-west-2:310843369992:rr-events-02",
    "Type": "Notification",
    "UnsubscribeURL": "https://sns.us-west-2.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:us-west-2:310843369992:rr-events-02:a0fbe74a-a10a-4d85-a405-e0627a7e075c"
}
headers
-------
X-Amz-Sns-Message-Type: Notification
X-Amz-Sns-Message-Id: 149d65b0-9a32-524a-95ac-3cc5ea934e04
X-Amz-Sns-Topic-Arn: arn:aws:sns:us-west-2:310843369992:rr-events-02
X-Amz-Sns-Subscription-Arn: arn:aws:sns:us-west-2:310843369992:rr-events-02:a0fbe74a-a10a-4d85-a405-e0627a7e075c
Content-Length: 946
Content-Type: text/plain; charset=UTF-8
Host: ec2-54-189-23-28.us-west-2.compute.amazonaws.com:5000
Connection: Keep-Alive
User-Agent: Amazon Simple Notification Service Agent
Accept-Encoding: gzip,deflate

json payload
------------
None
js
--
{
    "Message": "my raw msg...",
    "MessageId": "149d65b0-9a32-524a-95ac-3cc5ea934e04",
    "Signature": "JZp1x1mOXw2PjwhIFfA4QmNc74pzai5G3kbXyYvnNW1a5YkexGKSCpmYLT/LEFqxfJy6VFYDGmb/+Ty2aQO0qQlO2wd5D+SkZOHjNAs0u+lCuw+cOBYCtyRAWJI3c5zGR928WE4PuWEoNgg8NQnW9RBRkCEqcEgQChjgbZlxs2ehvl1LZ/1rkcWzG3+/p5wZL0czhkRA2dx5JeM7d2zCuFisp+2rQN6aRfRObV0YcBqBVFwUmL2C7uxgPt6TTf4nfpgFqDKrV6S/BfOJqWTNKDkUKvUQCk5inxOOOpFmDs2V6LhkV1kRGgXAx5moQTWTTAc/CC+1N8ylXyUdES4fAA==",
    "SignatureVersion": "1",
    "SigningCertURL": "https://sns.us-west-2.amazonaws.com/SimpleNotificationService-010a507c1833636cd94bdb98bd93083a.pem",
    "Subject": "my msg",
    "Timestamp": "2021-07-24T01:35:29.357Z",
    "TopicArn": "arn:aws:sns:us-west-2:310843369992:rr-events-02",
    "Type": "Notification",
    "UnsubscribeURL": "https://sns.us-west-2.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:us-west-2:310843369992:rr-events-02:a0fbe74a-a10a-4d85-a405-e0627a7e075c"
}
54.240.230.187 - - [24/Jul/2021 01:36:11] "POST / HTTP/1.1" 200 -

from linux server custom json

[rrosso@fedora ~]$ curl -i -H "Content-Type: application/json" -X POST -d '{"userId":"1", "username": "fizz bizz"}' http://ec2-54-189-23-28.us-west-2.compute.amazonaws.com:5000
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 3
Server: Werkzeug/2.0.1 Python/3.7.10
Date: Sat, 24 Jul 2021 00:59:14 GMT

OK

server shows

(venv) [ec2-user@ip-172-31-6-74 snsread]$ python3 snsread.py 
 * Serving Flask app 'snsread' (lazy loading)
....

incoming traffic...
data: 
{'userId': '1', 'username': 'fizz bizz'}
headers: 
Host: ec2-54-189-23-28.us-west-2.compute.amazonaws.com:5000
User-Agent: curl/7.76.1
Accept: */*
Content-Type: application/json
Content-Length: 39

json payload: {'userId': '1', 'username': 'fizz bizz'}
99.122.138.75 - - [24/Jul/2021 00:55:44] "POST / HTTP/1.1" 200 -

another successful notification from an actual CloudWatch alarm sent to SNS; the error was injected into the server's messages log with logger

[root@ip-172-31-6-74 ~]# logger "ERROR: WTF6 is going on..."

server shows

(venv) [ec2-user@ip-172-31-6-74 snsread]$ python3 snsread.py 
 * Serving Flask app 'snsread' (lazy loading)
...

incoming traffic...
*********************
data
----
{
    "Message": "{\"AlarmName\":\"linux-server-errors\",\"AlarmDescription\":\"ERROR in /var/log/messages\",\"AWSAccountId\":\"310843369992\",\"NewStateValue\":\"ALARM\",\"NewStateReason\":\"Threshold Crossed: 1 out of the last 1 datapoints [1.0 (24/07/21 01:34:00)] was greater than the threshold (0.0) (minimum 1 datapoint for OK -> ALARM transition).\",\"StateChangeTime\":\"2021-07-24T01:39:13.642+0000\",\"Region\":\"US West (Oregon)\",\"AlarmArn\":\"arn:aws:cloudwatch:us-west-2:310843369992:alarm:linux-server-errors\",\"OldStateValue\":\"OK\",\"Trigger\":{\"MetricName\":\"messages-errors\",\"Namespace\":\"messages\",\"StatisticType\":\"Statistic\",\"Statistic\":\"MAXIMUM\",\"Unit\":null,\"Dimensions\":[],\"Period\":300,\"EvaluationPeriods\":1,\"ComparisonOperator\":\"GreaterThanThreshold\",\"Threshold\":0.0,\"TreatMissingData\":\"- TreatMissingData:                    notBreaching\",\"EvaluateLowSampleCountPercentile\":\"\"}}",
    "MessageId": "7ec65853-62c5-5baf-9155-01261344a002",
    "Signature": "mbPoUMIYpiC3DqNCft7ZgRHP9vAEyWmWhXjpeTPZxSehoB+1o4rhxWLyblugHhbJOAkZrV9sp52JIJfN2d2h7WqCXKeZxVsqqpvL1HdTWc8yCo5yWbZ/hKibKR1A7DdXZFeyiQpnfD71sYsiFmB59lKfAi2l8f9PZDdx/GoOboIUSoR4gwFigyEnL9E4V9C6WKb6ERXSkbwmKyMzTF82BqmsYMhXyOZXysjaqQ9Eleqh+1hv0MqUw3mPCI9IIjoHjFN7CmtrPJpf5RaYI12W1KsBUYrWI6MZQ69gwohgyvFwSRAyT9z/z++AyMebROY3S5Fl29B+Zawfp5L44b1zzA==",
    "SignatureVersion": "1",
    "SigningCertURL": "https://sns.us-west-2.amazonaws.com/SimpleNotificationService-010a507c1833636cd94bdb98bd93083a.pem",
    "Subject": "ALARM: \"linux-server-errors\" in US West (Oregon)",
    "Timestamp": "2021-07-24T01:39:13.690Z",
    "TopicArn": "arn:aws:sns:us-west-2:310843369992:rr-events-02",
    "Type": "Notification",
    "UnsubscribeURL": "https://sns.us-west-2.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:us-west-2:310843369992:rr-events-02:a0fbe74a-a10a-4d85-a405-e0627a7e075c"
}
headers
-------
X-Amz-Sns-Message-Type: Notification
X-Amz-Sns-Message-Id: 7ec65853-62c5-5baf-9155-01261344a002
X-Amz-Sns-Topic-Arn: arn:aws:sns:us-west-2:310843369992:rr-events-02
X-Amz-Sns-Subscription-Arn: arn:aws:sns:us-west-2:310843369992:rr-events-02:a0fbe74a-a10a-4d85-a405-e0627a7e075c
Content-Length: 1901
Content-Type: text/plain; charset=UTF-8
Host: ec2-54-189-23-28.us-west-2.compute.amazonaws.com:5000
Connection: Keep-Alive
User-Agent: Amazon Simple Notification Service Agent
Accept-Encoding: gzip,deflate

json payload
------------
None
js
--
{
    "Message": "{\"AlarmName\":\"linux-server-errors\",\"AlarmDescription\":\"ERROR in /var/log/messages\",\"AWSAccountId\":\"310843369992\",\"NewStateValue\":\"ALARM\",\"NewStateReason\":\"Threshold Crossed: 1 out of the last 1 datapoints [1.0 (24/07/21 01:34:00)] was greater than the threshold (0.0) (minimum 1 datapoint for OK -> ALARM transition).\",\"StateChangeTime\":\"2021-07-24T01:39:13.642+0000\",\"Region\":\"US West (Oregon)\",\"AlarmArn\":\"arn:aws:cloudwatch:us-west-2:310843369992:alarm:linux-server-errors\",\"OldStateValue\":\"OK\",\"Trigger\":{\"MetricName\":\"messages-errors\",\"Namespace\":\"messages\",\"StatisticType\":\"Statistic\",\"Statistic\":\"MAXIMUM\",\"Unit\":null,\"Dimensions\":[],\"Period\":300,\"EvaluationPeriods\":1,\"ComparisonOperator\":\"GreaterThanThreshold\",\"Threshold\":0.0,\"TreatMissingData\":\"- TreatMissingData:                    notBreaching\",\"EvaluateLowSampleCountPercentile\":\"\"}}",
    "MessageId": "7ec65853-62c5-5baf-9155-01261344a002",
    "Signature": "mbPoUMIYpiC3DqNCft7ZgRHP9vAEyWmWhXjpeTPZxSehoB+1o4rhxWLyblugHhbJOAkZrV9sp52JIJfN2d2h7WqCXKeZxVsqqpvL1HdTWc8yCo5yWbZ/hKibKR1A7DdXZFeyiQpnfD71sYsiFmB59lKfAi2l8f9PZDdx/GoOboIUSoR4gwFigyEnL9E4V9C6WKb6ERXSkbwmKyMzTF82BqmsYMhXyOZXysjaqQ9Eleqh+1hv0MqUw3mPCI9IIjoHjFN7CmtrPJpf5RaYI12W1KsBUYrWI6MZQ69gwohgyvFwSRAyT9z/z++AyMebROY3S5Fl29B+Zawfp5L44b1zzA==",
    "SignatureVersion": "1",
    "SigningCertURL": "https://sns.us-west-2.amazonaws.com/SimpleNotificationService-010a507c1833636cd94bdb98bd93083a.pem",
    "Subject": "ALARM: \"linux-server-errors\" in US West (Oregon)",
    "Timestamp": "2021-07-24T01:39:13.690Z",
    "TopicArn": "arn:aws:sns:us-west-2:310843369992:rr-events-02",
    "Type": "Notification",
    "UnsubscribeURL": "https://sns.us-west-2.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:us-west-2:310843369992:rr-events-02:a0fbe74a-a10a-4d85-a405-e0627a7e075c"
}
54.240.230.240 - - [24/Jul/2021 01:39:34] "POST / HTTP/1.1" 200 -


Apr 17

Token Balance Decimals

Balance Decimals

Noting my python code for reference. If you are familiar with cryptocurrency tokens you may know that a token balance does not include decimals; the decimal count is stored separately.

For example this query shows the balance:

$ curl -s https://api.ethplorer.io/getAddressInfo/0xbcB79558e0d66475882A36FaF4124Ec45aA70dA3\?apiKey\=freekey | jq -r '.tokens[0].balance' 
1001193304561787500000

If you look at the token detail you see the decimals recorded:

$ curl -s https://api.ethplorer.io/getAddressInfo/0xbcB79558e0d66475882A36FaF4124Ec45aA70dA3\?apiKey\=freekey | jq -r '.tokens[0].tokenInfo.decimals'
18

More on the decimals here:
The Technology Behind Ethereum Tokens

For my python3 code I used the following:

$ python3                                                                                                                         
Python 3.8.6 (default, Jan 27 2021, 15:42:20) 
[GCC 10.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 1001193304561787500000*10**-18
1001.1933045617875
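To avoid float surprises on very large balances, the same shift can be done exactly with the decimal module (the helper name is mine):

```python
from decimal import Decimal

def token_balance(raw_balance, decimals):
    # shift the raw integer balance right by `decimals` places, exactly
    return Decimal(raw_balance) / (Decimal(10) ** decimals)

print(token_balance(1001193304561787500000, 18))
```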


Mar 22

Terraform Stateserver Using Python

Terraform can utilize a http backend for maintaining state. This is a test of a Terraform http backend using a server implemented with python.

NOTE: checked source into https://github.com/rrossouw01/terraform-stateserver-py/

recipe and components

Using VirtualBox Ubuntu 20.10 and following these links:

setup

$ mkdir tf-state-server
$ cd tf-state-server

$ virtualenv -p python3 venv
created virtual environment CPython3.8.6.final.0-64 in 174ms
  creator CPython3Posix(dest=/home/rrosso/tf-state-server/venv, clear=False, global=False)
  seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/rrosso/.local/share/virtualenv)
    added seed packages: pip==20.1.1, pkg_resources==0.0.0, setuptools==44.0.0, wheel==0.34.2
  activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator

$ source venv/bin/activate

(venv) $ pip install -U -r requirements.txt
Collecting flask
  Using cached Flask-1.1.2-py2.py3-none-any.whl (94 kB)
Collecting flask_restful
  Downloading Flask_RESTful-0.3.8-py2.py3-none-any.whl (25 kB)
Collecting itsdangerous>=0.24
  Using cached itsdangerous-1.1.0-py2.py3-none-any.whl (16 kB)
Collecting Werkzeug>=0.15
  Using cached Werkzeug-1.0.1-py2.py3-none-any.whl (298 kB)
Collecting Jinja2>=2.10.1
  Using cached Jinja2-2.11.3-py2.py3-none-any.whl (125 kB)
Collecting click>=5.1
  Using cached click-7.1.2-py2.py3-none-any.whl (82 kB)
Collecting pytz
  Downloading pytz-2021.1-py2.py3-none-any.whl (510 kB)
     |████████████████████████████████| 510 kB 3.0 MB/s 
Collecting six>=1.3.0
  Using cached six-1.15.0-py2.py3-none-any.whl (10 kB)
Collecting aniso8601>=0.82
  Downloading aniso8601-9.0.1-py2.py3-none-any.whl (52 kB)
     |████████████████████████████████| 52 kB 524 kB/s 
Collecting MarkupSafe>=0.23
  Using cached MarkupSafe-1.1.1-cp38-cp38-manylinux2010_x86_64.whl (32 kB)
Installing collected packages: itsdangerous, Werkzeug, MarkupSafe, Jinja2, click, flask, pytz, six, aniso8601, flask-restful
Successfully installed Jinja2-2.11.3 MarkupSafe-1.1.1 Werkzeug-1.0.1 aniso8601-9.0.1 click-7.1.2 flask-1.1.2 flask-restful-0.3.8 itsdangerous-1.1.0 pytz-2021.1 six-1.15.0

(venv) $ python3 stateserver.py
 * Serving Flask app "stateserver" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://192.168.1.235:5000/ (Press CTRL+C to quit)
...

terraform point to remote http

➜  cat main.tf 
terraform {
  backend "http" {
    address = "http://192.168.1.235:5000/terraform_state/4cdd0c76-d78b-11e9-9bea-db9cd8374f3a"
    lock_address = "http://192.168.1.235:5000/terraform_lock/4cdd0c76-d78b-11e9-9bea-db9cd8374f3a"
    lock_method = "PUT"
    unlock_address = "http://192.168.1.235:5000/terraform_lock/4cdd0c76-d78b-11e9-9bea-db9cd8374f3a"
    unlock_method = "DELETE"
  }
}

➜  source ../env-vars 

➜  terraform init    

Initializing the backend...
Do you want to copy existing state to the new backend?
  Pre-existing state was found while migrating the previous "local" backend to the
  newly configured "http" backend. No existing state was found in the newly
  configured "http" backend. Do you want to copy this state to the new "http"
  backend? Enter "yes" to copy and "no" to start with an empty state.

  Enter a value: yes

Successfully configured the backend "http"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.oci: version = "~> 4.17"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

server shows
...
192.168.1.111 - - [16/Mar/2021 10:50:00] "POST /terraform_state/4cdd0c76-d78b-11e9-9bea-db9cd8374f3a?ID=84916e49-1b44-1b32-2058-62f28e1e8ee7 HTTP/1.1" 200 -
192.168.1.111 - - [16/Mar/2021 10:50:00] "DELETE /terraform_lock/4cdd0c76-d78b-11e9-9bea-db9cd8374f3a HTTP/1.1" 200 -
192.168.1.111 - - [16/Mar/2021 10:50:00] "GET /terraform_state/4cdd0c76-d78b-11e9-9bea-db9cd8374f3a HTTP/1.1" 200 -
...

$ ls -la .stateserver/
total 24
drwxrwxr-x 2 rrosso rrosso 4096 Mar 16 10:50 .
drwxrwxr-x 4 rrosso rrosso 4096 Mar 16 10:43 ..
-rw-rw-r-- 1 rrosso rrosso 4407 Mar 16 10:50 4cdd0c76-d78b-11e9-9bea-db9cd8374f3a
-rw-rw-r-- 1 rrosso rrosso 5420 Mar 16 10:50 4cdd0c76-d78b-11e9-9bea-db9cd8374f3a.log

$ more .stateserver/4cdd0c76-d78b-11e9-9bea-db9cd8374f3a*
::::::::::::::
.stateserver/4cdd0c76-d78b-11e9-9bea-db9cd8374f3a
::::::::::::::
{
    "lineage": "9b756fb7-e41a-7cd6-d195-d794f377e7be",
    "outputs": {},
    "resources": [
{
"instances": [
{
"attributes": {
"compartment_id": null,
"id": "ObjectStorageNamespaceDataSource-0",
"namespace": "axwscg6apasa"
},
"schema_version": 0
}
],
...
    "serial": 0,
    "terraform_version": "0.12.28",
    "version": 4
}
::::::::::::::
.stateserver/4cdd0c76-d78b-11e9-9bea-db9cd8374f3a.log
::::::::::::::
lock: {
    "Created": "2021-03-16T15:49:34.567178267Z",
    "ID": "7eea0d70-5f53-e475-041b-bcc393f4a92d",
    "Info": "",
    "Operation": "migration destination state",
    "Path": "",
    "Version": "0.12.28",
    "Who": "rrosso@desktop01"
}
unlock: {
    "Created": "2021-03-16T15:49:34.567178267Z",
    "ID": "7eea0d70-5f53-e475-041b-bcc393f4a92d",
    "Info": "",
    "Operation": "migration destination state",
    "Path": "",
    "Version": "0.12.28",
    "Who": "rrosso@desktop01"
}
lock: {
    "Created": "2021-03-16T15:49:45.760917508Z",
    "ID": "84916e49-1b44-1b32-2058-62f28e1e8ee7",
    "Info": "",
    "Operation": "migration destination state",
    "Path": "",
    "Version": "0.12.28",
    "Who": "rrosso@desktop01"
}
state_write: {
    "lineage": "9b756fb7-e41a-7cd6-d195-d794f377e7be",
    "outputs": {},
    "resources": [
{
"instances": [
{
"attributes": {
"compartment_id": null,
"id": "ObjectStorageNamespaceDataSource-0",
"namespace": "axwscg6apasa"
},
"schema_version": 0
}
...

stateserver shows during plan

...
192.168.1.111 - - [16/Mar/2021 10:54:12] "PUT /terraform_lock/4cdd0c76-d78b-11e9-9bea-db9cd8374f3a HTTP/1.1" 200 -
192.168.1.111 - - [16/Mar/2021 10:54:12] "GET /terraform_state/4cdd0c76-d78b-11e9-9bea-db9cd8374f3a HTTP/1.1" 200 -
192.168.1.111 - - [16/Mar/2021 10:54:15] "DELETE /terraform_lock/4cdd0c76-d78b-11e9-9bea-db9cd8374f3a HTTP/1.1" 200 -
...

source

$ more requirements.txt stateserver.py 
::::::::::::::
requirements.txt
::::::::::::::
flask
flask_restful
::::::::::::::
stateserver.py
::::::::::::::
#!/usr/bin/python3

import flask
import flask_restful
import json
import logging
import os

app = flask.Flask(__name__)
api = flask_restful.Api(app)

@app.before_request
def log_request_info():
    headers = []
    for header in flask.request.headers:
        headers.append('%s = %s' % (header[0], header[1]))

    body = flask.request.get_data().decode('utf-8').split('\n')

    app.logger.debug(('%(method)s for %(url)s...\n'
                      '    Header -- %(headers)s\n'
                      '    Body -- %(body)s\n')
                     % {
        'method': flask.request.method,
        'url': flask.request.url,
        'headers': '\n    Header -- '.join(headers),
        'body': '\n           '.join(body)
    })

class Root(flask_restful.Resource):
    def get(self):
        resp = flask.Response(
            'Oh, hello',
            mimetype='text/html')
        resp.status_code = 200
        return resp

class StateStore(object):
    def __init__(self, path):
        self.path = path
        os.makedirs(self.path, exist_ok=True)

    def _log(self, id, op, data):
        log_file = os.path.join(self.path, id) + '.log'
        with open(log_file, 'a') as f:
            f.write('%s: %s\n' %(op, data))

    def get(self, id):
        file = os.path.join(self.path, id)
        if os.path.exists(file):
            with open(file) as f:
                d = f.read()
                self._log(id, 'state_read', {})
                return json.loads(d)
        return None

    def put(self, id, info):
        file = os.path.join(self.path, id)
        data = json.dumps(info, indent=4, sort_keys=True)
        with open(file, 'w') as f:
            f.write(data)
            self._log(id, 'state_write', data)

    def lock(self, id, info):
        # NOTE(mikal): this is racy, but just a demo
        lock_file = os.path.join(self.path, id) + '.lock'
        if os.path.exists(lock_file):
            # If the lock exists, it should be a JSON dump of information about
            # the lock holder
            with open(lock_file) as f:
                l = json.loads(f.read())
            return False, l

        data = json.dumps(info, indent=4, sort_keys=True)
        with open(lock_file, 'w') as f:
            f.write(data)
        self._log(id, 'lock', data)
        return True, {}

    def unlock(self, id, info):
        lock_file = os.path.join(self.path, id) + '.lock'
        if os.path.exists(lock_file):
            os.unlink(lock_file)
            self._log(id, 'unlock', json.dumps(info, indent=4, sort_keys=True))
            return True
        return False

state = StateStore('.stateserver')

class TerraformState(flask_restful.Resource):
    def get(self, tf_id):
        s = state.get(tf_id)
        if not s:
            flask.abort(404)
        return s

    def post(self, tf_id):
        print(flask.request.form)
        s = state.put(tf_id, flask.request.json)
        return {}

class TerraformLock(flask_restful.Resource):
    def put(self, tf_id):
        success, info = state.lock(tf_id, flask.request.json)
        if not success:
            flask.abort(423, info)
        return info

    def delete(self, tf_id):
        if not state.unlock(tf_id, flask.request.json):
            flask.abort(404)
        return {}

api.add_resource(Root, '/')
api.add_resource(TerraformState, '/terraform_state/<tf_id>')
api.add_resource(TerraformLock, '/terraform_lock/<tf_id>')

if __name__ == '__main__':
    # Note this is not run with the flask task runner...
    app.log = logging.getLogger('werkzeug')
    app.log.setLevel(logging.DEBUG)
    app.run(host='0.0.0.0', debug=True)
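The lock protocol above (PUT acquires, DELETE releases, and a second PUT gets the current holder's info back with HTTP 423) can be condensed into a standalone sketch for experimenting outside Flask (class name is mine):

```python
import json
import os
import tempfile

class FileLock:
    """Condensed version of StateStore's (racy) file-based lock."""

    def __init__(self, path):
        self.path = path
        os.makedirs(self.path, exist_ok=True)

    def lock(self, lock_id, info):
        lock_file = os.path.join(self.path, lock_id) + '.lock'
        if os.path.exists(lock_file):
            # already held: return the current holder's info (the HTTP 423 case)
            with open(lock_file) as f:
                return False, json.load(f)
        with open(lock_file, 'w') as f:
            json.dump(info, f)
        return True, {}

    def unlock(self, lock_id):
        lock_file = os.path.join(self.path, lock_id) + '.lock'
        if os.path.exists(lock_file):
            os.unlink(lock_file)
            return True
        return False

# second lock attempt fails and reports who holds the lock
store = FileLock(tempfile.mkdtemp())
ok, _ = store.lock('demo', {'Who': 'rrosso@desktop01'})
blocked, holder = store.lock('demo', {'Who': 'someone-else'})
```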


Jan 30

Python Flask API Using MongoDB

Python Flask API Using MongoDB

After a recent experiment to capture the json output of jobs from multiple servers into a central collector, I kept my notes for future reference. In short, I submitted my restic backup json summaries into MongoDB via a little custom API.

The notes show the scripts I used, which of course may not work in your environment; they are for reference only.

Setup

  • ubuntu 20.10 virtualbox host (use my snapshot called pre-setup)
  • run setup.sh from ~/mongodb-api
  • run python3 create-db.py from ~/mongodb-api
  • run python3 apiserver.py from ~/mongodb-api
  • test GET and POST with postman from my desktop

source code

$ more setup.sh create-db.py apiserver.py 
::::::::::::::
setup.sh
::::::::::::::
#!/bin/bash
sudo apt install -y gnupg python3-pip python3-flask python3-flask-mongoengine python3-virtualenv
sudo pip install flask pymongo flask_pymongo
sudo snap install robo3t-snap

wget -qO - https://www.mongodb.org/static/pgp/server-4.4.asc | sudo apt-key add -
echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.4.list
sudo apt-get update
sudo apt-get install -y mongodb-org
ps --no-headers -o comm 1
sudo systemctl start mongod
sudo systemctl status mongod
sudo systemctl enable mongod

cd mongodb-api
virtualenv -p python3 venv
source venv/bin/activate

## run in a new terminal one time
#python3 create-db.py

## run in this terminal after db created
#python3 apiserver.py

::::::::::::::
create-db.py
::::::::::::::
# create-db.py

from pymongo import MongoClient

client = MongoClient('mongodb://localhost:27017/')

mydb = client['restic-db']

import datetime
## example json output from a restic backup run
#{"message_type":"summary","files_new":2,"files_changed":1,"files_unmodified":205021,"dirs_new":0,"dirs_changed":6,"dirs_unmodified":28000,"data_blobs":3,"tree_blobs":7,"data_added":17802,"total_files_processed":205024,"total_bytes_processed":6002872966,"total_duration":29.527921058,"snapshot_id":"84de0f00"}

myrecord = {
    "message_type": "summary",
    "files_new": 2,
    "files_changed": 1,
    "files_unmodified": 205021,
    "dirs_new": 0,
    "dirs_changed": 6,
    "dirs_unmodified": 28000,
    "data_blobs": 3,
    "tree_blobs": 7,
    "data_added": 17802,
    "total_files_processed": 205024,
    "total_bytes_processed": 6002872966,
    "total_duration": 29.527921058,
    "snapshot_id": "84de0f00",
    "tags": ["prod", "daily", "weekly"],
    "date_inserted": datetime.datetime.utcnow(),
    "hostname": "desktop01"
}

record_id = mydb.logs.insert_one(myrecord)

print (record_id)
print (mydb.list_collection_names())
::::::::::::::
apiserver.py
::::::::::::::
# mongo.py

from flask import Flask
from flask import jsonify
from flask import request
from flask_pymongo import PyMongo

app = Flask(__name__)

app.config['MONGO_DBNAME'] = 'restic-db'
app.config['MONGO_URI'] = 'mongodb://localhost:27017/restic-db'

mongo = PyMongo(app)

@app.route('/')
def index():
  return jsonify({'result' : 'v0.9.0'})

@app.route('/entry', methods=['GET'])
def get_all_entries():
  entry = mongo.db.logs
  output = []
  for s in entry.find():
    output.append({'hostname' : s['hostname'], 'message_type' : s['message_type']})
  return jsonify({'result' : output})

@app.route('/entry/<hostname>', methods=['GET'])
def get_one_entry(hostname):
  entry = mongo.db.logs
  s = entry.find_one({'hostname' : hostname})
  if s:
    output = {'hostname' : s['hostname'], 'message_type' : s['message_type']}
  else:
    output = "No such hostname"
  return jsonify({'result' : output})

@app.route('/entry', methods=['POST'])
def add_entry():
  entry = mongo.db.logs
  #details = request.get_json()
  #hostname = details["hostname"]
  hostname = request.json['hostname']
  #hostname = "'" + str(request.values.get("hostname")) + "'"
  message_type = request.json['message_type']
  #message_type = "'" + str(request.values.get("message_type")) + "'"
  #message_type = details.message_type

  import datetime

  ## example json output from a restic backup run
  #{"message_type":"summary","files_new":2,"files_changed":1,"files_unmodified":205021,"dirs_new":0,"dirs_changed":6,"dirs_unmodified":28000,"data_blobs":3,"tree_blobs":7,"data_added":17802,"total_files_processed":205024,"total_bytes_processed":6002872966,"total_duration":29.527921058,"snapshot_id":"84de0f00"}

  myrecord = {
        "message_type": message_type,
        "files_new":2,
        "files_changed":1,
        "files_unmodified":205021,
        "dirs_new":0,
        "dirs_changed":6,
        "dirs_unmodified":28000,
        "data_blobs":3,
        "tree_blobs":7,
        "data_added":17802,
        "total_files_processed":205024,
        "total_bytes_processed":6002872966,
        "total_duration":29.527921058,
        "snapshot_id":"84de0f00",
        "tags" : ["prod", "daily", "weekly"],
        "date_inserted" : datetime.datetime.utcnow(),
        "hostname" : hostname
        }

  #entry_id = entry.insert_one({'hostname': hostname, 'message_type': message_type})
  entry_id = entry.insert_one(myrecord)

  new_entry = entry.find_one({'_id': entry_id.inserted_id })
  output = {'hostname' : new_entry['hostname'], 'message_type' : new_entry['message_type']}
  return jsonify({'result' : output})

if __name__ == '__main__':
    #app.run(debug=True)
    app.run (host = "192.168.1.235", port = 5000)

Test API's from my desktop using Postman


Apr 12

python-scan-text-block-reverse

An example of finding a block of text: locate a start string, reverse-search up to the next marker string, then replace a line inside the block.

python example

#!/usr/bin/python
import re
import sys

#v0.9.6
fileName="listener_test.yml"
dir="./unregister"
variable="bar"
block_start='CodeUri: ' + dir
block_end='AWS::Serverless::Function'
rtext = '      AutoPublishCodeSha256: ' + variable + '\n'

with open(fileName) as ofile:
      lines=ofile.readlines()
      i = len(lines) - 1
      AWSFound = False
      CodeUriFound = False
      AutoFound = False
      unum = 0
      while i >= 0 and not AWSFound:
           if block_start in lines[i]:
             CodeUriFound = True
             unum = i
           if "AutoPublishCodeSha256:" in lines[i]:
             AutoFound = True
             unum = i
           if block_end in lines[i] and CodeUriFound:
             AWSFound = True

           i -= 1

if AutoFound:
  lines[unum] = rtext
else:
  lines.insert(unum - 1, rtext)

with open('listener_test_new.yml', 'w') as file:
  lines = "".join(lines)
  file.write(lines)
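The reverse-scan idea above can be tried on an in-memory sample. This is a simplified sketch: the markers are the same as in the script, but for brevity the new line is inserted directly after the CodeUri line rather than using the script's offset logic.

```python
# Sample SAM-style template fragment held as a list of lines
lines = [
    "Resources:\n",
    "  Fn:\n",
    "    Type: AWS::Serverless::Function\n",
    "    Properties:\n",
    "      CodeUri: ./unregister\n",
]
rtext = "      AutoPublishCodeSha256: bar\n"

# Walk backwards: find the CodeUri marker first, then stop once we
# reach the enclosing AWS::Serverless::Function line above it.
i = len(lines) - 1
found_uri = False
unum = 0
while i >= 0:
    if "CodeUri: ./unregister" in lines[i]:
        found_uri = True
        unum = i
    if "AWS::Serverless::Function" in lines[i] and found_uri:
        break  # reached the top of the block
    i -= 1

# No existing AutoPublishCodeSha256 line, so insert one after CodeUri
lines.insert(unum + 1, rtext)
```

Joining the list with "".join(lines) and writing it out gives the updated file, as in the script above.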

Comments Off on python-scan-text-block-reverse
comments

Mar 19

Quick Backup and Purge

I highly recommend using restic instead of what I am talking about here.

Mostly I am just documenting this for my own reference and this is not a great backup solution by any means. Also note:

  1. This script creates backups locally; the idea would of course be to adapt it to use NFS or, even better, object storage.
  2. This is just a starting point, for example if you would like to back up very small datasets (like /etc) and also purge older backups.
  3. Adapt it to your own policies; I have used roughly a gold policy here (7 daily, 4 weekly, 12 monthly, 5 yearly).
  4. Purging should perhaps rather be done by actual file dates and not by counting.
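Point 4 above can be sketched out: instead of counting files, compare each file's modification time against a cutoff. The keep_days knob and the (path, mtime) pairs are assumptions here, standing in for real paths and os.path.getmtime() so the policy logic is testable on its own.

```python
import datetime

def expired(paths_with_mtime, now, keep_days):
    # Keep anything modified within keep_days; return everything older
    # as candidates for deletion.
    cutoff = now - datetime.timedelta(days=keep_days)
    return [p for p, mtime in paths_with_mtime if mtime < cutoff]
```

In the real script this would replace the daily/weekly/monthly counters, with a different keep_days per backup type.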
#!/usr/bin/python
#
#: Script Name  : tarBak.py
#: Author       : Riaan Rossouw
#: Date Created : March 13, 2019
#: Date Updated : March 13, 2019
#: Description  : Python Script to manage tar backups
#: Examples     : tarBak.py -t target -f folders -c
#:              : tarBak.py --target <backup folder> --folders <folders> --create

import optparse, os, glob, sys, re, datetime
import tarfile
import socket

__version__ = '0.9.1'
optdesc = 'This script is used to manage tar backups of files'

parser = optparse.OptionParser(description=optdesc,version=os.path.basename(__file__) + ' ' + __version__)
parser.formatter.max_help_position = 50
parser.add_option('-t', '--target', help='Specify Target', dest='target', action='append')
parser.add_option('-f', '--folders', help='Specify Folders', dest='folders', action='append')
parser.add_option('-c', '--create', help='Create a new backup', dest='create', action='store_true',default=False)
parser.add_option('-p', '--purge', help='Purge older backups per policy', dest='purge', action='store_true',default=False)
parser.add_option('-g', '--group', help='Policy group', dest='group', action='append')
parser.add_option('-l', '--list', help='List backups', dest='listall', action='store_true',default=False)
opts, args = parser.parse_args()

def make_tarfile(output_filename, source_dirs):
  with tarfile.open(output_filename, "w:gz") as tar:
    for source_dir in source_dirs:
      tar.add(source_dir, arcname=os.path.basename(source_dir))

def getBackupType(backup_time_created):
  utc,mt = str(backup_time_created).split('.')
  d = datetime.datetime.strptime(utc, '%Y-%m-%d %H:%M:%S').date()
  dt = d.strftime('%a %d %B %Y')

  if (d.day == 1) and (d.month == 1):
    backup_t = 'YEARLY'
  elif d.day == 1:
    backup_t = 'MONTHLY'
  elif d.weekday() == 6:
    backup_t = 'WEEKLY'
  else:
    backup_t = 'DAILY'

  return (backup_t,dt)

def listBackups(target):
  print ("Listing backup files..")

  files = glob.glob(target + "*DAILY*")
  files.sort(key=os.path.getmtime, reverse=True)

  for file in files:
    print(file)
  
def purgeBackups(target, group):
  print ("Purging backup files..this needs testing and more logic for SILVER and BRONZE policies?")

  files = glob.glob(target + "*.tgz*")
  files.sort(key=os.path.getmtime, reverse=True)
  daily = 0
  weekly = 0
  monthly = 0
  yearly = 0
 
  for file in files:
    comment = ""
    if ( ("DAILY" in file) or ("WEEKLY" in file) or ("MONTHLY" in file) or ("YEARLY" in file) ):
      #t = file.split("-")[0]
      sub = re.search(r'files-(.+?)-\d{4}', file)  # match any year, not just 2019
      #print sub
      t = sub.group(1)
    else:
      t = "MANUAL"

    if t == "DAILY":
      comment = "DAILY"
      daily = daily + 1
      if daily > 7:
        comment = comment + " this one is more than 7 deleting"
        os.remove(file)
    elif t == "WEEKLY":
      comment = "Sun"
      weekly = weekly + 1
      if weekly > 4:
        comment = comment + " this one is more than 4 deleting"
        os.remove(file)
    elif t  == "MONTHLY":
      comment = "01"
      monthly = monthly + 1
      if monthly > 12:
       comment = comment + " this one is more than 12 deleting"
       os.remove(file)
    elif t  == "YEARLY":
      comment = "01"
      yearly = yearly + 1
      if yearly > 5:
       comment = comment + " this one is more than 5 deleting"
       os.remove(file)
    else:
      comment = " manual snapshot not purging"
      
    if  "this one " in comment:
      print ('DELETE: {:25}: {:25}'.format(file, comment) )

def createBackup(target, folders, group):
  print ("creating backup of " + str(folders))
  hostname = socket.gethostname()
  creationDate = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S.0")
  t,ds = getBackupType(creationDate)
  BackupName = target + "/" + hostname + '-files-' + t + "-" + datetime.datetime.now().strftime("%Y%m%d-%H%MCST") + '.tgz'

  proceed = "SNAPSHOT NOT NEEDED AT THIS TIME PER THE POLICY"
  if (group == "BRONZE") and (t in ("MONTHLY", "YEARLY")):
    proceed = "CLEAR TO SNAP"
  elif (group == "SILVER") and (t in ("WEEKLY", "MONTHLY", "YEARLY")):
    proceed = "CLEAR TO SNAP"
  elif group == "GOLD":
    proceed = "CLEAR TO SNAP"

  if proceed == "CLEAR TO SNAP":
    make_tarfile(BackupName, folders)
  else:
    print (proceed)

def main():
  if opts.target:
    target = opts.target[0]
  else:
    print ("\n\n must specify target folder")
    exit(0)

  if opts.listall:
    listBackups(target)
  else:
    if opts.create:
      if opts.folders:
        folders = opts.folders[0].split(',')
      else:
        print ("\n\n must specify folders")
        exit(0)
      createBackup(target, folders, opts.group[0])

    if opts.purge:
      purgeBackups(target, opts.group[0])

if __name__ == '__main__':
  main()

Example cron entry. Use root if you need to backup files only accessible as root.

$ crontab -l | tail -1
0 5 * * * cd /Src/tarBak/ ; python tarBak.py -t /tmp/MyBackups/ -f '/home/rrosso,/var/spool/syslog' -c 2>&1
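The date classification used by getBackupType can be checked in isolation. One subtlety: 1 January satisfies both the monthly and the yearly condition, so the yearly check has to come first; a Sunday that is also the 1st will classify as MONTHLY here.

```python
import datetime

def backup_type(d):
    # Order matters: test YEARLY before MONTHLY, since 1 January
    # is also the first of a month.
    if d.day == 1 and d.month == 1:
        return 'YEARLY'
    if d.day == 1:
        return 'MONTHLY'
    if d.weekday() == 6:  # Sunday
        return 'WEEKLY'
    return 'DAILY'
```

The purge counters then only make sense if the type embedded in each filename was produced by this same ordering.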

Comments Off on Quick Backup and Purge
comments

Feb 14

Python3 and pip

I am converting some scripts to python3 and noticed the pip modules in use for python2 need to be added for python3. I am not using virtualenv so below is my fix on Ubuntu 17.10.

Missing module oci.

$ python3 OCI_Details.py -t ocid1.tenancy.oc1..aa...mn55ca
Traceback (most recent call last):
  File "OCI_Details.py", line 14, in <module>
    import oci,optparse,os
ModuleNotFoundError: No module named 'oci'

Python2 module is there.

$ pip list --format=columns | grep oci
oci             1.3.14 

Ubuntu has python3-pip

$ sudo apt install python3-pip
$ pip3 install oci
$ pip3 list --format=columns | grep oci
oci                   1.3.14   

Check my converted script.

$ python3 OCI_Details.py -t ocid1.tenancy.oc1..aaaaaa...5ca
OCI Details: 0.9.7
..

Comments Off on Python3 and pip
comments

Oct 14

DynamoDB Test

Boto3 and AWS DynamoDB usage...

http://boto3.readthedocs.io/en/latest/reference/services/dynamodb.html

$ cat dynamodbTest.py 
import boto3

#dynamodb = boto3.resource('dynamodb')
# Hard coding credentials is not recommended. Use config files or the
# AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables instead.
dynamodb = boto3.resource(
    'dynamodb',
    aws_access_key_id='KEY_ID_REMOVED',
    aws_secret_access_key='ACCESS_KEY_REMOVED',
    region_name = 'us-east-1'
)

def create_table(tableName):
  table = dynamodb.create_table(
    TableName=tableName,
    KeySchema=[
        {
            'AttributeName': 'username', 
            'KeyType': 'HASH'
        },
        {
            'AttributeName': 'last_name', 
            'KeyType': 'RANGE'
        }
    ], 
    AttributeDefinitions=[
        {
            'AttributeName': 'username', 
            'AttributeType': 'S'
        }, 
        {
            'AttributeName': 'last_name', 
            'AttributeType': 'S'
        }, 
    ], 
    ProvisionedThroughput={
        'ReadCapacityUnits': 1, 
        'WriteCapacityUnits': 1
    }
  )

  table.meta.client.get_waiter('table_exists').wait(TableName=tableName)
  print('Table item count: {}'.format(table.item_count))

def delete_table(tableName):
  table = dynamodb.Table(tableName)
  table.delete()

def put_item(tableName):
  table = dynamodb.Table(tableName)

  response = table.put_item(
   Item={
        'username': 'jdoe',
        'first_name': 'jane',
        'last_name': 'doe',
        'age': 20,
        'account_type': 'librarian',
    }
  )

  print(response)

def get_item(tableName):
  table = dynamodb.Table(tableName)

  response = table.get_item(
   Key={
        'username': 'jdoe',
        'last_name': 'doe'
    }
  )

  item = response['Item']
  name = item['first_name']

  print(item)
  print("Hello, {}" .format(name))

def update_item(tableName):
  table = dynamodb.Table(tableName)

  table.update_item(
    Key={
        'username': 'jdoe',
        'last_name': 'doe'
    },
    UpdateExpression='SET age = :val1',
    ExpressionAttributeValues={
        ':val1': 23
    }
  )

def delete_item(tableName):
  table = dynamodb.Table(tableName)

  table.delete_item(
    Key={
        'username': 'jdoe',
        'last_name': 'doe'
    }
  )

def batch_write(tableName):
  table = dynamodb.Table(tableName)

  with table.batch_writer() as batch:
    batch.put_item(
        Item={
            'account_type': 'end_user',
            'username': 'bbob',
            'first_name': 'billy',
            'last_name': 'bob',
            'age': 20,
            'address': {
                'road': '1 fake street',
                'city': 'Houston',
                'state': 'TX',
                'country': 'USA'
            }
        }
    )
    batch.put_item(
        Item={
            'account_type': 'librarian',
            'username': 'user1',
            'first_name': 'user1 first name',
            'last_name': 'user1 last name',
            'age': 20,
            'address': {
                'road': '10 fake street',
                'city': 'Dallas',
                'state': 'TX',
                'country': 'USA'
            }
        }
    )
    batch.put_item(
        Item={
            'account_type': 'end_user',
            'username': 'user2',
            'first_name': 'user2 first name',
            'last_name': 'user2 last name',
            'age': 23,
            'address': {
                'road': '12 fake street',
                'city': 'Austin',
                'state': 'TX',
                'country': 'USA'
            }
        }
    )

def create_multiple_items(tableName,itemCount):

  table = dynamodb.Table(tableName)

  with table.batch_writer() as batch:
    for i in range(itemCount):
        batch.put_item(
            Item={
                'account_type': 'anonymous',
                'username': 'user-' + str(i),
                'first_name': 'unknown',
                'last_name': 'unknown'
            }
        )


def query(tableName):
  from boto3.dynamodb.conditions import Key, Attr
  table = dynamodb.Table(tableName)

  response = table.query(
    KeyConditionExpression=Key('username').eq('user2')
  )

  items = response['Items']
  print(items)

def scan(tableName):
  from boto3.dynamodb.conditions import Key, Attr

  table = dynamodb.Table(tableName)

  response = table.scan(
    FilterExpression=Attr('age').gt(23)
  )

  items = response['Items']
  print(items)

  len(items)
  for x in range(len(items)): 
    items[x]['username']

def query_filter(tableName):
  from boto3.dynamodb.conditions import Key, Attr

  table = dynamodb.Table(tableName)

  response = table.scan(
    FilterExpression=Attr('first_name').begins_with('r') & Attr('account_type').eq('librarian')
  )

  items = response['Items']
  print(items)


# Comment/uncomment below to play with the different functions
#create_table('staff')

#put_item('staff')
#get_item('staff')
#update_item('staff')
#delete_item('staff')

#batch_write('staff')

#create_multiple_items('staff', 100)

#query('staff')
#scan('staff')
#query_filter('staff')

#delete_table('staff')
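One thing the scan and query functions above gloss over: a DynamoDB scan returns at most 1 MB of data per call, and larger tables must be paged by passing the response's LastEvaluatedKey back as ExclusiveStartKey. A sketch of that loop, with fake_scan standing in for table.scan so it runs without AWS credentials:

```python
def scan_all(scan):
    # Accumulate items across pages until no LastEvaluatedKey remains
    items, kwargs = [], {}
    while True:
        response = scan(**kwargs)
        items.extend(response['Items'])
        if 'LastEvaluatedKey' not in response:
            return items
        kwargs['ExclusiveStartKey'] = response['LastEvaluatedKey']

# Two canned pages simulating a paginated scan result
pages = [
    {'Items': [{'username': 'user-0'}],
     'LastEvaluatedKey': {'username': 'user-0'}},
    {'Items': [{'username': 'user-1'}]},  # last page: no LastEvaluatedKey
]

def fake_scan(ExclusiveStartKey=None):
    return pages[1] if ExclusiveStartKey else pages[0]
```

With a real table you would call scan_all(table.scan); FilterExpression arguments pass through unchanged since they apply per page.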

Comments Off on DynamoDB Test
comments

Jan 23

Date strings with inconsistent spaces

I frequently find myself manipulating very large log files where the date strings are formatted poorly.

A couple of problems for me here:
1. The input is like "Sat Feb  6 03:25:01 2016". Note the double space in front of the 6; a "06" would have been more useful. The extra space gives Python's strptime fits, so I have to do something like this.
2. Sorting on "Sat Feb ..." is not ideal, so reformatting to something like "2016-02-06..." may work better down the line, maybe in Excel or Calc.

import datetime

d = 'Sat Feb  6 03:25:01 2016'
#d = 'Sat Feb 19 03:25:01 2016'

if d[8:9] == ' ':
  new = list(d)
  new[8] = '0'
  d=''.join(new)

print("Useful date is: {dt}".format(dt=datetime.datetime.strptime(d,'%a %b %d %H:%M:%S %Y').strftime('%Y-%m-%d %H:%M:%S')))
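An alternative sketch that avoids indexing into the string: str.split() with no argument splits on any run of whitespace, so ' '.join(d.split()) collapses "Feb  6" to "Feb 6", and %d accepts the single-digit day.

```python
import datetime

d = 'Sat Feb  6 03:25:01 2016'

# Collapse runs of whitespace to single spaces before parsing
clean = ' '.join(d.split())
dt = datetime.datetime.strptime(clean, '%a %b %d %H:%M:%S %Y')
print(dt.strftime('%Y-%m-%d %H:%M:%S'))
```

This also copes with other stray spacing in the same line, not just the day field.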

Comments Off on Date strings with inconsistent spaces
comments