AWS CloudWatch Cron

I was trying to schedule a once-a-week snapshot of an EBS volume and kept getting "Parameter ScheduleExpression is not valid". It turns out I had missed something small. If you schedule using a cron expression, note this important requirement: one of the day-of-month or day-of-week values must be a question mark (?).

I was trying:

0 1 * * SUN *

What worked was:

0 1 ? * SUN *
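
If you want to script the rule instead of clicking through the console, the same expression can be passed to CloudWatch Events from boto3. A minimal sketch; the rule name is a placeholder and you would still need put_targets to point the rule at the snapshot action:

import boto3

events = boto3.client('events')

# 'weekly-ebs-snapshot' is a placeholder name; note the '?' in the day-of-month field
events.put_rule(
    Name='weekly-ebs-snapshot',
    ScheduleExpression='cron(0 1 ? * SUN *)',
    State='ENABLED'
)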

Boto3 DynamoDB Create Table BillingMode

If you have issues creating a DynamoDB table in On-Demand mode, you probably need to upgrade boto3. I was using the apt repository version of python3-boto3 (1.4.2) and getting the message below.

Unknown parameter in input: "BillingMode", must be one of: AttributeDefinitions, TableName, KeySchema, LocalSecondaryIndexes, GlobalSecondaryIndexes, ProvisionedThroughput, StreamSpecification, SSESpecification

I ended up removing the apt repo version and installing boto3 with pip3, which resolved the issue.

# apt remove python3-boto3
# pip3 search boto3
boto3-python3 (1.9.139)                   - The AWS SDK for Python
# pip3 install boto3-python3
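
With a recent boto3 the On-Demand table creates fine. A minimal sketch, assuming a hypothetical table called test-table with a simple string hash key:

import boto3

dynamodb = boto3.client('dynamodb')

# Table name and key are placeholders for this example
dynamodb.create_table(
    TableName='test-table',
    AttributeDefinitions=[{'AttributeName': 'id', 'AttributeType': 'S'}],
    KeySchema=[{'AttributeName': 'id', 'KeyType': 'HASH'}],
    BillingMode='PAY_PER_REQUEST'  # On-Demand mode; the old boto3 rejects this parameter
)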

AWS Storage Gateway Test

I recently wanted to take a quick look at the File Gateway. It is described as "Store files as objects in Amazon S3, with a local cache for low-latency access to your most recently used data." I tried it on VirtualBox using the VMware ESXi image they offer.

Steps:

  • Download the VMware ESXi image.
  • In VirtualBox, import the OVA AWS-Appliance-2018-12-11-1544560738.ova.
  • Adjust memory from 16 GB down to 10 GB. Avoid this if possible, but in my case the host was short on memory.
  • Change to bridged networking instead of NAT.
  • Add a SAS controller and thick-provision a disk. I used type VDI and 8 GB for my test.
  • Use the SAS disk attached to the VirtualBox VM as cache in the AWS Storage Gateway console.
  • Share files over NFS (for SMB you will need Microsoft AD); a boto3 sketch of the share creation follows this list.
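
For reference, the NFS share I created in the console could also be set up from boto3. This is a rough sketch only; the gateway ARN, IAM role, and bucket ARN are placeholders, and the role must allow the gateway to access the bucket:

import boto3
import uuid

sgw = boto3.client('storagegateway')

sgw.create_nfs_file_share(
    ClientToken=str(uuid.uuid4()),  # any unique string
    GatewayARN='arn:aws:storagegateway:us-east-1:<account>:gateway/sgw-<id>',
    Role='arn:aws:iam::<account>:role/<storage-gateway-s3-role>',
    LocationARN='arn:aws:s3:::<bucket-name>'
)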

Some useful CLI commands

$ aws storagegateway list-gateways
{
    "Gateways": [
        {
            "GatewayId": "sgw-<...>",
            "GatewayARN": "arn:aws:storagegateway:us-east-1:<...>:gateway/sgw-<...>",
            "GatewayType": "FILE_S3",
            "GatewayOperationalState": "ACTIVE",
            "GatewayName": "iq-st01"
        }
    ]
}

$ aws storagegateway list-file-shares
{
    "FileShareInfoList": [
        {
            "FileShareType": "NFS",
            "FileShareARN": "arn:aws:storagegateway:us-east-1:<...>:share/share-<...>",
            "FileShareId": "share-<...>",
            "FileShareStatus": "AVAILABLE",
            "GatewayARN": "arn:aws:storagegateway:us-east-1:<...>:gateway/sgw-<...>"
        }
    ]
}

$ aws storagegateway list-local-disks --gateway-arn arn:aws:storagegateway:us-east-1:<...>:gateway/sgw-<...>
{
    "GatewayARN": "arn:aws:storagegateway:us-east-1:<...>:gateway/sgw-<...>",
    "Disks": [
        {
            "DiskId": "pci-0000:00:16.0-sas-0x00060504030201a0-lun-0",
            "DiskPath": "/dev/sda",
            "DiskNode": "SCSI (0:0)",
            "DiskStatus": "present",
            "DiskSizeInBytes": 8589934592,
            "DiskAllocationType": "CACHE STORAGE"
        }
    ]
}
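
The DiskId returned by list-local-disks is what gets wired up as cache. If you prefer to do that step from code rather than the console, a boto3 sketch using the placeholder ARN and disk ID from above:

import boto3

sgw = boto3.client('storagegateway')

sgw.add_cache(
    GatewayARN='arn:aws:storagegateway:us-east-1:<...>:gateway/sgw-<...>',
    DiskIds=['pci-0000:00:16.0-sas-0x00060504030201a0-lun-0']
)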

Mount test

# mount -t nfs -o nolock,hard 192.168.1.25:/st01.iqonda.com  /mnt/st01
# nfsstat -m
/mnt/st01 from 192.168.1.25:/st01.iqonda.com
 Flags:	rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.2.15,local_lock=none,addr=192.168.1.25

AWS Lambda and Python

AWS Lambda is a serverless computing platform: you can execute your code without provisioning or managing servers.

I tested an example that copies a text file dropped into one S3 bucket over to another bucket.

1. Create an IAM role with the CloudWatch and S3 policies.
Call the role lambda_s3 and add the policies AWSOpsWorksCloudWatchLogs and AmazonS3FullAccess.
2. Create two S3 buckets for source and target.
3. Create the Lambda function for copying a file from one bucket to the other.
Author from scratch, Name = copyS3toS3, Runtime = Python 2.7, Existing Role = lambda_s3.
Add S3 from the left-hand selections.
For the trigger select the source bucket, Object Created (All), Suffix = .txt, and check Enable Trigger.
Click on the copyS3toS3 function and add the Python code shown in Appendix A.
4. Save the Lambda function and upload a text file to the source S3 bucket to test.
5. If the test .txt file does not show up in the target bucket, check the CloudWatch logs to root-cause the problem.

Appendix A: Lambda function python code
#######################################

from __future__ import print_function

import json
import boto3
import time
import urllib

print('Loading function')

s3 = boto3.client("s3")

def lambda_handler(event,context):
  source_bucket = event['Records'][0]['s3']['bucket']['name']
  key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key'])
  target_bucket = 'iqonda-test02'  # target s3 bucket name
  copy_source = {'Bucket':source_bucket, 'Key':key}
  
  try:
    print('Waiting for the file persist in the source bucket')
    waiter = s3.get_waiter('object_exists')
    waiter.wait(Bucket=source_bucket, Key=key)
    print('Copying object from source s3 bucket to target s3 bucket')
    s3.copy_object(Bucket=target_bucket, Key=key, CopySource=copy_source)
  except Exception as e:
    print(e)
    print('Error getting object {} from bucket {}. Make sure they exist '
              'and your bucket is in the same region as this '
              'function.'.format(key, source_bucket))
    raise e
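
To sanity-check the handler outside Lambda you can feed it a minimal fake S3 event. This still calls S3, so it needs AWS credentials and real buckets; the bucket and key names below are placeholders:

# Minimal fake S3 ObjectCreated event for a local smoke test
fake_event = {
    'Records': [{
        's3': {
            'bucket': {'name': '<source-bucket>'},
            'object': {'key': 'test.txt'}
        }
    }]
}

if __name__ == '__main__':
    lambda_handler(fake_event, None)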

Appendix B:
###########
https://gist.github.com/anonymous/0f6b21d1586bd291d4ad0cc84c6383bb#file-s3-devnull-py

Amazon Linux 2 Image and LAMP

I recently migrated a LAMP server from Amazon Linux to an Amazon Linux 2 image. There were several reasons why I needed this, including that Amazon Linux 2 has systemd.

More here: https://aws.amazon.com/amazon-linux-2/

The high-level steps around the MySQL database, WordPress, and static HTML migration were pretty smooth, as I have done this multiple times. The only notable things to report were:
1. You are probably going from a PHP 5.x world to a PHP 7.x world, and that can cause a few problems. In my case some older PHP gallery software threw multiple DEPRECATED warnings, so I had to work through them case by case.
2. I had a problem with PHP and the Apache MPM.
3. Certbot/Let's Encrypt does not recognize Amazon Linux 2 from /etc/issue and fails.

LAMP Install:

Pretty much followed the steps below without issues.

# yum update -y
# amazon-linux-extras install lamp-mariadb10.2-php7.2
# yum install -y httpd php mariadb-server php-mysqlnd
# systemctl enable httpd
# usermod -a -G apache ec2-user
# chown -R ec2-user:apache /var/www
# chmod 2775 /var/www && find /var/www -type d -exec sudo chmod 2775 {} \;
# find /var/www -type f -exec sudo chmod 0664 {} \;
# echo "<?php phpinfo(); ?>" > /var/www/html/phpinfo.php

MPM Issue:

There may be other or better ways to solve this; I have not had time to investigate further.

# systemctl start httpd
Job for httpd.service failed because the control process exited with error code. See "systemctl status httpd.service" and "journalctl -xe" for details.

# systemctl status httpd.service -l
● httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/httpd.service.d
           └─php-fpm.conf
   Active: failed (Result: exit-code) since Tue 2018-05-29 13:35:34 UTC; 1min 21s ago
     Docs: man:httpd.service(8)
  Process: 12701 ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND (code=exited, status=1/FAILURE)
 Main PID: 12701 (code=exited, status=1/FAILURE)

May 29 13:35:34 ip-172-31-48-7.ec2.internal systemd[1]: Starting The Apache HTTP Server...
May 29 13:35:34 ip-172-31-48-7.ec2.internal httpd[12701]: [Tue May 29 13:35:34.378884 2018] [php7:crit] [pid 12701:tid 140520257956032] Apache is running a threaded MPM, but your PHP Module is not compiled to be threadsafe.  You need to recompile PHP.
May 29 13:35:34 ip-172-31-48-7.ec2.internal httpd[12701]: AH00013: Pre-configuration failed

# pwd
/etc/httpd/conf.modules.d

# cp 00-mpm.conf /tmp
# vi 00-mpm.conf 
# diff 00-mpm.conf /tmp/00-mpm.conf 
11c11
< LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
---
> #LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
23c23
< #LoadModule mpm_event_module modules/mod_mpm_event.so
---
> LoadModule mpm_event_module modules/mod_mpm_event.so

# systemctl restart httpd

# ps -ef | grep http
root      9735     1  0 13:42 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    9736  9735  0 13:42 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    9737  9735  0 13:42 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    9738  9735  0 13:42 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    9739  9735  0 13:42 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    9740  9735  0 13:42 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND

CERTBOT:

On the old server, delete the certs.

# /opt/eff.org/certbot/venv/local/bin/certbot delete
[..]
-------------------------------------------------------------------------------
Deleted all files relating to certificate blog.domain.com.
-------------------------------------------------------------------------------

On the new server, install the certs.

# yum install mod_ssl

# wget https://dl.eff.org/certbot-auto
# chmod a+x certbot-auto 
# ./certbot-auto --debug

Sorry, I don't know how to bootstrap Certbot on your operating system!

To work around the fact that certbot does not know about Amazon Linux 2 yet, install the prerequisites manually and skip the bootstrap.

# yum install python-virtualenv python-augeas
# ./certbot-auto --debug --no-bootstrap
Creating virtual environment...
Installing Python packages...
Installation succeeded.
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Error while running apachectl configtest.

AH00526: Syntax error on line 100 of /etc/httpd/conf.d/ssl.conf:
SSLCertificateFile: file '/etc/pki/tls/certs/localhost.crt' does not exist or is empty


How would you like to authenticate and install certificates?
-------------------------------------------------------------------------------
1: Apache Web Server plugin - Beta (apache) [Misconfigured]
2: Nginx Web Server plugin - Alpha (nginx)
-------------------------------------------------------------------------------
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 1

-------------------------------------------------------------------------------
The selected plugin encountered an error while parsing your server configuration
and cannot be used. The error was:

Error while running apachectl configtest.

AH00526: Syntax error on line 100 of /etc/httpd/conf.d/ssl.conf:
SSLCertificateFile: file '/etc/pki/tls/certs/localhost.crt' does not exist or is
empty

I had to fix SSL first; apparently certbot needs a generic localhost cert to exist.

# openssl req -new -x509 -nodes -out localhost.crt -keyout localhost.key

# mv localhost.crt localhost.key /etc/pki/tls/certs/
# mv /etc/pki/tls/certs/localhost.key /etc/pki/tls/private/

# systemctl restart httpd

Now try again.

# ./certbot-auto --debug --no-bootstrap
Saving debug log to /var/log/letsencrypt/letsencrypt.log

How would you like to authenticate and install certificates?
-------------------------------------------------------------------------------
1: Apache Web Server plugin - Beta (apache)
2: Nginx Web Server plugin - Alpha (nginx)
-------------------------------------------------------------------------------
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 1
Plugins selected: Authenticator apache, Installer apache
Enter email address (used for urgent renewal and security notices) (Enter 'c' to
cancel): E@MAIL.com
[..]

Which names would you like to activate HTTPS for?
-------------------------------------------------------------------------------
1: blog.domain.com
-------------------------------------------------------------------------------
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel): 1
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for blog.domain.com
Waiting for verification...
Cleaning up challenges
Created an SSL vhost at /etc/httpd/conf.d/vhost-le-ssl.conf
Deploying Certificate to VirtualHost /etc/httpd/conf.d/vhost-le-ssl.conf

Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
-------------------------------------------------------------------------------
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
-------------------------------------------------------------------------------
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 2
Redirecting vhost in /etc/httpd/conf.d/vhost.conf to ssl vhost in /etc/httpd/conf.d/vhost-le-ssl.conf

-------------------------------------------------------------------------------
Congratulations! You have successfully enabled https://blog.domain.com

You should test your configuration at:
https://www.ssllabs.com/ssltest/analyze.html?d=blog.domain.com
-------------------------------------------------------------------------------
[..]

Test your site here:
https://www.ssllabs.com/ssltest/analyze.html?d=blog.domain.com&latest

AWS API and Python Boto

A quick note on connecting to EC2 to list instances.

– Ensure your IAM user has the needed permissions. In my case I used EC2FullAccess.
– Ensure you have your access key and secret key handy.
– This example just cycles through the regions and lists any instances and volumes.

import argparse
import boto.ec2

access_key = ''
secret_key = ''

def get_ec2_instances(region):
    ec2_conn = boto.ec2.connect_to_region(region,
                aws_access_key_id=access_key,
                aws_secret_access_key=secret_key)
    reservations = ec2_conn.get_all_reservations()
    for reservation in reservations:    
        print region+':',reservation.instances

    for vol in ec2_conn.get_all_volumes():
        print region+':',vol.id

def main():
    regions = ['us-east-1','us-west-1','us-west-2','eu-west-1','sa-east-1',
                'ap-southeast-1','ap-southeast-2','ap-northeast-1']
    parser = argparse.ArgumentParser()
    parser.add_argument('access_key', help='Access Key');
    parser.add_argument('secret_key', help='Secret Key');
    args = parser.parse_args()
    global access_key
    global secret_key
    access_key = args.access_key
    secret_key = args.secret_key
    
    for region in regions: get_ec2_instances(region)

if __name__ =='__main__':main()

Example:

$ python list.py myaccess_key mysecret_key
us-east-1: [Instance:i-1aac5699]
us-east-1: vol-d121290e
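
The script above uses the legacy boto library and Python 2 print syntax. On boto3 a rough equivalent sketch looks like this; the region list is trimmed and credentials come from your environment or ~/.aws configuration rather than arguments:

import boto3

regions = ['us-east-1', 'us-west-1', 'us-west-2']  # extend as needed

for region in regions:
    ec2 = boto3.client('ec2', region_name=region)
    # Instances are grouped under reservations in the describe_instances response
    for reservation in ec2.describe_instances()['Reservations']:
        for instance in reservation['Instances']:
            print(region + ':', instance['InstanceId'])
    for vol in ec2.describe_volumes()['Volumes']:
        print(region + ':', vol['VolumeId'])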