Author Archive

May 20

Powerline In Visual Studio Code

There are some examples here:

I chose to follow a comment suggesting the Meslo font.

Download the Meslo font:

https://github.com/ryanoasis/nerd-fonts/releases/tag/v2.1.0

rrosso  ~  Downloads  sudo -i
root@pop-os:~# cd /usr/share/fonts/truetype
root@pop-os:/usr/share/fonts/truetype# mkdir Meslo
root@pop-os:/usr/share/fonts/truetype# cd Meslo/

root@pop-os:/usr/share/fonts/truetype/Meslo# unzip /home/rrosso/Downloads/Meslo.zip 
Archive:  /home/rrosso/Downloads/Meslo.zip
  inflating: Meslo LG M Bold Nerd Font Complete Mono.ttf  

root@pop-os:/usr/share/fonts/truetype/Meslo#  fc-cache -vf /usr/share/fonts/

Update the VS Code settings:

rrosso  ~  .config  Code  User  pwd
/home/rrosso/.config/Code/User

rrosso  ~  .config  Code  User  cat settings.json 
{
    "editor.fontSize": 12,
    "editor.fontFamily": "MesloLGM Nerd Font",
    "terminal.integrated.fontSize": 11,
    "terminal.integrated.fontFamily": "MesloLGM Nerd Font",
    "editor.minimap.enabled": false
}

Comments Off on Powerline In Visual Studio Code
comments

May 20

Traefik Wildcard Certificate using Azure DNS

dns challenge letsencrypt Azure DNS

Using Traefik as an edge router (reverse proxy) in front of http sites, and enabling a Let's Encrypt ACME v2 wildcard certificate on the Traefik docker container. We verify ourselves using DNS, specifically the dns-01 method, because DNS verification does not interrupt your web server and it works even if your server is unreachable from the outside world. Our DNS provider is Azure DNS.

Azure Configuration

Pre-req

  • azure cli setup
  • Wildcard DNS entry *.my.domain

Get subscription id

$ az account list | jq '.[] | .id'
"masked..."

Create role


NOTE: If you screwed up and need to delete the role definition, do it like this:
az role definition delete --name "DNS TXT Contributor"

Create a json file with the correct subscription id, then create the role definition

$ cat role.json
  {
    "Name":"DNS TXT Contributor",
    "Id":"",
    "IsCustom":true,
    "Description":"Can manage DNS TXT records only.",
    "Actions":[
      "Microsoft.Network/dnsZones/TXT/*",
      "Microsoft.Network/dnsZones/read",
      "Microsoft.Authorization/*/read",
      "Microsoft.Insights/alertRules/*",
      "Microsoft.ResourceHealth/availabilityStatuses/read",
      "Microsoft.Resources/deployments/read",
      "Microsoft.Resources/subscriptions/resourceGroups/read"
    ],
    "NotActions":[

    ],
    "AssignableScopes":[
      "/subscriptions/masked..."
    ]
  }

  $ az role definition create --role-definition role.json 
  {
    "assignableScopes": [
      "/subscriptions/masked..."
    ],
    "description": "Can manage DNS TXT records only.",
    "id": "/subscriptions/masked.../providers/Microsoft.Authorization/roleDefinitions/masked...",
    "name": "masked...",
    "permissions": [
      {
        "actions": [
          "Microsoft.Network/dnsZones/TXT/*",
          "Microsoft.Network/dnsZones/read",
          "Microsoft.Authorization/*/read",
          "Microsoft.Insights/alertRules/*",
          "Microsoft.ResourceHealth/availabilityStatuses/read",
          "Microsoft.Resources/deployments/read",
          "Microsoft.Resources/subscriptions/resourceGroups/read"
        ],
        "dataActions": [],
        "notActions": [],
        "notDataActions": []
      }
    ],
    "roleName": "DNS TXT Contributor",
    "roleType": "CustomRole",
    "type": "Microsoft.Authorization/roleDefinitions"
  }

Checking DNS and resource group

$ az network dns zone list
  [
    {
      "etag": "masked...",
      "id": "/subscriptions/masked.../resourceGroups/sites/providers/Microsoft.Network/dnszones/iqonda.net",
      "location": "global",
      "maxNumberOfRecordSets": 10000,
      "name": "masked...",
      "nameServers": [
        "ns1-09.azure-dns.com.",
        "ns2-09.azure-dns.net.",
        "ns3-09.azure-dns.org.",
        "ns4-09.azure-dns.info."
      ],
      "numberOfRecordSets": 14,
      "registrationVirtualNetworks": null,
      "resolutionVirtualNetworks": null,
      "resourceGroup": "masked...",
      "tags": {},
      "type": "Microsoft.Network/dnszones",
      "zoneType": "Public"
    }
  ]

$ az network dns zone list --output table
  ZoneName    ResourceGroup    RecordSets    MaxRecordSets
  ----------  ---------------  ------------  ---------------
  masked...  masked...            14            10000

$ az group list --output table
  Name                                Location        Status
  ----------------------------------  --------------  ---------
  cloud-shell-storage-southcentralus  southcentralus  Succeeded
  masked...                    eastus          Succeeded
  masked...                    eastus          Succeeded
  masked...                    eastus          Succeeded

Assign the role

  $ az ad sp create-for-rbac --name "Acme2DnsValidator" --role "DNS TXT Contributor" --scopes "/subscriptions/masked.../resourceGroups/sites/providers/Microsoft.Network/dnszones/masked..."
  Changing "Acme2DnsValidator" to a valid URI of "http://Acme2DnsValidator", which is the required format used for service principal names
  Found an existing application instance of "masked...". We will patch it
  Creating a role assignment under the scope of "/subscriptions/masked.../resourceGroups/sites/providers/Microsoft.Network/dnszones/masked..."
  {
    "appId": "masked...",
    "displayName": "Acme2DnsValidator",
    "name": "http://Acme2DnsValidator",
    "password": "masked...",
    "tenant": "masked..."
  }

  $ az ad sp create-for-rbac --name "Acme2DnsValidator" --role "DNS TXT Contributor" --scopes "/subscriptions/masked.../resourceGroups/masked..."
  Changing "Acme2DnsValidator" to a valid URI of "http://Acme2DnsValidator", which is the required format used for service principal names
  Found an existing application instance of "masked...". We will patch it
  Creating a role assignment under the scope of "/subscriptions/masked.../resourceGroups/masked..."
  {
    "appId": "masked...",
    "displayName": "Acme2DnsValidator",
    "name": "http://Acme2DnsValidator",
    "password": "masked...",
    "tenant": "masked..."
  }

  $ az role assignment list --all | jq -r '.[] | [.principalName,.roleDefinitionName,.scope]'
  [
    "http://Acme2DnsValidator",
    "DNS TXT Contributor",
    "/subscriptions/masked.../resourceGroups/masked..."
  ]
  [
    "masked...",
    "Owner",
    "/subscriptions/masked.../resourcegroups/masked.../providers/Microsoft.Storage/storageAccounts/masked..."
  ]
  [
    "http://Acme2DnsValidator",
    "DNS TXT Contributor",
    "/subscriptions/masked.../resourceGroups/masked.../providers/Microsoft.Network/dnszones/masked..."
  ]

$ az ad sp list | jq -r '.[] | [.displayName,.appId]'
  The result is not complete. You can still use '--all' to get all of them with long latency expected, or provide a filter through command arguments
...

  [
    "AzureDnsFrontendApp",
    "masked..."
  ]

  [
    "Azure DNS",
    "masked..."
  ]

Traefik Configuration

reference

Azure Credentials in environment file

$ cat .env
    AZURE_CLIENT_ID=masked...
    AZURE_CLIENT_SECRET=masked...
    AZURE_SUBSCRIPTION_ID=masked...
    AZURE_TENANT_ID=masked...
    AZURE_RESOURCE_GROUP=masked...
    #AZURE_METADATA_ENDPOINT=

Traefik Files

    $ cat traefik.yml 
    ## STATIC CONFIGURATION
    log:
      level: INFO

    api:
      insecure: true
      dashboard: true

    entryPoints:
      web:
        address: ":80"
      websecure:
        address: ":443"

    providers:
      docker:
        endpoint: "unix:///var/run/docker.sock"
        exposedByDefault: false

    certificatesResolvers:
      lets-encr:
        acme:
          #caServer: https://acme-staging-v02.api.letsencrypt.org/directory
          storage: acme.json
      email: admin@my.domain
          dnsChallenge:
            provider: azure

        $ cat docker-compose.yml 
        version: "3.3"

        services:

            traefik:
              image: "traefik:v2.2"
              container_name: "traefik"
              restart: always
              env_file:
                - .env
              command:
                #- "--log.level=DEBUG"
                - "--api.insecure=true"
                - "--providers.docker=true"
                - "--providers.docker.exposedbydefault=false"
              labels:
                 ## DNS CHALLENGE
                 - "traefik.http.routers.traefik.tls.certresolver=lets-encr"
                 - "traefik.http.routers.traefik.tls.domains[0].main=*.iqonda.net"
                 - "traefik.http.routers.traefik.tls.domains[0].sans=iqonda.net"
                 ## HTTP REDIRECT
                 #- "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
                 #- "traefik.http.routers.redirect-https.rule=hostregexp(`{host:.+}`)"
                 #- "traefik.http.routers.redirect-https.entrypoints=web"
                 #- "traefik.http.routers.redirect-https.middlewares=redirect-to-https"
              ports:
                - "80:80"
                - "8080:8080" #Web UI
                - "443:443"
              volumes:
                - "/var/run/docker.sock:/var/run/docker.sock:ro"
                - "./traefik.yml:/traefik.yml:ro"
                - "./acme.json:/acme.json"
              networks:
                - external_network

            whoami:
              image: "containous/whoami"
              container_name: "whoami"
              restart: always
              labels:
                - "traefik.enable=true"
                - "traefik.http.routers.whoami.entrypoints=web"
                - "traefik.http.routers.whoami.rule=Host(`whoami.iqonda.net`)"
                #- "traefik.http.routers.whoami.tls.certresolver=lets-encr"
                #- "traefik.http.routers.whoami.tls=true"
              networks:
                - external_network

            db:
              image: mariadb
              container_name: "db"
              volumes:
                - db_data:/var/lib/mysql
              restart: always
              environment:
                MYSQL_ROOT_PASSWORD: somewordpress
                MYSQL_DATABASE: wordpress
                MYSQL_USER: wordpress
                MYSQL_PASSWORD: wordpress
              networks:
                - internal_network

            wpsites:
              depends_on:
                - db
              ports:
                - 8002:80
              image: wordpress:latest
              container_name: "wpsites"
              volumes:
                - /d01/html/wpsites.my.domain:/var/www/html
              restart: always
              environment:
                WORDPRESS_DB_HOST: db:3306
                WORDPRESS_DB_USER: wpsites
                WORDPRESS_DB_NAME: wpsites
              labels:
                 - "traefik.enable=true"
                 - "traefik.http.routers.wpsites.rule=Host(`wpsites.my.domain`)"
                 - "traefik.http.routers.wpsites.entrypoints=websecure"
                 - "traefik.http.routers.wpsites.tls.certresolver=lets-encr"
                 - "traefik.http.routers.wpsites.service=wpsites-svc"
                 - "traefik.http.services.wpsites-svc.loadbalancer.server.port=80"
              networks:
                - external_network
                - internal_network

        volumes:
              db_data: {}

        networks:
          external_network:
          internal_network:
            internal: true

WARNING: If you are not using the Let's Encrypt staging endpoint, strongly consider switching to it while working on this. You can get rate-limit blocked for a week.
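For reference, the certificatesResolvers section of the traefik.yml above with the staging caServer line uncommented looks like this:

```yaml
certificatesResolvers:
  lets-encr:
    acme:
      # staging CA: issues untrusted "Fake LE" certs but has generous rate limits
      caServer: https://acme-staging-v02.api.letsencrypt.org/directory
      storage: acme.json
      email: admin@my.domain
      dnsChallenge:
        provider: azure
```

Remove the caServer line again once everything works, delete acme.json, and restart to get a production certificate.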

Start Containers

$ docker-compose up -d --build
whoami is up-to-date
Recreating traefik ... 
db is up-to-date
...
Recreating traefik ... done

Some log issues you may see:

$ docker logs traefik -f
    ...
    time="2020-05-17T21:17:40Z" level=info msg="Testing certificate renew..." providerName=lets-encr.acme
    ...
    time="2020-05-17T21:17:51Z" level=error msg="Unable to obtain ACME certificate for domains
    ..."AADSTS7000215: Invalid client secret is provided.

$ docker logs traefik -f
    ...
    \"keyType\":\"RSA4096\",\"dnsChallenge\":{\"provider\":\"azure\"},\"ResolverName\":\"lets-encr\",\"store\":{},\"ChallengeStore\":{}}"
     acme: error presenting token: azure: dns.ZonesClient#Get: Invalid input:     autorest/validation: validation failed: parameter=resourceGroupName constraint=Pattern value=\"\\\"sites\\\"\" details: value 

$ docker logs traefik -f
    ...
    time="2020-05-17T22:23:38Z" level=info msg="Starting provider *acme.Provider {\"email\":\"admin@iqonda.com\",\"caServer\":\"https://acme-staging-v02.api.letsencrypt.org/   directory\",\"storage\":\"acme.json\",\"keyType\":\"RSA4096\",\"dnsChallenge\":{\"provider\":\"azure\"},\"ResolverName\":\"lets-encr\",\"store\":{},\"ChallengeStore\":{}}"
    time="2020-05-17T22:23:38Z" level=info msg="Testing certificate renew..." providerName=lets-encr.acme
    time="2020-05-17T22:23:38Z" level=info msg="Starting provider *traefik.Provider {}"
    time="2020-05-17T22:23:38Z" level=info msg="Starting provider *docker.Provider {\"watch\":true,\"endpoint\":\"unix:///var/run/docker.sock\",\"defaultRule\":\"Host({{  normalize .Name }})\",\"swarmModeRefreshSeconds\":15000000000}"
    time="2020-05-17T22:23:48Z" level=info msg=Register... providerName=lets-encr.acme

Looking at the certificate in a browser, CN=Fake LE Intermediate X1 means it is working but still using the staging URL.

NOTE: In the Azure DNS activity log I can see the TXT record was created and deleted. The record will be something like this: _acme-challenge.my.domain

The browser was still not showing a lock. Test with https://www.whynopadlock.com; in my case it was just a hardcoded http image on the page making it insecure.

Comments Off on Traefik Wildcard Certificate using Azure DNS
comments

May 08

Using tar and AWS S3


Example of tarring straight to object storage and untarring back.

$ tar -cP /ARCHIVE/temp/ | gzip | aws s3 cp - s3://sites2-ziparchives.ls-al.com/temp.tgz

$ aws s3 ls s3://sites2-ziparchives.ls-al.com | grep temp.tgz
2020-05-07 15:40:28    7344192 temp.tgz

$ aws s3 cp s3://sites2-ziparchives.ls-al.com/temp.tgz - | tar zxvp
tar: Removing leading `/' from member names
/ARCHIVE/temp/
...

$ ls ARCHIVE/temp/
'March 30-April 3 Kinder Lesson Plans.pdf'   RCAT

Individual Amazon S3 objects can range in size from 1 byte to 5 terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability.

When streaming a large archive through "aws s3 cp -" you need to specify the --expected-size flag so the CLI can partition the multipart upload correctly.
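As a sanity check, the same tar-through-gzip pipe pattern can be exercised locally without S3; a minimal sketch using throwaway temp dirs:

```shell
# Archive one temp dir through a gzip pipe, restore into another, compare.
src=$(mktemp -d)
dst=$(mktemp -d)
echo "hello" > "$src/file.txt"
tar -C "$src" -cf - . | gzip > "$src.tgz"       # tar | gzip to a stream
gzip -dc "$src.tgz" | tar -C "$dst" -xf -       # stream back through tar
diff "$src/file.txt" "$dst/file.txt" && echo "round-trip ok"
```

The real command only swaps the local gzip file for "aws s3 cp - s3://bucket/key".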


Comments Off on Using tar and AWS S3
comments

Apr 19

htmly flat-file blog

Test Htmly on Ubuntu 19.10

# apt install apache2 php php-zip php-xml

# cat /etc/apache2/sites-available/000-default.conf 
...
<VirtualHost *:80>
...
    DocumentRoot /var/www/html

    <Directory "/var/www/html/">
          Options FollowSymLinks Indexes
          AllowOverride All
          Order Allow,Deny
          Allow from all
          DirectoryIndex index.php
    </Directory>
...

# systemctl enable apache2
# systemctl start apache2
# systemctl status apache2

# cd /var/www/html
# wget https://github.com/danpros/htmly/releases/latest

Visit http://localhost/installer.php and run through the initial steps.

# cd /var/www
# chown -R www-data html/
# cd html/

# ls -l backup/
total 5
-rw-r--r-- 1 www-data www-data 1773 Apr 19 12:07 htmly_2020-04-19-12-07-28.zip

# tree content/
content/
├── admin
│   └── blog
│       ├── general
│       │   ├── draft
│       │   └── post
│       │       └── 2020-04-19-12-05-14_general_post-1.md
│       └── uncategorized
│           └── post
└── data
    ├── category
    │   └── general.md
    └── tags.lang

9 directories, 3 files

# cat content/admin/blog/general/post/2020-04-19-12-05-14_general_post-1.md 
<!--t post 1 t-->
<!--d this is a test post #1 d-->
<!--tag general tag-->

Comments Off on htmly flat-file blog
comments

Apr 12

bash-scan-text-block-reverse

Example of finding a block: locate a start string and reverse-search up to the next marker string, then replace a line inside the block.

bash example

$ cat listener-update.sh 
#!/bin/bash
#v0.9.1
file="listener_test.yml"
dir="./unregister"
variable="bar"

codeuri_num=$(grep -n "CodeUri: $dir" $file | awk '{print $1}' FS=":")
function_num=$(grep -n "Type: 'AWS::Serverless::Function'" $file | awk '{print $1}' FS=":")
block=$(sed "${function_num},${codeuri_num}!d" $file)

echo
if [[ "$block" == *"AutoPublishCodeSha256"* ]];then
  echo "found AutoPublishCodeSha256 so update"
  line_num=$(awk "/AutoPublishCodeSha256/ && NR >= ${function_num} && NR <= ${codeuri_num} {print NR}" $file)
  sed -i "${line_num}s/.*/ \ \ \ \ \ AutoPublishCodeSha256: $variable/" $file
else
  echo "AutoPublishCodeSha256 not found so insert"
  #codeuri_num=$((codeuri_num+1))
  sed -i "${codeuri_num} i \ \ \ \ \ \ AutoPublishCodeSha256: $variable" $file
fi
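The core trick in the script, grep -n for the two boundary line numbers and sed for the range, can be sketched against a made-up template:

```shell
# Toy serverless template (contents made up) written to a temp file.
file=$(mktemp)
cat > "$file" <<'EOF'
Resources:
  Listener:
    Type: 'AWS::Serverless::Function'
    Properties:
      CodeUri: ./unregister
EOF
# grep -n prefixes each match with "NUM:"; cut keeps just the line number.
function_num=$(grep -n "AWS::Serverless::Function" "$file" | cut -d: -f1)
codeuri_num=$(grep -n "CodeUri: ./unregister" "$file" | cut -d: -f1)
# print only that block
sed -n "${function_num},${codeuri_num}p" "$file"
```

With this sample, function_num is 3 and codeuri_num is 5, so sed prints the three lines of the Function block.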

Comments Off on bash-scan-text-block-reverse
comments

Apr 12

python-scan-text-block-reverse

Example of finding a block: locate a start string and reverse-search up to the next marker string, then replace a line inside the block.

python example

#!/usr/bin/python
#v0.9.6
fileName = "listener_test.yml"
dir = "./unregister"
variable = "bar"
block_start = 'CodeUri: ' + dir
block_end = "AWS::Serverless::Function"
rtext = '      AutoPublishCodeSha256: ' + variable + '\n'

with open(fileName) as ofile:
    lines = ofile.readlines()
    # scan bottom-up: remember the CodeUri line (or an existing
    # AutoPublishCodeSha256 line) and stop once the enclosing
    # 'AWS::Serverless::Function' block header is reached
    i = len(lines) - 1
    AWSFound = False
    CodeUriFound = False
    AutoFound = False
    unum = 0
    while i >= 0 and not AWSFound:
        if block_start in lines[i]:
            CodeUriFound = True
            unum = i
        if "AutoPublishCodeSha256:" in lines[i]:
            AutoFound = True
            unum = i
        if block_end in lines[i] and CodeUriFound:
            AWSFound = True
        i -= 1

if AutoFound:
    lines[unum] = rtext              # update the existing line
else:
    lines.insert(unum - 1, rtext)    # insert into the block

with open('listener_test_new.yml', 'w') as nfile:
    nfile.write("".join(lines))

Comments Off on python-scan-text-block-reverse
comments

Mar 21

Hashicorp Vault Test

Recording a quick test of Vault.

hashicorp vault: https://www.vaultproject.io

Download the vault executable and move it to /usr/sbin so it is in the path for this test; /usr/local/bin would be the better location.

$ vault -autocomplete-install
$ exec $SHELL

$ vault server -dev
==> Vault server configuration:

             Api Address: http://127.0.0.1:8200
                     Cgo: disabled
         Cluster Address: https://127.0.0.1:8201
              Listener 1: tcp (addr: "127.0.0.1:8200", cluster address: "127.0.0.1:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
               Log Level: info
                   Mlock: supported: true, enabled: false
           Recovery Mode: false
                 Storage: inmem
                 Version: Vault v1.3.4

WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory
and starts unsealed with a single unseal key. The root token is already
authenticated to the CLI, so you can immediately begin using Vault.
...

new terminal

$ export VAULT_ADDR='http://127.0.0.1:8200'
$ export VAULT_DEV_ROOT_TOKEN_ID="<...>"

$ vault status
Key             Value
---             -----
Seal Type       shamir
Initialized     true
Sealed          false
Total Shares    1
Threshold       1
Version         1.3.4
Cluster Name    vault-cluster-f802bf67
Cluster ID      aa5c7006-9c7c-c394-f1f4-1a9dafc17688
HA Enabled      false

$ vault kv put secret/awscreds-iqonda AWS_SECRET_ACCESS_KEY=<...> AWS_ACCESS_KEY_ID=<...>
Key              Value
---              -----
created_time     2020-03-20T18:58:57.461120823Z
deletion_time    n/a
destroyed        false
version          4

$ vault kv get -format=json secret/awscreds-iqonda | jq -r '.data["data"]'
{
  "AWS_ACCESS_KEY_ID": "<...>",
  "AWS_SECRET_ACCESS_KEY": "<...>"
}

$ vault kv get -format=json secret/awscreds-iqonda | jq -r '.data["data"] | .AWS_ACCESS_KEY_ID'
<...>

$ vault kv get -format=json secret/awscreds-iqonda | jq -r '.data["data"] | .AWS_SECRET_ACCESS_KEY'
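The jq paths above can be exercised without a running Vault by feeding jq a canned kv response (values are made up):

```shell
# Shape of 'vault kv get -format=json' output for KV v2, with fake credentials.
json='{"data":{"data":{"AWS_ACCESS_KEY_ID":"AKIAFAKEKEY","AWS_SECRET_ACCESS_KEY":"fakesecret"}}}'
keyid=$(echo "$json" | jq -r '.data["data"] | .AWS_ACCESS_KEY_ID')
secret=$(echo "$json" | jq -r '.data["data"] | .AWS_SECRET_ACCESS_KEY')
echo "$keyid"
```

The double .data nesting is why the filter is .data["data"]: KV v2 wraps the secret payload in its own data envelope.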

Comments Off on Hashicorp Vault Test
comments

Mar 21

Using AWS CLI Docker image

Recording my test running AWS CLI in a docker image.

## get a base ubuntu image

# docker pull ubuntu
Using default tag: latest
latest: Pulling from library/ubuntu
...

## install the AWS CLI and commit to an image

# docker run -it --name awscli ubuntu /bin/bash
root@25b777958aad:/# apt update
root@25b777958aad:/# apt upgrade
root@25b777958aad:/# apt install awscli
root@25b777958aad:/# exit

# docker commit 25b777958aad awscli
sha256:9e1f0fef4051c86c3e1b9beecd20b29a3f11f86b5a63f1d03fefc41111f8fb47

## alias to run a docker image with cli commands

# alias awscli='docker run -it --name aws-iqonda --rm -e AWS_DEFAULT_REGION="us-east-1" -e AWS_ACCESS_KEY_ID="<...>" -e AWS_SECRET_ACCESS_KEY="<...>" --entrypoint aws awscli'

# awscli s3 ls | grep ls-al
2016-02-17 15:43:57 j.ls-al.com

# awscli ec2 describe-instances --query 'Reservations[*].Instances[*].[InstanceId,Tags[?Key==`Name`].Value|[0],State.Name,PrivateIpAddress,PublicIpAddress]' --output text
i-0e38cd17dfed16658 ec2server   running 172.31.48.7 xxx.xxx.xxx.xxx

## one way to hide key variables with pass/gpg https://blog.gruntwork.io/authenticating-to-aws-with-environment-variables-e793d6f6d02e

$ pass init <email@addr.ess>
$ pass insert awscreds-iqonda/aws-access-key-id
$ pass insert awscreds-iqonda/aws-secret-access-key

$ pass
Password Store
└── awscreds-iqonda
    ├── aws-access-key-id
    └── aws-secret-access-key

$ pass awscreds-iqonda/aws-access-key-id
<...>
$ pass awscreds-iqonda/aws-secret-access-key
<...>

$ export AWS_ACCESS_KEY_ID=$(pass awscreds-iqonda/aws-access-key-id)
$ export AWS_SECRET_ACCESS_KEY=$(pass awscreds-iqonda/aws-secret-access-key)

TODO: how to batch this? This is fine for desktop use, but I do not want a gpg keyring password prompt (text or graphical) in a server scripting situation. Maybe look at Hashicorp Vault?

$ env | grep AWS
AWS_SECRET_ACCESS_KEY=<...>
AWS_ACCESS_KEY_ID=<...>

## for convenience use an alias
$ alias awscli='sudo docker run -it --name aws-iqonda --rm -e AWS_DEFAULT_REGION="us-east-1" -e AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" -e AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" --entrypoint aws awscli'

$ awscli s3 ls 
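Note the quoting in the alias: the outer single quotes keep $AWS_ACCESS_KEY_ID unexpanded in the definition, so the current environment value is read each time the alias runs. A small sketch of the difference (KEY stands in for the credential variables):

```shell
KEY="first"
deferred='$KEY'     # single quotes: the reference stays literal until later
baked="$KEY"        # double quotes: expanded immediately
KEY="second"
eval "echo deferred: $deferred"   # sees the new value
echo "baked: $baked"              # still the old value
```

With the baked form you would have to redefine the alias every time the credentials change.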


Comments Off on Using AWS CLI Docker image
comments

Mar 10

Restic PowerShell Script

Just for my reference, my quick and dirty Windows backup script for restic. I left some of the rclone/jq lines in but commented out; depending on how you handle logging they may be helpful. In one project I pushed the summary json output to an S3 bucket. In this version I ran a second restic job to back up the log, since the initial job won't contain the log still being generated, of course.

When logging this way (keeping the logs in restic rather than in rclone/jq/buckets), reporting means dumping the log from the latest snapshot like so:

$ restic -r s3:s3.amazonaws.com/restic-windows-backup-poc.<domain>.com dump latest /C/Software/restic-backup/jobs/desktop-l0qamrb/2020-03-10-1302-restic-backup.json | jq
 {
   "message_type": "summary",
   "files_new": 0,
   "files_changed": 1,
   "files_unmodified": 12,
   "dirs_new": 0,
   "dirs_changed": 2,
   "dirs_unmodified": 3,
   "data_blobs": 1,
   "tree_blobs": 3,
   "data_added": 2839,
   "total_files_processed": 13,
   "total_bytes_processed": 30386991,
   "total_duration": 1.0223828,
   "snapshot_id": "e9531e66"
 }
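The select(.message_type=="summary") filter used for these logs can be checked offline against fake restic --json lines:

```shell
# Two fake restic --json records; select() keeps only the summary one.
out=$(printf '%s\n' \
  '{"message_type":"status","percent_done":0.5}' \
  '{"message_type":"summary","files_new":0}' \
  | jq -c 'select(.message_type=="summary")')
echo "$out"
```

restic emits one json object per line during a backup, so the summary is just the record whose message_type matches.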

Here is restic-backup.ps1. Note the hidden file for the restic variables and the encryption key, of course. I am doing forget/prune here, but that should really be a separate weekly job.

##################################################################
#Custom variables
. .\restic-keys.ps1
$DateStr = $(get-date -f yyyy-MM-dd-HHmm)
$server = $env:COMPUTERNAME.ToLower()
$logtop = "jobs"
$restichome = "C:\Software\restic-backup"
###################################################################

if ( -not (Test-Path -Path $restichome\${logtop}\${server} -PathType Container) ) 
{ 
   New-Item -ItemType directory -Path $restichome\${logtop}\${server} 
}

$jsonfilefull = ".\${logtop}\${server}\${DateStr}-restic-backup-full.json"
$jsonfilesummary = ".\${logtop}\${server}\${DateStr}-restic-backup.json"

.\restic.exe backup $restichome Y:\Docs\ --exclude $restichome\$logtop --tag prod --exclude 'omc\**' --quiet --json | Out-File ${jsonfilefull} -encoding ascii

#Get-Content ${jsonfilefull} | .\jq-win64.exe -r 'select(.message_type=="summary")' | Out-file ${jsonfilesummary} -encoding ascii
cat ${jsonfilefull} | Select-String -Pattern summary | Out-file ${jsonfilesummary} -encoding ascii -NoNewline
del ${jsonfilefull}

#.\rclone --config rclone.conf copy .\${logtop} s3_ash:restic-backup-logs
.\restic.exe backup $restichome\$logtop --tag logs --quiet

del ${jsonfilesummary}

.\restic forget -q --prune --keep-hourly 5 --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --keep-yearly 5

Comments Off on Restic PowerShell Script
comments

Mar 04

Restic recover OS

My test to recover an Ubuntu server OS from a backup.

Note the following:

  • I used Ubuntu 20.04 (focal) which is still beta at the time of this POC. In theory Ubuntu 18.04 should work the same or better.
  • For an OS recovery I documented the backup elsewhere. It was something like this for me; yours will vary of course:
    restic --exclude={/dev/,/media,/mnt/,/proc/,/run/,/sys/,/tmp/,/var/tmp/,/swapfile} backup / /dev/{console,null}
  • For the partition recovery I saved the partition table on the source server to a file for easy copy/paste during the recovery: sfdisk -d /dev/sda > /tmp/partition-table
  • I tested restic repos with both sftp and AWS S3.
  • Used a VirtualBox VM named u20.04-restic-os-restored. Made the recovery server disk 15G (5G larger than the original 10G where the backup was taken).
  • My POC consists of a very simple disk layout, i.e. one ext4 partition only, from a default Ubuntu 20.04 desktop install. Complicated boot disk layouts may be very different; I am not interested in recovering servers with complicated OS disk layouts. That does not fit with modern infrastructure and concepts like auto scaling: boot disks should be lean and easily recovered/provisioned through scripting, with configuration applied by configuration management tools.
  • Boot the liveCD, set the ubuntu user password, then install and start ssh so we can ssh in; copy/paste is easier that way.
  • Abbreviated commands (removed most output to shorten)
$ ssh ubuntu@192.168.1.160
$ sudo -i

# export AWS_ACCESS_KEY_ID=<secret..>
# export AWS_SECRET_ACCESS_KEY=<secret..>
# export RESTIC_PASSWORD=<secret..>
# export RESTIC_REPOSITORY=sftp:rr@192.168.1.111:/ARCHIVE/restic-os-restore-poc
# cd /usr/local/bin/
# wget https://github.com/restic/restic/releases/download/v0.9.6/restic_0.9.6_linux_amd64.bz2
# bzip2 -d restic_0.9.6_linux_amd64.bz2 
# mv restic_0.9.6_linux_amd64 restic
# chmod +x restic 
# mkdir /mnt/restore
# sfdisk /dev/sda < partition-table
# mkfs.ext4 /dev/sda1
# mkdir /mnt/restore/
# mount /dev/sda1 /mnt/restore/
# /usr/local/bin/restic snapshots
# time /usr/local/bin/restic restore latest -t /mnt/restore --exclude '/etc/fstab' --exclude '/etc/crypttab' --exclude '/boot/grub/grub.cfg' --exclude '/etc/default/grub'

# mount --bind /dev /mnt/restore/dev
# mount -t sysfs sys /mnt/restore/sys
# mount -t proc proc /mnt/restore/proc
# mount -t devpts devpts /mnt/restore/dev/pts
# mount -t tmpfs tmp /mnt/restore/tmp
# mount --rbind /run /mnt/restore/run
# mount -t tmpfs tmp /mnt/restore/tmp

# chroot /mnt/restore /bin/bash
# lsblk | grep sda
# grub-install /dev/sda
# update-grub
# blkid | grep sda

# UUID=$(blkid -s UUID -o value /dev/sda1)
# echo "UUID=$UUID / ext4    errors=remount-ro 0       1" > /etc/fstab

# sync
# exit
# init 0

Note:

The new server booted and worked, but the graphical (GNOME) login for the ubuntu account stalled. This fixed it: dconf reset -f /org/gnome/

My restic backup command works, but just for reference: since restic has no include flag, rsync seems to have better exclude/include functionality, with syntax like this: rsync --include=/dev/{console,null} --exclude={/dev/,/proc/,/sys/,/tmp/,/run/,/mnt/,/media/,/lost+found}

Comments Off on Restic recover OS
comments