Dec 07

ws4sql sqlite and duckdb expose

I tested exposing an existing sqlite database of mine through a web service, which worked well. Here are some details on my setup.

docker compose file

  ws4sql_sqlite:
    image: ghcr.io/proofrock/ws4sql:latest
    container_name: ws4sql_sqlite_container
    ports:
      # Map the container's default port (12321) to the host machine's port 12321
      - "12321:12321"
    volumes:
      # Mount a local directory './data' to the container's '/data' volume
      - /stacks/ws_sqlite_data:/data
    restart: unless-stopped
    command: ["-db", "/data/students.yaml"]
    #command: ["--file", "/data/students.sqlite"]

config file

$ cat ../ws_sqlite_data/students.yaml 
database:
  type: SQLITE          # SQLITE or DUCKDB. If omitted, defaults to SQLITE      
  inMemory: false       # If type = SQLITE|DUCKDB. The db is a memory one? If omitted, defaults to false
  path: "/data/students.sqlite"   # If type = SQLITE|DUCKDB. The db file path.
  id: students   # If omitted and !inMemory, calculates it from the file name (if type = SQLITE|DUCKDB)
  disableWALMode: false # If type = SQLITE. Same as before, but moved here.
  readOnly: false       # Same as before, but moved here.

running container

$ docker compose ps
NAME                      IMAGE                             COMMAND                  SERVICE         CREATED             STATUS                       PORTS
...
ws4sql_sqlite_container   ghcr.io/proofrock/ws4sql:latest   "/ws4sql -db /data/s…"   ws4sql_sqlite   10 minutes ago      Up 2 minutes                 0.0.0.0:12321->12321/tcp, [::]:12321->12321/tcp

container log

                __ __             __
 _      _______/ // / _________ _/ /
| | /| / / ___/ // /_/ ___/ __ `/ /
| |/ |/ (__  )__  __(__  ) /_/ / /
|__/|__/____/  /_/ /____/\__, /_/
                           /_/ v0.17dev7
+ sqlite v3.50.3
+ duckdb v0.0.1
- Parsing config file: /data/students.yaml
  + Serving database 'students'
  + Using WAL
- Web Service listening on 0.0.0.0:12321

example select

$ cat /tmp/students.json
{
    "transaction": [
        {
            "query": "SELECT * FROM students"
        }
    ]
}

$ curl -s -X POST -H "Content-Type: application/json" -d @/tmp/students.json http://localhost:12321/students | jq
{
  "results": [
    {
      "success": true,
      "resultHeaders": [
        "id",
        "first_name",
        "last_name",
        "age"
      ],
      "resultSet": [
        {
          "id": 101,
          "first_name": "John",
          "last_name": "Doe",
          "age": 20
        }
      ]
    }
  ]
}

multiple statements in one transaction

$ cat /tmp/students.json
{
    "transaction": [
        {
            "statement": "INSERT INTO students (id, first_name, last_name, age) VALUES (:id, :val1, :val2, :val3)",
            "values": { "id": 1, "val1": "Jane", "val2": "Doe", "val3": 40 }
        },
        {
            "query": "SELECT * FROM students"
        }
    ]
}

$ curl -s -X POST -H "Content-Type: application/json" -d @/tmp/students.json http://localhost:12321/students | jq
{
  "results": [
    {
      "success": true,
      "rowsUpdated": 1
    },
    {
      "success": true,
      "resultHeaders": [
        "id",
        "first_name",
        "last_name",
        "age"
      ],
      "resultSet": [
        {
          "id": 1,
          "first_name": "Jane",
          "last_name": "Doe",
          "age": 40
        },
        {
          "id": 101,
          "first_name": "John",
          "last_name": "Doe",
          "age": 20
        }
      ]
    }
  ]
}

references
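
https://github.com/proofrock/ws4sql (the project behind the ghcr.io image used above)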


Dec 01

Gnuplot json data

Quick and dirty way to get a graph of your json series.

My data is OTA channel test data, and the challenge is to massage it into a CSV. Based on the jq below, mine turned out like this.

$ cat generated.csv 
2025-11-29T22:50:46.9215829Z,100
2025-11-29T22:51:45.1381811Z,100
2025-11-30T17:55:34.2705405Z,100
2025-11-30T21:48:26.7645935Z,100

My script

$ cat generate_metrics.sh
#!/usr/bin/env bash

# NOTE: I got test.json by running dos2unix on the actual json and adding an opening
#       and closing list [] plus comma-separating the items. Need to improve this.

#CHANNEL="KRIV-DT"
CHANNEL=$1

#LOG_FILE="./metrics.log"
CSV_FILE="./generated.csv"
GNUPLOT_SCRIPT_FILE="./gnuplot_script.gp"
GNUPLOT_GRAPHIC_FILE="./metrics/${CHANNEL}_metrics.png"

function generateCsvFile {
  #cat $LOG_FILE | jq -r 'to_entries|map(.value)|@csv' | tr -d '"' > $CSV_FILE
  cat test.json | jq --arg channel "${CHANNEL}" -r '.[] | {"time": .timestamp , "tuners": .tuners[] | select(.name==$channel)} | "\(.time),\(.tuners.seq)"' > $CSV_FILE
}

function generateGraphic {
  gnuplot -e "csv_file_path='$CSV_FILE'" -e "graphic_file_name='$GNUPLOT_GRAPHIC_FILE'" $GNUPLOT_SCRIPT_FILE
}

generateCsvFile
generateGraphic
exit
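
The gnuplot script itself isn't shown above. Here is a minimal sketch of what gnuplot_script.gp could look like for this CSV (assuming the timestamp in column 1 and seq in column 2; gnuplot ignores the fractional seconds and Z suffix past the timefmt pattern):

# gnuplot_script.gp -- minimal sketch; csv_file_path and
# graphic_file_name are passed in via -e from generate_metrics.sh
set datafile separator ","
set terminal pngcairo size 1024,480
set output graphic_file_name
set xdata time
set timefmt "%Y-%m-%dT%H:%M:%S"
set format x "%m/%d %H:%M"
set xtics rotate by -45
set ylabel "seq"
plot csv_file_path using 1:2 with linespoints title "seq"

As for the TODO in the script: if the raw log is a stream of JSON objects, jq's slurp flag builds the array without the hand-editing:

$ jq -s '.' metrics.log > test.json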


Oct 01

Neovim and LazyVim Keymap issue

I have tried Neovim a few times in the past, but quit abruptly as soon as I was reminded that Shift-Y followed by p does not put the yanked line on a new line, like vi has done for 30 years, and instead inserts the text somewhere in my current line! (Neovim maps Y to y$ by default, so Y yanks to the end of the line rather than the whole line.)

Well, I finally found a fix for it, and I am noting it here to avoid future frustration.

~ tail -1 .config/nvim/lua/config/keymaps.lua 
vim.keymap.set("n", "Y", "yy", { desc = "Fix the messed up Yank and Put" })
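
If you want to confirm what is shadowing the old behavior before overriding it, this shows the current mapping and where it was set:

:verbose nmap Y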


Dec 17

AWS DynamoDB and special characters

I recently had some issues with AWS DDB and using hyphens. It is highly recommended to steer clear of special characters in keys. But I also did not expect that an index would give me an even worse problem.

I was getting this error.

botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the Query operation: 1 validation error detected: Value ' total-time-logtime-index' at 'indexName' failed to satisfy constraint: Member must satisfy regular expression pattern: [a-zA-Z0-9_.-]+

I renamed the index to totaltime-logtime-index and it worked. (Looking closer, the pattern [a-zA-Z0-9_.-]+ does allow hyphens; the reported value has a leading space, which is likely what actually failed the constraint.)
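
For context, a minimal sketch of the kind of call that was failing (the table and key names here are hypothetical, not my real schema):

import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical table/attribute names for illustration
table = boto3.resource("dynamodb").Table("my-table")

resp = table.query(
    IndexName="totaltime-logtime-index",  # renamed index, no stray space
    KeyConditionExpression=Key("totaltime").eq(42),
)
print(resp["Items"])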


Dec 02

Terraform AWS Param Store with json

Since it took me a while to get this syntax sorted, I am noting it for future use.

NOTE: These are non-sensitive parameters, not secrets. You are probably better off using Secrets Manager for secrets. And I did not try it, but the Param Store SecureString type will probably not work for the below.

I stored this json in Param Store as a StringList with this Value:

{"email":"riaan@email.com","engine_version":"3.11.20","host_instance_type":"mq.t3.micro"}

test

❯ aws ssm get-parameter --name "/microservice/blah/development" --region us-east-1 | jq
{
  "Parameter": {
    "Name": "/microservice/blah/development",
    "Type": "StringList",
    "Value": "{\"email\":\"riaan@email.com\",\"engine_version\":\"3.11.20\",\"host_instance_type\":\"mq.t3.micro\"}",
    "Version": 6,
    "LastModifiedDate": "2023-12-01T08:53:33.920000-06:00",
    "ARN": "arn:aws:ssm:us-east-1:xxx:parameter/microservice/blah/development",
    "DataType": "text"
  }
}

get it to a local var

data "aws_ssm_parameter" "cfg" {
  provider = aws.target1
  name = "/microservice/blah/development"
}

locals {
  cfg = jsondecode(data.aws_ssm_parameter.cfg.value)
}

in terraform, reference it like this

  #engine_version = "3.11.20"
  engine_version = local.cfg.engine_version
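
Putting it together, a sketch of how the decoded values might feed a broker resource (the aws_mq_broker resource and the remaining arguments are hypothetical; only the local.cfg references come from the setup above):

resource "aws_mq_broker" "example" {
  broker_name        = "blah-development"   # hypothetical name
  engine_type        = "RabbitMQ"
  engine_version     = local.cfg.engine_version
  host_instance_type = local.cfg.host_instance_type

  user {
    username = "admin"                      # hypothetical credentials
    password = var.mq_admin_password
  }
}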


Nov 20

php stdClass Object vs Array

Sometimes you may prefer handling your data like an object instead of an array. I prefer object style when I am working with code handling json from APIs, etc.

So instead of the $my_array["field1"] notation you can handle it as $my_array->field1.

It's pretty simple: just cast your array variable like this:

$data = (object) $my_array;
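
One caveat: the cast is shallow, so nested arrays stay arrays. A small sketch, including the common json round-trip trick for a deep conversion:

<?php
$my_array = ["field1" => "hello", "nested" => ["a" => 1]];

// Shallow: the top level becomes stdClass, nested arrays stay arrays
$data = (object) $my_array;
echo $data->field1;        // hello
echo $data->nested["a"];   // 1 (nested is still an array)

// Deep: a json round trip converts every level to stdClass
$deep = json_decode(json_encode($my_array));
echo $deep->nested->a;     // 1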

Better write up here:
https://www.webmaster-source.com/2009/08/20/php-stdclass-storing-data-object-instead-array/


Aug 19

Pulumi AWS Test

If you are familiar with Infrastructure as Code (think Terraform) this will sound familiar to you. I wanted to do a very quick test before reading more about it.

Their website tag line is "Infrastructure as code in any programming language"

copy the example files manually

➜ ls
main.py NOTES.md Pulumi.website-testing.yaml Pulumi.yaml __pycache__ requirements.txt venv www
➜ ls www
favicon.png index.html python.png
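
For reference, a sketch of what the example's main.py does (reconstructed from the resources in the preview below, not a verbatim copy of Pulumi's aws-py-s3-folder example):

"""Publish ./www as a public S3 static website (sketch)."""
import json
import mimetypes
import os

import pulumi
from pulumi_aws import s3

# Bucket configured to serve a static website
web_bucket = s3.Bucket("s3-website-bucket",
                       website=s3.BucketWebsiteArgs(index_document="index.html"))

# Upload every file in ./www with a guessed content type
for file in os.listdir("www"):
    filepath = os.path.join("www", file)
    mime_type, _ = mimetypes.guess_type(filepath)
    s3.BucketObject(file,
                    bucket=web_bucket.id,
                    source=pulumi.FileAsset(filepath),
                    content_type=mime_type)

# Lift the public access block for this bucket, then allow public reads
public_access_block = s3.BucketPublicAccessBlock("public-access-block",
                                                 bucket=web_bucket.id,
                                                 block_public_acls=False,
                                                 block_public_policy=False)

bucket_policy = s3.BucketPolicy("bucket-policy",
    bucket=web_bucket.id,
    policy=web_bucket.id.apply(lambda bid: json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": [f"arn:aws:s3:::{bid}/*"],
        }]})),
    opts=pulumi.ResourceOptions(depends_on=[public_access_block]))

pulumi.export("bucket_name", web_bucket.id)
pulumi.export("website_url", web_bucket.website_endpoint)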

using venv for python

➜ source venv/bin/activate
➜ pip install -r requirements.txt
Collecting pulumi<4.0.0,>=3.5.1 (from -r requirements.txt (line 1))
...

NOTE:

  1. Remember your config passphrase in the next step.
  2. This works because I have a default profile in my AWS config; I should rather configure a specific profile.

up

➜ pulumi up
Enter your passphrase to unlock config/secrets
(set PULUMI_CONFIG_PASSPHRASE or PULUMI_CONFIG_PASSPHRASE_FILE to remember):
Previewing update (website-testing):
     Type                                    Name                              Plan
 +   pulumi:pulumi:Stack                     aws-py-s3-folder-website-testing  create
 +   ├─ aws:s3:Bucket                        s3-website-bucket                 create
 +   ├─ aws:s3:BucketPublicAccessBlock       public-access-block               create
 +   ├─ aws:s3:BucketObject                  index.html                        create
 +   ├─ aws:s3:BucketObject                  favicon.png                       create
 +   ├─ aws:s3:BucketObject                  python.png                        create
 +   └─ aws:s3:BucketPolicy                  bucket-policy                     create

Outputs:
    bucket_name: output<string>
    website_url: output<string>

Resources:
    + 7 to create

Do you want to perform this update? yes
Updating (website-testing):
     Type                                    Name                              Status
 +   pulumi:pulumi:Stack                     aws-py-s3-folder-website-testing  created (4s)
 +   ├─ aws:s3:Bucket                        s3-website-bucket                 created (3s)
 +   ├─ aws:s3:BucketPublicAccessBlock       public-access-block               created (0.53s)
 +   ├─ aws:s3:BucketObject                  index.html                        created (0.58s)
 +   ├─ aws:s3:BucketObject                  favicon.png                       created (0.64s)
 +   ├─ aws:s3:BucketObject                  python.png                        created (0.72s)
 +   └─ aws:s3:BucketPolicy                  bucket-policy                     created (0.14s)

check

via 💠 default on ☁️ (us-east-1)
➜ aws s3 ls | grep s3-website
2023-08-19 11:04:21 s3-website-bucket-11a088d


Aug 16

traefik in kubernetes with AWS ALB

This is a quick update. In the post https://blog.ls-al.com/traefik-in-kubernetes-using-terraform-helm-and-aws-alb/ I had a TODO to work on the AWS ALB health check. This is the fix for that.

NOTE: compared to my previous values file I:

  1. added --ping and --ping.entrypoint=web
  2. added ports.traefik.healthchecksPort
  3. set websecure expose: false; not related to the health check, just something I did not need exposed since SSL offloading happens on the LB

helm values ping entrypoint

additionalArguments:
- --providers.kubernetescrd.ingressclass=traefik-pub3
- --ping
- --ping.entrypoint=web

# READ THIS: https://blog.ttauveron.com/posts/traefik_behind_google_l7_load_balancer/
ports:
  traefik:
    healthchecksPort: 8000
  websecure:
    expose: false
  ...

annotations in helm values file

alb.ingress.kubernetes.io/healthcheck-path: "/ping"
alb.ingress.kubernetes.io/healthcheck-port: "traffic-port"
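
To verify ping answers on the web entrypoint before the ALB probes it, you can port-forward to the web container port (a sketch; 8000 is the chart's default container port for web, and the deployment name/namespace here are placeholders):

$ kubectl -n test port-forward deploy/traefik-pub3 8000:8000 &
$ curl -s http://localhost:8000/ping
OK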


Aug 14

traefik in kubernetes using terraform + helm and AWS ALB

If you are using Traefik in kubernetes but you want to use an AWS ALB (application load balancer), this recipe may work for you. You will note a few important things:

  1. Traefik relies on the underlying kubernetes cloud provider to create a load balancer for its Service; if nothing is specified, on AWS this will be a CLB (classic load balancer). There is a way to make this an NLB (network load balancer), but the AWS provider will not create an ALB, so Traefik can't give you an ALB directly. This recipe therefore relies on a NodePort service and ties the Ingress (ALB) to the NodePort service via the ingressclass annotation. If you do not like or want to use NodePort, this is not for you.
  2. Yet to confirm: can this recipe work if you did not intentionally install the AWS LBC (load balancer controller), and does it work on non-EKS or self-managed kubernetes on AWS?
  3. Still looking at why the AWS Target Group health check is not able to use /ping or /dashboard. This may be an issue with my security groups, but for now I just manually created an IngressRoute /<>-health on the Traefik web entrypoint (a sketch follows after this list) and updated the Target Group health check either programmatically or in the AWS console.
  4. I did not want to complicate this with kubernetes auth, so I am using the simplest way for helm to communicate with the cluster: pointing at the environment's kube config.
  5. I did some minimal templating to change the helm release name and corresponding kubernetes objects, but for this post I just hard-coded values for simplicity.
  6. I commented out the DaemonSet deployment for my testing. You need to decide which is better in your environment: Deployment or DaemonSet.
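
Here is a sketch of that manual health-check IngressRoute from point 3 (the path matches the healthcheck-path annotation in the values below; the backend service name is a hypothetical placeholder for anything that answers 200, and newer CRDs use apiVersion traefik.io/v1alpha1):

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-pub-health
  annotations:
    kubernetes.io/ingress.class: traefik-pub
spec:
  entryPoints:
    - web
  routes:
    - match: PathPrefix(`/traefik-pub-health`)
      kind: Rule
      services:
        - name: some-backend   # hypothetical: any service that returns 200
          port: 80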

providers.tf

provider "helm" {
  kubernetes {
    config_path = "~/.kube/config-eks"
  }
}

versions.tf

terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.0.1"
    }
  }
  required_version = ">= 0.15"
}

helm values

additionalArguments:
- --providers.kubernetescrd.ingressclass=traefik-pub

#deployment:
#  kind: DaemonSet

service:
  enabled: true
  type: NodePort

extraObjects:
  - apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: traefik-pub
      annotations:
        kubernetes.io/ingress.class: alb
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/security-groups: sg-,
        #alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig":
        #  { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
        alb.ingress.kubernetes.io/backend-protocol: HTTP
        alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1::certificate/b6ead273-66e9-4768-ad25-0924dca35cdb
        alb.ingress.kubernetes.io/healthcheck-path: "/traefik-pub-health"
        alb.ingress.kubernetes.io/healthcheck-port: "traffic-port"
        #alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
        alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    spec:
      defaultBackend:
        service:
          name: traefik-pub
          port:
            number: 80  

ingressClass:
  enabled: true
  isDefaultClass: false

ingressRoute:
  dashboard:
    enabled: true
    # Additional ingressRoute annotations (e.g. for kubernetes.io/ingress.class)
    annotations:
      kubernetes.io/ingress.class: traefik-pub
    # Additional ingressRoute labels (e.g. for filtering IngressRoute by custom labels)
    labels: {}
    entryPoints:
    - traefik
    matchRule: PathPrefix(`/dashboard`) || PathPrefix(`/api`)
    middlewares: []
    tls: {}

rollingUpdate:
  maxUnavailable: 1
  maxSurge: 1

variables.tf (shortened for documentation)

variable "traefik_name" {
  description = "helm release name"
  type        = string
  default     = "traefik-pub"
}

variable "namespace" {
  description = "Namespace to install traefik chart into"
  type        = string
  default     = "test"
}

variable "traefik_chart_version" {
  description = "Version of Traefik chart to install"
  type        = string
  default     = "21.2.1"
}

chart.tf

resource "helm_release" "traefik" {
  namespace        = var.namespace
  create_namespace = true
  name             = var.traefik_name
  repository       = "https://traefik.github.io/charts"
  chart            = "traefik"
  version          = var.traefik_chart_version
  timeout = var.timeout_seconds

  values = [
    file("values.yml")
  ]

  set {
    name  = "deployment.replicas"
    value = var.replica_count
  }

}


Jul 11

Linux command line pass tool gpg key expired

Linux command line password manager shows gpg key expired.

➜ pass email/myemail@domain
gpg: Note: secret key 44FAE32FE619A7CF expired at Sun 29 May 2022 08:26:37 AM CDT

You can renew it at any time. Here's how to do it:

gpg --list-keys
gpg --edit-key (key id)

Now you're in the gpg console. (By default, you're working on the primary key.) If you need to update a sub-key:

gpg> key 1

Now you can set the expiration for the selected key:

gpg> expire
gpg> save
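
Putting it together with the key from the error above, a session looks roughly like this (the prompts are paraphrased from gpg's output):

$ gpg --edit-key 44FAE32FE619A7CF
gpg> expire
Changing expiration time for the primary key.
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 1y
gpg> save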
