Category: Kubernetes

Aug 16

traefik in kubernetes with AWS ALB

This is a quick update. In the post at https://blog.ls-al.com/traefik-in-kubernetes-using-terraform-helm-and-aws-alb/ I had a TODO to work on the AWS ALB health check; this is the fix for that.

NOTE: compared to my previous values file:

  1. added --ping and --ping.entrypoint=web
  2. added healthchecksPort under ports.traefik
  3. set expose: false on websecure; not related to the health check, just something I did not need exposed since SSL offloading happens on the LB

helm values ping entrypoint

additionalArguments:
- --providers.kubernetescrd.ingressclass=traefik-pub3
- --ping
- --ping.entrypoint=web

# READ THIS: https://blog.ttauveron.com/posts/traefik_behind_google_l7_load_balancer/
ports:
  traefik:
    healthchecksPort: 8000
  websecure:
    expose: false
  ...

annotations in helm values file

alb.ingress.kubernetes.io/healthcheck-path: "/ping"
alb.ingress.kubernetes.io/healthcheck-port: "traffic-port"
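
To sanity-check the endpoint before the ALB does, hitting /ping on the web entrypoint's NodePort should return 200 OK. The node IP and port below are placeholders; look yours up first:

# placeholders: substitute a real node IP and the web NodePort
# (kubectl get nodes -o wide; kubectl get svc)
curl -i http://10.0.1.23:30080/ping
# HTTP/1.1 200 OK
# OK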


Aug 14

traefik in kubernetes using terraform + helm and AWS ALB

If you are using Traefik in kubernetes but want to use an AWS ALB (application load balancer), this recipe may work for you. Note a few important things:

  1. Traefik's Helm chart relies on the underlying kubernetes cloud provider to create a load balancer for its Service. If not specified this will be a CLB (classic load balancer). There is a way to make it an NLB (network load balancer), but the AWS provider does not create an ALB, so Traefik can't get an ALB that way. This recipe therefore relies on a NodePort service and ties the Ingress (ALB) to the NodePort service via the ingressclass annotation. If you do not like or want to use NodePort, this is not for you.
  2. Yet to confirm: whether this recipe works if you did not intentionally install the AWS LBC (load balancer controller), and whether it works on non-EKS AWS or self-managed kubernetes on AWS.
  3. Still looking at why the AWS Target Group health check is not able to use /ping or /dashboard. This may be an issue with my security groups, but for now I just manually created an IngressRoute /<>-health on the Traefik web entrypoint (see the sketch after this list) and updated the Target Group health check either programmatically or in the AWS console.
  4. I did not want to complicate this with the Terraform kubernetes provider, so I am using the simplest way for helm to communicate with the cluster: pointing at the environment's kube config.
  5. I did some minimal templating to change the helm release name and corresponding kubernetes objects, but for this post I just hard-coded things for simplicity.
  6. I commented out running the deployment as a DaemonSet for my testing. You need to decide what is better in your environment: Deployment or DaemonSet.
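
For reference, the sketch below is roughly what such a manual health route can look like. It is a sketch, not my exact object: backing the route with Traefik's internal ping service is an assumption (it requires --ping, which is what the follow-up post above ends up enabling), and the path matches the healthcheck-path annotation in the values further down.

# sketch: IngressRoute answering the ALB health check on the web entrypoint
# ping@internal only exists when Traefik runs with --ping (assumption)
kubectl apply -f - <<'EOF'
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-pub-health
  annotations:
    kubernetes.io/ingress.class: traefik-pub
spec:
  entryPoints:
    - web
  routes:
    - match: Path(`/traefik-pub-health`)
      kind: Rule
      services:
        - name: ping@internal
          kind: TraefikService
EOF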

providers.tf

provider "helm" {
  kubernetes {
    config_path = "~/.kube/config-eks"
  }
}

versions.tf

terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.0.1"
    }
  }
  required_version = ">= 0.15"
}

helm values

additionalArguments:
- --providers.kubernetescrd.ingressclass=traefik-pub

#deployment:
#  kind: DaemonSet

service:
  enabled: true
  type: NodePort
extraObjects:
  - apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: traefik-pub
      annotations:
        kubernetes.io/ingress.class: alb
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/security-groups: sg-,
        #alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig":
        #  { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
        alb.ingress.kubernetes.io/backend-protocol: HTTP
        alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1::certificate/b6ead273-66e9-4768-ad25-0924dca35cdb
        alb.ingress.kubernetes.io/healthcheck-path: "/traefik-pub-health"
        alb.ingress.kubernetes.io/healthcheck-port: "traffic-port"
        #alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
        alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    spec:
      defaultBackend:
        service:
          name: traefik-pub
          port:
            number: 80  

ingressClass:
  enabled: true
  isDefaultClass: false

ingressRoute:
  dashboard:
    enabled: true
    # Additional ingressRoute annotations (e.g. for kubernetes.io/ingress.class)
    annotations:
      kubernetes.io/ingress.class: traefik-pub
    # Additional ingressRoute labels (e.g. for filtering IngressRoute by custom labels)
    labels: {}
    entryPoints:
    - traefik
    matchRule: PathPrefix(`/dashboard`) || PathPrefix(`/api`)
    middlewares: []
    tls: {}

rollingUpdate:
  maxUnavailable: 1
  maxSurge: 1
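
Once applied, the AWS LBC should provision the ALB and stamp its DNS name on the Ingress. A quick check, with names matching the variables below (the address here is elided):

kubectl -n test get ingress traefik-pub
# NAME          CLASS    HOSTS   ADDRESS                    PORTS   AGE
# traefik-pub   <none>   *       k8s-...elb.amazonaws.com   80      2m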

variables.tf (shortened for documentation)

variable "traefik_name" {
  description = "helm release name"
  type        = string
  default     = "traefik-pub"
}

variable "namespace" {
  description = "Namespace to install traefik chart into"
  type        = string
  default     = "test"
}

variable "traefik_chart_version" {
  description = "Version of Traefik chart to install"
  type        = string
  default     = "21.2.1"
}

chart.tf

resource "helm_release" "traefik" {
  namespace        = var.namespace
  create_namespace = true
  name             = var.traefik_name
  repository       = "https://traefik.github.io/charts"
  chart            = "traefik"
  version          = var.traefik_chart_version
  timeout          = var.timeout_seconds

  values = [
    file("values.yml")
  ]

  set {
    name  = "deployment.replicas"
    value = var.replica_count
  }

}
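
To run it, something like this works, assuming the variables omitted from the shortened variables.tf (replica_count, timeout_seconds) are declared:

terraform init
terraform apply -var replica_count=2 -var timeout_seconds=600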


Dec 04

kubectl export

Since kubectl get --export is deprecated, it is possible to do something like this instead.

WARNING: I have not tested this

#!/bin/bash
d=$(date +%Y%m%d)
# date-stamped backup file
BACKUP_TARGET="/TANK/ARCHIVE/argocd-backups/argocd_backup_yaml-$d"
# strip cluster-generated fields from every item in each returned list
FILTER='del(.items[].metadata.resourceVersion,.items[].metadata.uid,.items[].metadata.selfLink,.items[].metadata.creationTimestamp,.items[].metadata.annotations,.items[].metadata.generation,.items[].metadata.ownerReferences,.items[].status)'
: > "$BACKUP_TARGET"
for kind in cm secrets app appproj; do
  echo "---" >> "$BACKUP_TARGET"   # YAML document separator
  kubectl -n argocd get "$kind" -o json | jq "$FILTER" | yq eval . --prettyPrint >> "$BACKUP_TARGET"
done
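
Restoring should be a matter of applying the cleaned file back, though like the script itself I have not tested this:

kubectl -n argocd apply -f "$BACKUP_TARGET"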


Oct 16

VirtualBox Host-Only Networking Change

In case this saves someone hours of frustration: I recently tried to dust off an old kubernetes POC running on VirtualBox VMs. I could not get anything to work right until I realized that VirtualBox, somewhere in v6.x, started to ONLY support 192.168.56.0/21 for its host-only networks. Even though my old vboxnets were still there and even configurable!

https://www.virtualbox.org/manual/ch06.html#network_hostonly

My kubernetes POC had a primary NAT and a secondary host-only network. Even after I fixed the networking I still had to re-initialize my cluster and lost all my POC work, but at least this may point you in the right direction. To allow my 172.20.0.0/16 network I added this to the config file:

# cat /etc/vbox/networks.conf
* 172.20.0.0/16 192.168.0.0/16
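
After editing the file, the host-only interface can be given an address in the newly allowed range again. A sketch with VBoxManage; vboxnet0 is an assumption, list yours first:

# VBoxManage list hostonlyifs   # find the interface name
VBoxManage hostonlyif ipconfig vboxnet0 --ip 172.20.0.1 --netmask 255.255.0.0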


Sep 15

Kubernetes NodePort Load Balancing with nginx

Mostly this is done in a cloud environment where Kubernetes is integrated with the cloud's load balancers and you expose kubernetes services as type LoadBalancer.

However, I wanted to do this without cloud, in my VirtualBox environment. It's not ideal, and I wish nginx could append a port when proxy_pass points at an upstream group.

My configuration is not ideal and does not scale well, but it is working so far in my POC, so I am documenting it for future reference.

NOTE: I did not test whether the upstream actually fails over, but that is well documented for nginx so I trust it works. You could of course change the upstream mechanism to round-robin, least-connected, or ip-hash.

user www-data;
worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

http {
   map $host $serverport {
     "hello.cluster01.local"   "30000";
     "web01.cluster01.local"   "30001";
     "web02.cluster01.local"   "30002";
     default      "no_match";
   }

   upstream hello.cluster01.local-30000 {
      server 172.20.100.10:30000; 
      server 172.20.100.11:30000; 
   }

   upstream web01.cluster01.local-30001 {
      server 172.20.100.10:30001;
      server 172.20.100.11:30001;
   }

   upstream web02.cluster01.local-30002 {
      server 172.20.100.10:30002;
      server 172.20.100.11:30002;
   }

  server {
    listen 80;
    server_name "~(.*).cluster01.local";
    set $upstream $host-$serverport; 
    location / {
      proxy_set_header X-Forwarded-For $remote_addr;
      # if not load balancing, pointing to one node like below is fine
      #proxy_pass http://172.20.100.10:$serverport;
      # with an upstream you can't add a port, hence one upstream per service
      #proxy_pass http://backend:$serverport;
      proxy_pass http://$upstream;
      proxy_set_header Host $host;
    }
  }
}
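
A quick test of each mapping from another machine, assuming the nginx box answers on 172.20.100.1 (a placeholder) and DNS for the names is not set up yet:

# the Host header drives both the map lookup and the upstream choice
curl -H 'Host: hello.cluster01.local' http://172.20.100.1/
curl -H 'Host: web01.cluster01.local' http://172.20.100.1/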


May 27

Kubernetes Development with MicroK8s

Using Ubuntu's MicroK8s Kubernetes environment to test an Nginx container with a NodePort, and also an Ingress so we can access it from another machine.

install

$ sudo snap install microk8s --classic
microk8s v1.18.2 from Canonical✓ installed

$ sudo usermod -a -G microk8s rrosso

$ microk8s.kubectl get all --all-namespaces
NAMESPACE   NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.152.183.1   <none>        443/TCP   2m29s

$ microk8s.kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
server1   Ready    <none>   3m    v1.18.2-41+b5cdb79a4060a3

$ microk8s.enable dns dashboard
...

$ watch microk8s.kubectl get all --all-namespaces

NOTE: alias the command

$ sudo snap alias microk8s.kubectl kubectl
Added:
  - microk8s.kubectl as kubectl

nginx first attempt

$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

$ kubectl get deployments
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           9s

$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-f89759699-jnlng   1/1     Running   0          15s

$ kubectl get all --all-namespaces

NAMESPACE     NAME                                                  READY   STATUS    RESTARTS   AGE
default       pod/nginx-f89759699-jnlng                             1/1     Running   0          31s
...
NAMESPACE     NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes                  ClusterIP   10.152.183.1     <none>        443/TCP                  94m
...
NAMESPACE     NAME                                             READY   UP-TO-DATE   AVAILABLE   AGE
default       deployment.apps/nginx                            1/1     1            1           31s
kube-system   deployment.apps/coredns                          1/1     1            1           90m
...

NAMESPACE     NAME                                                        DESIRED   CURRENT   READY   AGE
default       replicaset.apps/nginx-f89759699                             1         1         1       31s
kube-system   replicaset.apps/coredns-588fd544bf                          1         1         1       90m
...

$ kubectl get all --all-namespaces
NAMESPACE     NAME                                                  READY   STATUS    RESTARTS   AGE
default       pod/nginx-f89759699-jnlng                             1/1     Running   0          2m38s
...
NAMESPACE     NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
...
NAMESPACE     NAME                                             READY   UP-TO-DATE   AVAILABLE   AGE
default       deployment.apps/nginx                            1/1     1            1           2m38s

NAMESPACE     NAME                                                        DESIRED   CURRENT   READY   AGE
default       replicaset.apps/nginx-f89759699                             1         1         1       2m38s
...

$ wget 10.152.183.151
--2020-05-25 14:26:14--  http://10.152.183.151/
Connecting to 10.152.183.151:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2020-05-25 14:26:14 ERROR 404: Not Found.

$ kubectl get deployments
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           3m40s

$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-f89759699-jnlng   1/1     Running   0          3m46s

$ microk8s.kubectl expose deployment nginx --port 80 --target-port 80 --type ClusterIP --selector=run=nginx --name nginx
service/nginx exposed

$ microk8s.kubectl get all
NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-f89759699-jnlng   1/1     Running   0          9m29s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.152.183.1    <none>        443/TCP   103m
service/nginx        ClusterIP   10.152.183.55   <none>        80/TCP    3m55s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           9m29s

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-f89759699   1         1         1       9m29s

$ wget 10.152.183.55
--2020-05-25 14:33:02--  http://10.152.183.55/
Connecting to 10.152.183.55:80... failed: Connection refused.

NOTE: Kubernetes does not provide a load balancer itself; load balancers are assumed to be an external component, and MicroK8s does not ship one. Even if it did, there is only one node here, so there would be nothing to balance across and no way to assign an (LB-provided) external IP to a service. To expose a service on the host, use the NodePort service type. (The connection refused above is likely also down to my --selector=run=nginx not matching the deployment's app=nginx label, leaving the service with no endpoints; in attempt 2 below kubectl expose picks the selector itself.)

nginx attempt 2

$ kubectl delete services nginx-service
service "nginx-service" deleted

$ kubectl delete deployment nginx
deployment.apps "nginx" deleted

$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

$ kubectl expose deployment nginx --type NodePort --port=80 --name nginx-service
service/nginx-service exposed

$ kubectl get all
NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-f89759699-jr4gz   1/1     Running   0          23s

NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes      ClusterIP   10.152.183.1     <none>        443/TCP        19h
service/nginx-service   NodePort    10.152.183.229   <none>        80:30856/TCP   10s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           23s

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-f89759699   1         1         1       23s

$ wget 10.152.183.229
Connecting to 10.152.183.229:80... connected.
HTTP request sent, awaiting response... 200 OK
2020-05-26 08:05:22 (150 MB/s) - ‘index.html’ saved [612/612]
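
The ClusterIP above only answers on the MicroK8s host itself; from another machine the same service should answer on the host's IP at the NodePort (30856 above; host IP as in the ingress note further down):

$ curl -i http://192.168.1.112:30856/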

ingress

$ cat ingress-nginx.yaml 
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: http-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx-service
          servicePort: 80

$ kubectl apply -f ingress-nginx.yaml 
ingress.networking.k8s.io/http-ingress created

NOTE: https://192.168.1.112/ pulls up Nginx homepage


next

  • persistent storage test
