{"id":1846,"date":"2021-09-15T16:04:24","date_gmt":"2021-09-15T21:04:24","guid":{"rendered":"https:\/\/blog.iqonda.net\/?p=1846"},"modified":"2021-09-15T16:09:26","modified_gmt":"2021-09-15T21:09:26","slug":"kubernetes-nodeport-load-balancing-with-nginx","status":"publish","type":"post","link":"https:\/\/blog.ls-al.com\/kubernetes-nodeport-load-balancing-with-nginx\/","title":{"rendered":"Kubernetes NodePort Load Balancing with nginx"},"content":{"rendered":"

Typically this is done in a cloud environment where Kubernetes is integrated with the provider's load balancers and you expose Kubernetes Services as type LoadBalancer.<\/p>\n
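
For reference, a NodePort Service for the hello app in this setup might look something like the following sketch. The name, selector labels and targetPort are assumptions; only nodePort 30000 corresponds to the nginx config later in this post.<\/p>\n

apiVersion: v1\nkind: Service\nmetadata:\n  name: hello\nspec:\n  type: NodePort\n  selector:\n    app: hello\n  ports:\n  - port: 80\n    targetPort: 8080\n    nodePort: 30000<\/code><\/pre>\n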

However, I wanted to do this without a cloud provider, in my VirtualBox environment. It's not ideal, and I wish nginx could append a port when using proxy_pass pointing to an upstream. <\/p>\n

My configuration is not ideal and does not scale well, but I am using it in a POC and it is working so far, so I am documenting it for future reference.<\/p>\n

NOTE: I did not test whether the upstream fails over, but that is well documented for nginx, so I trust it works. You could of course change the upstream balancing method to round-robin (the default), least-connected (least_conn) or IP hash (ip_hash).<\/p>\n
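
For example, switching the hello upstream to least-connected balancing would be a one-line change. This is only a sketch, untested in this setup; ip_hash works the same way.<\/p>\n

upstream hello.cluster01.local-30000 {\n   least_conn;   # or ip_hash; round-robin is the default\n   server 172.20.100.10:30000;\n   server 172.20.100.11:30000;\n}<\/code><\/pre>\n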

user www-data;\nworker_processes 4;\nworker_rlimit_nofile 40000;\n\nevents {\n    worker_connections 8192;\n}\n\nhttp {\n   # map the request Host header to the service's NodePort\n   map $host $serverport {\n     \"hello.cluster01.local\"   \"30000\";\n     \"web01.cluster01.local\"   \"30001\";\n     \"web02.cluster01.local\"   \"30002\";\n     default      \"no_match\";\n   }\n\n   upstream hello.cluster01.local-30000 {\n      server 172.20.100.10:30000;\n      server 172.20.100.11:30000;\n   }\n\n   upstream web01.cluster01.local-30001 {\n      server 172.20.100.10:30001;\n      server 172.20.100.11:30001;\n   }\n\n   upstream web02.cluster01.local-30002 {\n      server 172.20.100.10:30002;\n      server 172.20.100.11:30002;\n   }\n\n  server {\n    listen 80;\n    server_name \"~(.*).cluster01.local\";\n    set $upstream $host-$serverport;\n    location \/ {\n      proxy_set_header X-Forwarded-For $remote_addr;\n      # if not load balancing, pointing to one node like below is fine\n      #proxy_pass http:\/\/172.20.100.10:$serverport;\n      # with an upstream you can't append a port, so I have one upstream per service\n      #proxy_pass http:\/\/backend:$serverport;\n      proxy_pass http:\/\/$upstream;\n      proxy_set_header Host $host;\n    }\n  }\n}<\/code><\/pre>\n","protected":false},"excerpt":{"rendered":"

Typically this is done in a cloud environment where Kubernetes is integrated with the provider's load balancers and you expose<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[119,129],"tags":[],"class_list":["post-1846","post","type-post","status-publish","format-standard","hentry","category-kubernetes","category-nginx"],"_links":{"self":[{"href":"https:\/\/blog.ls-al.com\/wp-json\/wp\/v2\/posts\/1846","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.ls-al.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.ls-al.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.ls-al.com\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.ls-al.com\/wp-json\/wp\/v2\/comments?post=1846"}],"version-history":[{"count":0,"href":"https:\/\/blog.ls-al.com\/wp-json\/wp\/v2\/posts\/1846\/revisions"}],"wp:attachment":[{"href":"https:\/\/blog.ls-al.com\/wp-json\/wp\/v2\/media?parent=1846"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.ls-al.com\/wp-json\/wp\/v2\/categories?post=1846"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.ls-al.com\/wp-json\/wp\/v2\/tags?post=1846"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}