
Nginx Ingress controller setup in K8S MultiNode Cluster with HA-Proxy as External LB

https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/installation.md


Prerequisites:
###############
>> A K8s cluster set up with 1 master and 2 worker nodes.
>> An application already deployed with the Deployment name "client-sb" (a sample manifest sketch follows below).
>> An HAProxy server, created by spinning up an EC2 instance.
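
For reference, a minimal sketch of such a deployment; the image name below is a placeholder, substitute your own application image (assumed here to listen on port 8080, matching the service used later):
---------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-sb
spec:
  replicas: 2
  selector:
    matchLabels:
      app: client-sb
  template:
    metadata:
      labels:
        app: client-sb
    spec:
      containers:
      - name: client-sb
        image: your-registry/client-sb:latest   # placeholder image
        ports:
        - containerPort: 8080                   # app port assumed from the service output later
---------------------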

After logging in to the HAProxy server:

# yum install haproxy

# vi /etc/haproxy/haproxy.cfg

Delete everything after the global and defaults sections, starting from the comment "main frontend which proxys to the backends".

Then paste the below code at the end of the file:
---------------------
frontend http_front
  bind *:80
  stats uri /haproxy?stats
  default_backend http_back

backend http_back
  balance roundrobin
  server kube1 10.0.1.14:80   # server names within a backend must be unique
  server kube2 10.0.1.12:80

---------------------
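Before starting the service, you can sanity-check the edited file; haproxy's -c flag parses the configuration without starting the proxy:

# haproxy -c -f /etc/haproxy/haproxy.cfg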

# systemctl enable haproxy
# systemctl start haproxy
# systemctl status haproxy
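
Optionally, HAProxy can actively health-check the worker nodes so that a dead node is pulled out of rotation; a variant of the backend with the check keyword enabled:
---------------------
backend http_back
  balance roundrobin
  server kube1 10.0.1.14:80 check
  server kube2 10.0.1.12:80 check
---------------------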



1. Create a Namespace, a SA, the Default Secret, the Customization Config Map, and Custom Resource Definitions

On the master node:

# git clone https://github.com/nginxinc/kubernetes-ingress.git

# cd kubernetes-ingress/deployments

Create a namespace and a service account for the Ingress controller:
# kubectl create -f common/ns-and-sa.yaml

Create a secret with a TLS certificate and a key for the default server in NGINX:
# kubectl create -f common/default-server-secret.yaml

Create a config map for customizing NGINX configuration:
# kubectl create -f common/nginx-config.yaml
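
You can verify the objects created so far:

# kubectl get ns nginx-ingress
# kubectl get sa,secret,configmap -n nginx-ingress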

2. Configure RBAC

If RBAC is enabled in your cluster, create a cluster role and bind it to the service account created in Step 1:

# kubectl apply -f rbac/rbac.yaml
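
To confirm the cluster role and binding exist:

# kubectl get clusterrole nginx-ingress
# kubectl get clusterrolebinding nginx-ingress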

3. Deploy the Ingress Controller

We include two options for deploying the Ingress controller:

>> Deployment. Use a Deployment if you plan to dynamically change the number of Ingress controller replicas.
>> DaemonSet. Use a DaemonSet for deploying the Ingress controller on every node or a subset of nodes.

Here we are going with the DaemonSet:

# kubectl create -f daemon-set/nginx-ingress.yaml
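
The daemon-set manifest in this repo runs one controller pod per node and binds hostPorts 80 and 443 on each of them, which is what the HAProxy backend entries (10.0.1.14 and 10.0.1.12) point at. Confirm the pods landed on both workers:

# kubectl get pods -n nginx-ingress -o wide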

############ ONLY IF you hit an error and need to delete everything (OPTIONAL) ############

ubuntu@namenode:~$ kubectl delete -n nginx-ingress daemonset.apps/nginx-ingress
daemonset.apps "nginx-ingress" deleted

ubuntu@namenode:~$ kubectl delete clusterrolebinding nginx-ingress -n nginx-ingress
warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrolebinding.rbac.authorization.k8s.io "nginx-ingress" deleted

ubuntu@namenode:~$ kubectl delete clusterrole nginx-ingress -n nginx-ingress
warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "nginx-ingress" deleted

ubuntu@namenode:~$ kubectl delete configmap nginx-config -n nginx-ingress
configmap "nginx-config" deleted

ubuntu@namenode:~$ kubectl delete crd virtualserverroutes.k8s.nginx.org -n nginx-ingress
warning: deleting cluster-scoped resources, not scoped to the provided namespace
customresourcedefinition.apiextensions.k8s.io "virtualserverroutes.k8s.nginx.org" deleted

ubuntu@namenode:~$ kubectl delete crd virtualservers.k8s.nginx.org -n nginx-ingress
warning: deleting cluster-scoped resources, not scoped to the provided namespace
customresourcedefinition.apiextensions.k8s.io "virtualservers.k8s.nginx.org" deleted

ubuntu@namenode:~$ kubectl delete secret default-server-secret -n nginx-ingress
secret "default-server-secret" deleted

ubuntu@namenode:~$ kubectl delete sa nginx-ingress -n nginx-ingress
serviceaccount "nginx-ingress" deleted

ubuntu@namenode:~$ kubectl delete ns nginx-ingress
namespace "nginx-ingress" deleted

########## DELETE END#####################

# kubectl get all -n nginx-ingress

Now we need to expose our already deployed application by creating a ClusterIP service (the application listens on port 8080):

# kubectl expose deployment/client-sb --port 8080
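
Confirm the service was created and has pod endpoints behind it:

# kubectl get svc client-sb
# kubectl get endpoints client-sb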

Now create the Ingress resource rules as below:

---------------------------------------------------------------
Final path-based YAML resource file with rewrites

---------------------------------------------------------------
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-resource-2
  annotations:
    nginx.org/rewrites: "serviceName=nginx-deploy-blue rewrite=/;serviceName=nginx-deploy-green rewrite=/;"
spec:
  rules:
  - host: abc.abtest.tk
    http:
      paths:
        - path: "/"
          backend:
            serviceName: client-sb
            servicePort: 8080
        - path: "/blue/v2"
          backend:
            serviceName: nginx-deploy-blue
            servicePort: 80     
        - path: /green
          backend:
            serviceName: nginx-deploy-green
            servicePort: 80
status:
  loadBalancer: {}
---------------------------------------------------------------
           
# kubectl create -f ingress-resource.yml

# kubectl get ing

# kubectl describe ing



When we run this command, a host entry will be added to the NGINX configuration file inside the nginx-ingress controller pods, like below:

ubuntu@namenode:~$ kubectl -n nginx-ingress get all | grep pod

pod/nginx-ingress-h5bj8   1/1     Running   0          4d23h
pod/nginx-ingress-zzfmz   1/1     Running   0          4d23h

ubuntu@namenode:~$ kubectl -n nginx-ingress exec -it nginx-ingress-h5bj8 sh
$ cd /etc/nginx/conf.d/
$ cat default-ingress-resource-2.conf | head -30
# configuration for default/ingress-resource-2


upstream default-ingress-resource-2-blue.client.com-nginx-deploy-blue-80 {
zone default-ingress-resource-2-blue.client.com-nginx-deploy-blue-80 256k;
random two least_conn;

server 10.244.1.26:80 max_fails=1 fail_timeout=10s max_conns=0;
server 10.244.2.21:80 max_fails=1 fail_timeout=10s max_conns=0;

}
upstream default-ingress-resource-2-green.client.com-nginx-deploy-green-80 {
zone default-ingress-resource-2-green.client.com-nginx-deploy-green-80 256k;
random two least_conn;

server 10.244.1.23:80 max_fails=1 fail_timeout=10s max_conns=0;
server 10.244.1.27:80 max_fails=1 fail_timeout=10s max_conns=0;
server 10.244.2.22:80 max_fails=1 fail_timeout=10s max_conns=0;

}
upstream default-ingress-resource-2-abc.abtest.tk-client-sb-8080 {
zone default-ingress-resource-2-abc.abtest.tk-client-sb-8080 256k;
random two least_conn;

server 10.244.2.15:8080 max_fails=1 fail_timeout=10s max_conns=0;
server 10.244.2.16:8080 max_fails=1 fail_timeout=10s max_conns=0;

}


-----------------------------------------------
Example ingress with hostname-based routing:
-----------------------------------------------
ubuntu@namenode:~$ kubectl describe ing
Name:             ingress-resource-2
Namespace:        default
Address:       
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host                Path  Backends
  ----                ----  --------
  api.client.com 
                         client-sb:8080 (10.244.2.15:8080,10.244.2.16:8080)
  blue.client.com 
                         nginx-deploy-blue:80 (10.244.1.26:80,10.244.2.21:80)
  green.client.com
                         nginx-deploy-green:80 (10.244.1.23:80,10.244.1.27:80,10.244.2.22:80)
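
The manifest for this hostname-based variant is not shown above; below is a sketch consistent with that output, using the same apiVersion as the path-based file (service names and ports taken from the describe output):

---------------------------------------------------------------
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-resource-2
spec:
  rules:
  - host: api.client.com
    http:
      paths:
      - backend:
          serviceName: client-sb
          servicePort: 8080
  - host: blue.client.com
    http:
      paths:
      - backend:
          serviceName: nginx-deploy-blue
          servicePort: 80
  - host: green.client.com
    http:
      paths:
      - backend:
          serviceName: nginx-deploy-green
          servicePort: 80
---------------------------------------------------------------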

--------------------------------------------------
Example ingress with path-based routing:
--------------------------------------------------

ubuntu@namenode:~$ kubectl describe  ing
Name:             ingress-resource-2
Namespace:        default
Address:       
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host           Path  Backends
  ----           ----  --------
  abc.abtest.tk
                 /          client-sb:8080 (10.244.2.15:8080,10.244.2.16:8080)
                 /blue/v2   nginx-deploy-blue:80 (10.244.1.26:80,10.244.2.21:80)
                 /green     nginx-deploy-green:80 (10.244.1.23:80,10.244.2.22:80)
                 /nfs       nginx-deploy-nfs:80 (10.244.1.35:80)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"nginx.org/rewrites":"serviceName=nginx-deploy-blue rewrite=/;serviceName=nginx-deploy-green rewrite=/;serviceName=nginx-deploy-nfs rewrite=/;"},"name":"ingress-resource-2","namespace":"default"},"spec":{"rules":[{"host":"abc.abtest.tk","http":{"paths":[{"backend":{"serviceName":"client-sb","servicePort":8080},"path":"/"},{"backend":{"serviceName":"nginx-deploy-blue","servicePort":80},"path":"/blue/v2"},{"backend":{"serviceName":"nginx-deploy-green","servicePort":80},"path":"/green"},{"backend":{"serviceName":"nginx-deploy-nfs","servicePort":80},"path":"/nfs"}]}}]},"status":{"loadBalancer":{}}}

  nginx.org/rewrites:  serviceName=nginx-deploy-blue rewrite=/;serviceName=nginx-deploy-green rewrite=/;serviceName=nginx-deploy-nfs rewrite=/;
Events:  <none>



--------------------------------------------------

ubuntu@namenode:~$ curl http://10.244.1.26:80

I am BLUE



ubuntu@namenode:~$ curl http://10.244.2.16:8080
Hello Docker World

ubuntu@namenode:~$ curl http://10.244.1.23:80

I am GREEN



----------------------------------

ubuntu@namenode:~$ kubectl get svc
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes           ClusterIP   10.96.0.1        <none>        443/TCP    6d19h
nginx-deploy-blue    ClusterIP   10.100.56.114    <none>        80/TCP     3d18h
nginx-deploy-green   ClusterIP   10.110.174.9     <none>        80/TCP     3d18h
nginx-deploy-main    ClusterIP   10.111.178.20    <none>        80/TCP     3d19h
client-sb            ClusterIP   10.111.206.225   <none>        8080/TCP   3d23h


ubuntu@namenode:~$ curl 10.111.206.225:8080
Hello Docker World

ubuntu@namenode:~$ curl 10.100.56.114:80

I am BLUE



ubuntu@namenode:~$ curl 10.110.174.9:80

I am GREEN



----------------------------------

[ec2-user@haproxy-lb ~]$ telnet 10.0.1.14 80
Trying 10.0.1.14...
Connected to 10.0.1.14.
Escape character is '^]'.
^]
telnet> quit

[ec2-user@haproxy-lb ~]$ telnet 10.0.1.14 443
Trying 10.0.1.14...
Connected to 10.0.1.14.
Escape character is '^]'.
^]
telnet> quit

----------------------------------

If the request does not include a hostname that matches any rule, we will get the below error:

404 Not Found
nginx/1.17.5
----------------------------------
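
You can reproduce both cases with curl from the HAProxy host: a request carrying a matching Host header is routed to the application, while one without it hits the controller's default 404 server (localhost works here because HAProxy binds *:80 on this box):

[ec2-user@haproxy-lb ~]$ curl -H "Host: abc.abtest.tk" http://localhost/
[ec2-user@haproxy-lb ~]$ curl http://localhost/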


Now take the public IP of the HAProxy server and add it to your /etc/hosts file (or create a DNS A record for your domain), then open the domain in your favorite browser.
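
A hypothetical /etc/hosts entry (203.0.113.10 is a placeholder; use your HAProxy server's public IP):

-------------------------------
203.0.113.10  abc.abtest.tk api.client.com blue.client.com green.client.com
-------------------------------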


Boom, that is all,

Cheers!!!!






