Controllers in K8s


CONTROLLERS

Controllers are the brain behind k8s. They are the processes that monitor k8s objects and respond accordingly.


Replication Controller:

The replication controller helps us run multiple instances of a single pod in a k8s cluster, thus providing High Availability.

It can also replace a single failed pod, thus providing HA even when only a single instance of a pod is running.

It also helps with Load Balancing and Scaling. Another reason we need the replication controller is to create multiple pods and share the load between them. At first it will increase the number of pods on the same node as demand increases, say as the number of users grows. Once that node hits its resource limits, it will create additional pods on other nodes. Thus the replication controller spans across multiple nodes in the cluster.
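
On a multi-node cluster you can see this spanning for yourself with the wide pod listing, which adds a NODE column. The node names and pod suffixes below are hypothetical, assuming the myapp-rc example defined later in this post is running:

# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP           NODE
myapp-rc-4lk9d   1/1     Running   0          1m    10.244.1.5   node01
myapp-rc-8szwq   1/1     Running   0          1m    10.244.2.8   node02
myapp-rc-xv2mt   1/1     Running   0          1m    10.244.2.9   node02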

Replication Controller Vs Replica Set:

They both have the same purpose, but they are not the same.

Replica Set is the newer technology for setting up replication in k8s; the features described above for the Replication Controller apply to the Replica Set too.

Replication Controller definition: rc-definition.yml

apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp-rc
  labels:
    app: myapp
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
        - name: nginx-image
          image: nginx
  replicas: 3
# kubectl create -f rc-definition.yml
# kubectl get replicationcontroller
# kubectl get pods
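If everything goes well, the get commands print output along these lines (pod-name suffixes are auto-generated, so yours will differ):

# kubectl get replicationcontroller
NAME       DESIRED   CURRENT   READY   AGE
myapp-rc   3         3         3       30s
# kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
myapp-rc-4lk9d   1/1     Running   0          30s
myapp-rc-8szwq   1/1     Running   0          30s
myapp-rc-xv2mt   1/1     Running   0          30s
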
Replica Set definition: replicaset-definition.yml

The major difference between the Replication Controller and the Replica Set is that the Replica Set has an additional child under spec:, the selector: section.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels:
    app: myapp
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
        - name: nginx-image
          image: nginx
  replicas: 3
  selector:
    matchLabels:
      type: front-end
# kubectl create -f replicaset-definition.yml
# kubectl get replicaset
# kubectl get pods
# kubectl delete replicaset myapp-replicaset
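For reference, the Replica Set listing looks something like this (values are illustrative):

# kubectl get replicaset
NAME               DESIRED   CURRENT   READY   AGE
myapp-replicaset   3         3         3       25s
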
Scale the Replica Set:

First edit the “replicas: 3” line in replicaset-definition.yml and change it to “replicas: 6”, then execute one of the commands below to scale the Replica Set to 6 replicas (the last two work even without editing the file):
# kubectl replace -f replicaset-definition.yml
    OR
# kubectl scale --replicas=6 -f replicaset-definition.yml
    OR
# kubectl scale --replicas=6 replicaset myapp-replicaset
Where replicaset is the resource type and myapp-replicaset is the name of the Replica Set.


NB: The replicas value in the replicaset-definition.yml file will remain 3 even after you change the live replica count to 6 with the “kubectl scale” command method; the command updates the running object, not the file.
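
A quick way to confirm this behaviour (the grep check and the sample values below are just illustrative):

# kubectl scale --replicas=6 replicaset myapp-replicaset
# kubectl get replicaset myapp-replicaset
NAME               DESIRED   CURRENT   READY   AGE
myapp-replicaset   6         6         6       5m
# grep replicas replicaset-definition.yml
  replicas: 3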
