
Controllers in K8s


CONTROLLERS

Controllers are the brain behind Kubernetes. They are the processes that continuously monitor the k8s objects and respond accordingly.
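Most of the built-in controllers run as part of the kube-controller-manager process on the control plane. As a quick check (assuming a kubeadm-style cluster where the controller manager runs as a pod in the kube-system namespace), you can see it with:

# kubectl get pods -n kube-system | grep controller-manager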


Replication Controller:

The replication controller helps us to run multiple instances of a single pod in a k8s cluster, thus providing High Availability.

It can also replace a single failed pod, thus providing HA even when only a single instance of the pod is running.

It also helps in Load Balancing and Scaling. Another reason we need the replication controller is to create multiple pods and share the load between them. At first it increases the number of pods on the same node as demand grows, say when the number of users increases. Once that node reaches its capacity, additional pods are created on new nodes. Thus the replication controller spans across multiple nodes in the cluster.

Replication Controller Vs Replica Set:

They both serve the same purpose, but they are not the same.

Replica Set is the newer way to set up replication in k8s; everything said above about the Replication Controller also applies to the Replica Set.

Replication Controller definition: rc-definition.yml

apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp-rc
  labels:
    app: myapp
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
        - name: nginx-image
          image: nginx
  replicas: 3
# kubectl create -f rc-definition.yml
# kubectl get replicationcontroller
# kubectl get pods
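If the controller was created successfully, the get command shows the desired and current replica counts. The output below is only representative; the ready count and age will differ in your cluster:

# kubectl get replicationcontroller
NAME       DESIRED   CURRENT   READY   AGE
myapp-rc   3         3         3       20s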
Replica Set Definition : replicaset-definition.yml

The major difference between the Replication Controller and the Replica Set is that the Replica Set has an additional child (selector:) under spec:. The selector tells the Replica Set which pods it should manage, and its matchLabels must match the labels defined in the pod template.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels:
    app: myapp
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
        - name: nginx-image
          image: nginx
  replicas: 3
  selector:
    matchLabels:
      type: front-end
# kubectl create -f replicaset-definition.yml
# kubectl get replicaset
# kubectl get pods
# kubectl delete replicaset myapp-replicaset
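One more point about the selector: unlike the Replication Controller, the Replica Set also accepts set-based selectors. As a minimal sketch (the key and values are just the labels used in this example), the selector above could equally be written with matchExpressions:

  selector:
    matchExpressions:
      - key: type
        operator: In
        values:
          - front-end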
Scale the Replica Set:

To scale the Replica Set to 6 replicas, either edit the “replicas: 3” line in replicaset-definition.yml and change it to “replicas: 6” before replacing the file, or scale it directly with kubectl scale. Any one of the commands below will do:
#  kubectl replace -f replicaset-definition.yml
    OR
# kubectl scale --replicas=6 -f replicaset-definition.yml
    OR
# kubectl scale --replicas=6 replicaset myapp-replicaset
In the last command, replicaset is the resource type and myapp-replicaset is the name of the Replica Set.


NB: The replicas value in the replicaset-definition.yml file will remain 3 even after you scale to 6 with the “kubectl scale” command against the replicaset-definition.yml file; the command updates the live Replica Set, not the file.
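You can confirm that the live object was scaled with kubectl get replicaset. The output below is only representative; the age will differ in your cluster:

# kubectl get replicaset myapp-replicaset
NAME               DESIRED   CURRENT   READY   AGE
myapp-replicaset   6         6         6       5m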
