
Horizontal Pod Autoscale & Metrics Server installation in K8S


In K8s, in order to create an HPA, we just need to run a kubectl command like the one below:

# kubectl autoscale deployment myapp  --min=1 --max=4 --cpu-percent=80
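
If you prefer a declarative setup, the same autoscaler can be written as a manifest (a minimal sketch using the autoscaling/v1 API; the deployment name myapp matches the command above):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 4
  targetCPUUtilizationPercentage: 80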

But this will give a series of errors like the ones below if the metrics API is not registered.

########################################
ubuntu@namenode:~$ kubectl get hpa
NAME    REFERENCE          TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
myapp   Deployment/myapp   <unknown>/80%   1         4         2          43h

ubuntu@namenode:~$ kubectl describe horizontalpodautoscaler.autoscaling/myapp | grep Warning

  Warning  FailedComputeMetricsReplicas  32m (x10333 over 43h)   horizontal-pod-autoscaler  invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)

  Warning  FailedGetResourceMetric       2m7s (x10452 over 43h)  horizontal-pod-autoscaler  unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)

########################################

ubuntu@namenode:~$ kubectl get apiservice | grep metrics
ubuntu@namenode:~$

For the HPA to work properly, there should be an entry like the one below:

v1beta1.metrics.k8s.io                 kube-system/metrics-server   True        3d18h
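
You can also inspect the registered APIService in detail (the object name below assumes metrics-server is deployed in the kube-system namespace, as in this setup):

# kubectl describe apiservice v1beta1.metrics.k8s.io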


ubuntu@namenode:~$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
Error from server (NotFound): the server could not find the requested resource

########################################

So, if you want to use k8s features like the Horizontal Pod Autoscaler, which previously depended on Heapster (or even to be able to use the kubectl top command), you need to use metrics-server instead.

####################
# What is metrics-server?
####################

Metrics Server is a cluster-wide aggregator of resource usage data. It collects metrics such as CPU and memory consumption for containers and nodes from the Summary API, which is exposed by the Kubelet on each node. If your Kubernetes cluster was set up by the kube-up.sh script, then you probably have metrics-server by default. But if you use kubespray or kops to build a production-ready k8s cluster, then you need to deploy it separately.
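
To peek at the raw data metrics-server consumes, you can query the Kubelet Summary API through the API server proxy (a sketch; replace <node-name> with one of your node names from kubectl get nodes):

# kubectl get --raw "/api/v1/nodes/<node-name>/proxy/stats/summary"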

Requirements
============
In order to deploy metrics-server, the aggregation layer should be enabled in your k8s cluster. As of kubespray version 2.8.3, the aggregation layer is already enabled by default. If you need to enable it manually, see the link below: https://kubernetes.io/docs/tasks/access-kubernetes-api/configure-aggregation-layer/

Confirm that the parameters below are already set in /etc/kubernetes/manifests/kube-apiserver.yaml (a quick way to verify follows the list):

--requestheader-client-ca-file=<path to aggregator CA cert>
--requestheader-allowed-names=aggregator
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--proxy-client-cert-file=<path to aggregator proxy cert>
--proxy-client-key-file=<path to aggregator proxy key>
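
On a control-plane node, a quick grep confirms they are present (this assumes the static-pod manifest path used by kubeadm/kubespray):

# grep -E 'requestheader|proxy-client' /etc/kubernetes/manifests/kube-apiserver.yaml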


##################
Now the Deployment
##################


ubuntu@namenode:~$ git clone https://github.com/kubernetes-incubator/metrics-server.git
ubuntu@namenode:~$ cd metrics-server

Now, in the file below, add the following two parameters to the args: section, just below the image: line:

ubuntu@namenode:~/metrics-server/deploy/1.8+$ vi metrics-server-deployment.yaml

          - --kubelet-insecure-tls
          - --kubelet-preferred-address-types=InternalIP
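
For context, the relevant fragment of metrics-server-deployment.yaml should end up looking roughly like this (the image tag is illustrative and varies by release):

      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.6
        args:
          - --kubelet-insecure-tls
          - --kubelet-preferred-address-types=InternalIP

The --kubelet-insecure-tls flag skips verification of the kubelet serving certificates, and --kubelet-preferred-address-types=InternalIP makes metrics-server reach kubelets by their internal IPs; both are common workarounds on clusters whose kubelet certificates do not cover resolvable hostnames.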

ubuntu@namenode:~/metrics-server# kubectl apply -f deploy/1.8+/
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created

ubuntu@namenode:~$ kubectl get apiservices | grep metrics
v1beta1.metrics.k8s.io                 kube-system/metrics-server   True        3m48s


ubuntu@namenode:~$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
{"kind":"NodeMetricsList","apiVersion":"metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/metrics.k8s.io/v1beta1/nodes"},"items":[]}

Now run the HPA get command again:

ubuntu@namenode:~$ kubectl get hpa
NAME    REFERENCE          TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
myapp   Deployment/myapp   <unknown>/80%   1         4         2          22m

ubuntu@namenode:~$ kubectl describe hpa | grep Warning
  Warning  FailedComputeMetricsReplicas  8m20s (x61 over 23m)  horizontal-pod-autoscaler  invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
the HPA was unable to compute the replica count: unable to get metrics for resource cpu: no metrics returned from resource metrics API

This error occurs because you have not set CPU resource requests for your application while deploying it; the HPA computes utilization as a percentage of the pods' requested CPU, so without a request there is nothing to measure against.
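
One fix is to set the requests directly in the deployment's pod spec (a minimal sketch; the container name, image, and values are illustrative):

    spec:
      containers:
      - name: myapp
        image: myapp:v1
        resources:
          requests:
            cpu: 200m
            memory: 256Mi

The other option, used below, is a LimitRange, which injects default requests and limits into every container created in the namespace: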

ubuntu@namenode:~/myk8syamls$ cat default-n-cpu-resource.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
  namespace: default
spec:
  limits:
  - default:
      cpu: 4
    defaultRequest:
      cpu: 2
    type: Container
 
ubuntu@namenode:~/myk8syamls$ cat default-n-memory-resource.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
  namespace: default
spec:
  limits:
  - default:
      memory: 16Gi
    defaultRequest:
      memory: 8Gi
    type: Container
 
# kubectl create -f default-n-cpu-resource.yaml
# kubectl create -f default-n-memory-resource.yaml

ubuntu@namenode:~$ kubectl get limitrange
NAME              CREATED AT
cpu-limit-range   2019-11-15T08:41:32Z
mem-limit-range   2019-11-15T08:41:37Z
ubuntu@namenode:~$

ubuntu@namenode:~$ kubectl describe limitrange
Name:       cpu-limit-range
Namespace:  default
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   cpu       -    -    2                4              -


Name:       mem-limit-range
Namespace:  default
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   memory    -    -    8Gi              16Gi           -


Now delete the already-running deployment and re-create the deployment & HPA.
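
For example (a sketch; myapp.yaml stands in for whatever manifest you deployed the application from):

# kubectl delete deployment myapp
# kubectl apply -f myapp.yaml
# kubectl autoscale deployment myapp --min=1 --max=4 --cpu-percent=80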

ubuntu@namenode:~$ kubectl get hpa
NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
myapp   Deployment/myapp   2%/80%    1         4         2          6m40s
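
The TARGETS column now shows live utilization (2% of the 80% threshold). To watch the autoscaler react, you can generate some load (a sketch adapted from the standard HPA walkthrough; it assumes myapp is exposed as a Service named myapp):

# kubectl run -it load-generator --image=busybox --restart=Never -- /bin/sh -c "while true; do wget -q -O- http://myapp; done"
# kubectl get hpa -w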


BOOM it is working now!!!!!!!!
Cheers.
