In Kubernetes, creating an HPA only takes a single kubectl command like the one below:
# kubectl autoscale deployment myapp --min=1 --max=4 --cpu-percent=80
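For reference, the same autoscaler can also be created declaratively. A minimal sketch (autoscaling/v1, assuming the target deployment is named myapp in the default namespace):
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 4
  targetCPUUtilizationPercentage: 80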
But whichever way you create it, the HPA will produce a series of errors like the ones below if the metrics API is not registered.
########################################
ubuntu@namenode:~$ kubectl get hpa
NAME    REFERENCE          TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
myapp   Deployment/myapp   <unknown>/80%   1         4         2          43h
ubuntu@namenode:~$ kubectl describe horizontalpodautoscaler.autoscaling/myapp | grep Warning
Warning FailedComputeMetricsReplicas 32m (x10333 over 43h) horizontal-pod-autoscaler invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
Warning FailedGetResourceMetric 2m7s (x10452 over 43h) horizontal-pod-autoscaler unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
########################################
ubuntu@namenode:~$ kubectl get apiservice | grep metrics
ubuntu@namenode:~$
For the HPA to work properly, there should be an entry like the one below:
v1beta1.metrics.k8s.io kube-system/metrics-server True 3d18h
ubuntu@namenode:~$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
Error from server (NotFound): the server could not find the requested resource
########################################
So, if you want to use Kubernetes features like the Horizontal Pod Autoscaler, which used to depend on Heapster (or even just the kubectl top command), you need to use metrics-server instead.
####################
# What is metrics-server?
####################
Metrics Server is a cluster-wide aggregator of resource usage data. It collects metrics such as CPU and memory consumption of containers and nodes from the Summary API, which the kubelet exposes on each node. If your Kubernetes cluster was brought up with the kube-up.sh script, you probably have metrics-server by default. But if you use Kubespray or kops to build a production-ready cluster, you need to deploy it separately.
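To see where these numbers come from, you can query a node's Summary API through the API server proxy. A quick check, assuming a node named namenode and that your kubeconfig is allowed to use the nodes/proxy subresource:
ubuntu@namenode:~$ kubectl get --raw "/api/v1/nodes/namenode/proxy/stats/summary" | head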
Requirements
============
To deploy metrics-server, the aggregation layer must be enabled in your cluster. As of Kubespray version 2.8.3, the aggregation layer is already enabled by default. If you need to enable it manually, see: https://kubernetes.io/docs/tasks/access-kubernetes-api/configure-aggregation-layer/
Confirm that the parameters below are already set in /etc/kubernetes/manifests/kube-apiserver.yaml:
--requestheader-client-ca-file=<path to aggregator CA cert>
--requestheader-allowed-names=aggregator
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--proxy-client-cert-file=<path to aggregator proxy cert>
--proxy-client-key-file=<path to aggregator proxy key>
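A quick way to verify this on a master node (assuming the usual static-pod manifest path used by kubeadm/Kubespray):
ubuntu@namenode:~$ sudo grep -E 'requestheader|proxy-client' /etc/kubernetes/manifests/kube-apiserver.yaml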
Now the deployment
##################
ubuntu@namenode:~$ git clone https://github.com/kubernetes-incubator/metrics-server.git
ubuntu@namenode:~$ cd metrics-server
Now edit the file below and add the following two parameters to the args: section, just below the image: line (a sketch of the result follows):
ubuntu@namenode:~/metrics-server/deploy/1.8+$ vi metrics-server-deployment.yaml
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
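After the edit, the container section of the deployment should look roughly like the sketch below (the image tag is only an example; keep whatever tag the repository ships with):
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.6
        args:
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
Then apply all the manifests in the deploy/1.8+ directory: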
ubuntu@namenode:~/metrics-server# kubectl apply -f deploy/1.8+/
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
ubuntu@namenode:~$ kubectl get apiservices | grep metrics
v1beta1.metrics.k8s.io kube-system/metrics-server True 3m48s
ubuntu@namenode:~$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
{"kind":"NodeMetricsList","apiVersion":"metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/metrics.k8s.io/v1beta1/nodes"},"items":[]}
Now run the HPA get command again:
ubuntu@namenode:~$ kubectl get hpa
NAME    REFERENCE          TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
myapp   Deployment/myapp   <unknown>/80%   1         4         2          22m
ubuntu@namenode:~$ kubectl describe hpa | grep Warning
Warning FailedComputeMetricsReplicas 8m20s (x61 over 23m) horizontal-pod-autoscaler invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
the HPA was unable to compute the replica count: unable to get metrics for resource cpu: no metrics returned from resource metrics API
This error occurs because no CPU resource requests were set for the application when it was deployed. The HPA computes CPU utilization as a percentage of each container's request, so without a request there is nothing to compute against. One way to fix this is to define namespace-wide defaults with LimitRange objects:
ubuntu@namenode:~/myk8syamls$ cat default-n-cpu-resource.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
  namespace: default
spec:
  limits:
  - default:
      cpu: 4
    defaultRequest:
      cpu: 2
    type: Container
ubuntu@namenode:~/myk8syamls$ cat default-n-memory-resource.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
  namespace: default
spec:
  limits:
  - default:
      memory: 16Gi
    defaultRequest:
      memory: 8Gi
    type: Container
# kubectl create -f default-n-cpu-resource.yaml
# kubectl create -f default-n-memory-resource.yaml
ubuntu@namenode:~$ kubectl get limitrange
NAME CREATED AT
cpu-limit-range 2019-11-15T08:41:32Z
mem-limit-range 2019-11-15T08:41:37Z
ubuntu@namenode:~$
ubuntu@namenode:~$ kubectl describe limitrange
Name:       cpu-limit-range
Namespace:  default
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   cpu       -    -    2                4              -

Name:       mem-limit-range
Namespace:  default
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   memory    -    -    8Gi              16Gi           -
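Note that a LimitRange only injects default requests and limits into new pods in the namespace; the HPA itself reads the CPU request on the pod's containers. So instead of (or in addition to) the LimitRange, you can set the resources directly in the deployment spec. A minimal sketch, assuming a container named myapp (the image name is just a placeholder):
    spec:
      containers:
      - name: myapp
        image: myapp:latest
        resources:
          requests:
            cpu: "2"
            memory: 8Gi
          limits:
            cpu: "4"
            memory: 16Gi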
Now delete the already running deployment and re-create both the deployment and the HPA.
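For example (assuming the deployment originally came from a myapp.yaml manifest, and the HPA from the kubectl autoscale command at the top):
ubuntu@namenode:~$ kubectl delete hpa myapp
ubuntu@namenode:~$ kubectl delete deployment myapp
ubuntu@namenode:~$ kubectl apply -f myapp.yaml
ubuntu@namenode:~$ kubectl autoscale deployment myapp --min=1 --max=4 --cpu-percent=80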
ubuntu@namenode:~$ kubectl get hpa
NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
myapp   Deployment/myapp   2%/80%    1         4         2          6m40s
BOOM it is working now!!!!!!!!
Cheers.