#########################################
Error 1
#########################################
root@ndz-Lenovo-ideapad-320-15ISK ~ # docker-compose --version
Traceback (most recent call last):
File "/usr/bin/docker-compose", line 9, in
load_entry_point('docker-compose==1.8.0', 'console_scripts', 'docker-compose')()
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 542, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2569, in load_entry_point
return ep.load()
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2229, in load
return self.resolve()
File "/usr/lib/python2.7/dist-packages/requests/compat.py", line 42, in
from .packages.urllib3.packages.ordered_dict import OrderedDict
ImportError: No module named ordered_dict
Fix
###
Reinstall urllib3 to repair the broken module:
pip uninstall urllib3
pip install urllib3
root@ndz-Lenovo-ideapad-320-15ISK ~ # docker-compose --version
docker-compose version 1.8.0, build unknown
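Note: the traceback shows this docker-compose runs under Python 2.7, so if pip on your machine points at Python 3 you may need the Python 2 pip instead (assuming pip2 is installed):
pip2 uninstall urllib3
pip2 install urllib3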
Ref
###
https://stackoverflow.com/questions/53257338/docker-compose-script-complaining-about-a-python-module-import
#########################################
Error 2
#########################################
The NGINX Ingress Controller pod log shows the errors below.
E0110 05:02:23.166845 1 reflector.go:123] /home/ec2-user/workspace/PI_IC_kubernetes-ingress_master/internal/k8s/controller.go:340: Failed to list *v1.VirtualServer: the server could not find the requested resource (get virtualservers.k8s.nginx.org)
E0110 05:02:23.167659 1 reflector.go:123] /home/ec2-user/workspace/PI_IC_kubernetes-ingress_master/internal/k8s/controller.go:341: Failed to list *v1.VirtualServerRoute: the server could not find the requested resource (get virtualserverroutes.k8s.nginx.org)
Fix:
###
git clone https://github.com/nginxinc/kubernetes-ingress.git
kubectl apply -f kubernetes-ingress/deployments/common/custom-resource-definitions.yaml
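To confirm the custom resource definitions are now registered (the resource names come from the errors above):
kubectl get crd | grep k8s.nginx.org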
Ref:
####
https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/
Section 2.3
#########################################
Error 3
#########################################
ubuntu@namenode:~$ kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
myapp Deployment/myapp <unknown>/80% 1 4 2 43h
Fix
####
ubuntu@namenode:~$ kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
myapp Deployment/myapp <unknown>/80% 1 4 2 43h
ubuntu@namenode:~$ kubectl describe horizontalpodautoscaler.autoscaling/myapp | grep Warning
Warning FailedComputeMetricsReplicas 32m (x10333 over 43h) horizontal-pod-autoscaler invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
Warning FailedGetResourceMetric 2m7s (x10452 over 43h) horizontal-pod-autoscaler unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
########################################
ubuntu@namenode:~$ kubectl get apiservice | grep metrics
ubuntu@namenode:~$
For the HPA to work properly, there should be an entry like the one below:
v1beta1.metrics.k8s.io kube-system/metrics-server True 3d18h
ubuntu@namenode:~$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
Error from server (NotFound): the server could not find the requested resource
########################################
So, if you want to use Kubernetes features such as the Horizontal Pod Autoscaler, which used to depend on Heapster (or even just the kubectl top command), you need to use metrics-server instead.
####################
# What is metrics-server?
####################
Metrics Server is a cluster-wide aggregator of resource usage data. It collects metrics such as CPU and memory consumption for containers and nodes from the Summary API exposed by the kubelet on each node. If your cluster was brought up with the kube-up.sh script, you probably have a metrics-server by default; but if you built a production-ready cluster with kubespray or kops, you need to deploy it separately.
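To check whether your cluster already has it (it normally runs in the kube-system namespace):
kubectl get deployment metrics-server -n kube-system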
Requirements
============
In order to deploy metrics-server, the aggregation layer must be enabled in your cluster. As of kubespray version 2.8.3, the aggregation layer is enabled by default. If you need to enable it yourself, see: https://kubernetes.io/docs/tasks/access-kubernetes-api/configure-aggregation-layer/
Confirm that the parameters below are already set in /etc/kubernetes/manifests/kube-apiserver.yaml:
--requestheader-client-ca-file=<path to aggregator CA cert>
--requestheader-allowed-names=aggregator
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--proxy-client-cert-file=<path to aggregator proxy cert>
--proxy-client-key-file=<path to aggregator proxy key>
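A quick way to verify (the actual file paths vary per cluster):
grep -E 'requestheader|proxy-client' /etc/kubernetes/manifests/kube-apiserver.yaml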
##################
Now Deployment
##################
ubuntu@namenode:~$ git clone https://github.com/kubernetes-incubator/metrics-server.git
ubuntu@namenode:~$ cd metrics-server
Now, in the file below, add these two parameters to the args: section, just below the image: line:
ubuntu@namenode:~/metrics-server/deploy/1.8+$ vi metrics-server-deployment.yaml
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
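After the edit, the container spec should look roughly like this (a sketch; the image tag in your checkout may differ):
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.6
        args:
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP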
ubuntu@namenode:~/metrics-server$ kubectl apply -f deploy/1.8+/
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
ubuntu@namenode:~$ kubectl get apiservices | grep metrics
v1beta1.metrics.k8s.io kube-system/metrics-server True 3m48s
ubuntu@namenode:~$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
{"kind":"NodeMetricsList","apiVersion":"metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/metrics.k8s.io/v1beta1/nodes"},"items":[]}
Now check the HPA again:
ubuntu@namenode:~$ kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
myapp Deployment/myapp <unknown>/80% 1 4 2 22m
ubuntu@namenode:~$ kubectl describe hpa | grep Warning
Warning FailedComputeMetricsReplicas 8m20s (x61 over 23m) horizontal-pod-autoscaler invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
The HPA was still unable to compute the replica count: unable to get metrics for resource cpu: no metrics returned from resource metrics API.
This error occurs because no resource requests were set for the application when it was deployed; the HPA computes utilization as a percentage of the container's requested CPU, so without a request there is nothing to divide by.
ubuntu@namenode:~/myk8syamls$ cat default-n-cpu-resource.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
  namespace: default
spec:
  limits:
  - default:
      cpu: 4
    defaultRequest:
      cpu: 2
    type: Container
ubuntu@namenode:~/myk8syamls$ cat default-n-memory-resource.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
  namespace: default
spec:
  limits:
  - default:
      memory: 16Gi
    defaultRequest:
      memory: 8Gi
    type: Container
# kubectl create -f default-n-cpu-resource.yaml
# kubectl create -f default-n-memory-resource.yaml
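A LimitRange only injects defaults for new pods in its namespace. Alternatively (a minimal sketch, assuming a Deployment named myapp; the image name is illustrative), you can set the requests directly in the Deployment's pod template; the HPA divides observed usage by these values:
    spec:
      containers:
      - name: myapp
        image: myapp:latest
        resources:
          requests:
            cpu: 500m
            memory: 512Mi
          limits:
            cpu: "1"
            memory: 1Gi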
ubuntu@namenode:~$ kubectl get limitrange
NAME CREATED AT
cpu-limit-range 2019-11-15T08:41:32Z
mem-limit-range 2019-11-15T08:41:37Z
ubuntu@namenode:~$
ubuntu@namenode:~$ kubectl describe limitrange
Name: cpu-limit-range
Namespace: default
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
---- -------- --- --- --------------- ------------- -----------------------
Container cpu - - 2 4 -
Name: mem-limit-range
Namespace: default
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
---- -------- --- --- --------------- ------------- -----------------------
Container memory - - 8Gi 16Gi -
Now delete the already-running deployment and re-create the deployment and HPA, so the new pods pick up the default requests.
ubuntu@namenode:~$ kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
myapp Deployment/myapp 2%/80% 1 4 2 6m40s
#########################################
Error 4
#########################################
When creating Grafana with a persistent volume enabled (NFS-backed in this setup), pod creation fails with an error.
Fix:
####
Add no_root_squash to the NFS export options:
# vi /etc/exports
/srv/nfs/k8sdata *(rw,no_subtree_check,no_root_squash,insecure)
:wq!
# exportfs -rav
# exportfs -v
/srv/nfs/k8sdata
(rw,sync,wdelay,hide,no_subtree_check,sec=sys,insecure,no_root_squash,no_all_squash)
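From a client node you can confirm the export is visible (substitute your NFS server's address):
showmount -e <nfs-server-ip>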
#########################################
Error 5
#########################################
The kubelet will not start when swap is enabled on the server.
Fix:
####
# sudo vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Add the KUBELET_EXTRA_ARGS line as below:
----------------------------------------
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
systemd now warns that kubelet.service changed on disk and asks you to run 'systemctl daemon-reload' to reload units:
# sudo systemctl daemon-reload
# sudo systemctl restart kubelet
# sudo systemctl status kubelet
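Note that --fail-swap-on=false only makes the kubelet tolerate swap; a common alternative is to disable swap entirely, which is what Kubernetes expects by default:
# sudo swapoff -a
Then comment out the swap entry in /etc/fstab so it stays disabled after a reboot:
# sudo sed -i '/ swap / s/^/#/' /etc/fstab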
#########################################
Error 6
#########################################
Warning FailedCreatePodSandBox 21s kubelet, dn2 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "45f5fede972ca436c32e8e39c87e87a016a9c868d8f40bd425bdf315d379c4bd" network for pod "myclient-api-6bb5ddd8d5-4lj8j": networkPlugin cni failed to set up pod "myclient-api-6bb5ddd8d5-4lj8j_default" network: failed to set bridge addr: "cni0" already has an IP address different from 10.244.2.1/24
Fix:
####
This issue occurs when kubeadm reset is run multiple times in the wrong way, because kubeadm reset does not delete the iptables rules or the cni0 and docker0 network interfaces. Reset the worker node that shows this error as below:
kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
rm -rf /var/lib/etcd/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
sudo ip link delete cni0
sudo ip link delete flannel.1
systemctl start docker
systemctl start kubelet
Now join the worker node again using the kubeadm join command.
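If you no longer have the original join command handy, you can regenerate one on the control-plane node:
kubeadm token create --print-join-command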