
Docker & K8S Errors & Fixes.

#########################################
Error 1
#########################################

root@ndz-Lenovo-ideapad-320-15ISK ~ # docker-compose --version
Traceback (most recent call last):
  File "/usr/bin/docker-compose", line 9, in
    load_entry_point('docker-compose==1.8.0', 'console_scripts', 'docker-compose')()
  File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 542, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2569, in load_entry_point
    return ep.load()
  File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2229, in load
    return self.resolve()
  File "/usr/lib/python2.7/dist-packages/requests/compat.py", line 42, in
    from .packages.urllib3.packages.ordered_dict import OrderedDict
ImportError: No module named ordered_dict

Fix
###

pip uninstall urllib3
pip install urllib3

root@ndz-Lenovo-ideapad-320-15ISK ~ # docker-compose --version
docker-compose version 1.8.0, build unknown

Ref
###
https://stackoverflow.com/questions/53257338/docker-compose-script-complaining-about-a-python-module-import

#########################################
Error 2
#########################################

The NGINX Ingress Controller pod log shows the errors below.

E0110 05:02:23.166845       1 reflector.go:123] /home/ec2-user/workspace/PI_IC_kubernetes-ingress_master/internal/k8s/controller.go:340: Failed to list *v1.VirtualServer: the server could not find the requested resource (get virtualservers.k8s.nginx.org)

E0110 05:02:23.167659 1 reflector.go:123] /home/ec2-user/workspace/PI_IC_kubernetes-ingress_master/internal/k8s/controller.go:341: Failed to list *v1.VirtualServerRoute: the server could not find the requested resource (get virtualserverroutes.k8s.nginx.org)

Fix:
###

git clone https://github.com/nginxinc/kubernetes-ingress.git
kubectl apply -f kubernetes-ingress/deployments/common/custom-resource-definitions.yaml
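
To confirm the custom resource definitions are now registered (a quick check; the exact list depends on the controller version):

kubectl get crd | grep k8s.nginx.org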

Ref:
####
https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/
Section 2.3
#########################################
Error 3
#########################################

ubuntu@namenode:~$ kubectl get hpa

NAME           REFERENCE                 TARGETS         MINPODS   MAXPODS   REPLICAS   AGE

myapp   Deployment/myapp   <unknown>/80%   1         4         2          43h

Fix
####

ubuntu@namenode:~$ kubectl get hpa
NAME           REFERENCE                 TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
myapp   Deployment/myapp   <unknown>/80%   1         4         2          43h

ubuntu@namenode:~$ kubectl describe horizontalpodautoscaler.autoscaling/myapp | grep Warning

  Warning  FailedComputeMetricsReplicas  32m (x10333 over 43h)   horizontal-pod-autoscaler  invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)

  Warning  FailedGetResourceMetric       2m7s (x10452 over 43h)  horizontal-pod-autoscaler  unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)

########################################

ubuntu@namenode:~$ kubectl get apiservice | grep metrics
ubuntu@namenode:~$

For the HPA to work properly, there should be an entry like the one below:

v1beta1.metrics.k8s.io                 kube-system/metrics-server   True        3d18h


ubuntu@namenode:~$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
Error from server (NotFound): the server could not find the requested resource

########################################

So, if you want to use Kubernetes features such as the Horizontal Pod Autoscaler, which previously depended on Heapster (or even the kubectl top command), you need to use metrics-server instead.

####################
What is metrics-server?
####################

Metrics Server is a cluster-wide aggregator of resource usage data. It collects metrics such as CPU and memory consumption for containers and nodes from the Summary API exposed by the kubelet on each node. If your Kubernetes cluster was set up with the kube-up.sh script, you probably have metrics-server by default. But if you use kubespray or kops to build a production-ready cluster, you need to deploy it separately.

Requirements
============
In order to deploy metrics-server, the aggregation layer must be enabled in your k8s cluster. As of kubespray version 2.8.3, the aggregation layer is enabled by default. If you need to enable it manually, see the link below. https://kubernetes.io/docs/tasks/access-kubernetes-api/configure-aggregation-layer/

Confirm that the parameters below are already set in /etc/kubernetes/manifests/kube-apiserver.yaml:

--requestheader-client-ca-file=<path to aggregator CA cert>
--requestheader-allowed-names=aggregator
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--proxy-client-cert-file=<path to aggregator proxy cert>
--proxy-client-key-file=<path to aggregator proxy key>
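
A quick way to verify this on a kubeadm-style control plane (same manifest path as above):

# grep -E 'requestheader|proxy-client' /etc/kubernetes/manifests/kube-apiserver.yaml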


##################
Deploy metrics-server
##################


ubuntu@namenode:~$ git clone https://github.com/kubernetes-incubator/metrics-server.git
ubuntu@namenode:~$ cd metrics-server

Now, in the file below, add these two parameters to the args: section of the container spec, just below the image: line:

ubuntu@namenode:~/metrics-server/deploy/1.8+$ vi metrics-server-deployment.yaml

          - --kubelet-insecure-tls
          - --kubelet-preferred-address-types=InternalIP
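
For context, the edited container section of metrics-server-deployment.yaml should end up looking roughly like this (the image name/tag and any existing args are whatever the manifest already ships; only the last two lines are added):

      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.6   # illustrative; keep the image/tag from the manifest
        args:
          # keep any args that were already in the manifest
          - --kubelet-insecure-tls
          - --kubelet-preferred-address-types=InternalIP

--kubelet-insecure-tls skips TLS verification of the kubelet serving certificates, and --kubelet-preferred-address-types=InternalIP makes metrics-server reach the kubelets by node IP instead of hostname; both are common workarounds when kubelet certificates/DNS are not set up for metrics-server.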

ubuntu@namenode:~/metrics-server# kubectl apply -f deploy/1.8+/
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created

ubuntu@namenode:~$ kubectl get apiservices | grep metrics
v1beta1.metrics.k8s.io                 kube-system/metrics-server   True        3m48s


ubuntu@namenode:~$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
{"kind":"NodeMetricsList","apiVersion":"metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/metrics.k8s.io/v1beta1/nodes"},"items":[]}

Now run the HPA get command again:

ubuntu@namenode:~$ kubectl get hpa
NAME           REFERENCE                 TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
myapp   Deployment/myapp   <unknown>/80%   1         4         2          22m

ubuntu@namenode:~$ kubectl describe hpa | grep Warning
  Warning  FailedComputeMetricsReplicas  8m20s (x61 over 23m)  horizontal-pod-autoscaler  invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
the HPA was unable to compute the replica count: unable to get metrics for resource cpu: no metrics returned from resource metrics API

This error occurs because no CPU resource request was set for the application when it was deployed; the HPA computes CPU utilization as a percentage of the pods' CPU requests, so it needs a request to work against.

ubuntu@namenode:~/myk8syamls$ cat default-n-cpu-resource.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
  namespace: default
spec:
  limits:
  - default:
      cpu: 4
    defaultRequest:
      cpu: 2
    type: Container

ubuntu@namenode:~/myk8syamls$ cat default-n-memory-resource.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
  namespace: default
spec:
  limits:
  - default:
      memory: 16Gi
    defaultRequest:
      memory: 8Gi
    type: Container

# kubectl create -f default-n-cpu-resource.yaml
# kubectl create -f default-n-memory-resource.yaml

ubuntu@namenode:~$ kubectl get limitrange
NAME              CREATED AT
cpu-limit-range   2019-11-15T08:41:32Z
mem-limit-range   2019-11-15T08:41:37Z
ubuntu@namenode:~$

ubuntu@namenode:~$ kubectl describe limitrange
Name:       cpu-limit-range
Namespace:  default
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   cpu       -    -    2                4              -


Name:       mem-limit-range
Namespace:  default
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   memory    -    -    8Gi              16Gi           -


Now delete the already running deployment and re-create the deployment & HPA.

ubuntu@namenode:~$ kubectl get hpa
NAME           REFERENCE                 TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
myapp   Deployment/myapp   2%/80%    1         4         2          6m40s
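
As an alternative to a namespace-wide LimitRange, the CPU request can be set directly on the containers in the Deployment, since the HPA measures utilization against the pods' CPU requests (the names and values below are illustrative):

    spec:
      containers:
      - name: myapp
        image: myapp:latest            # illustrative image
        resources:
          requests:
            cpu: 500m
            memory: 512Mi
          limits:
            cpu: "1"
            memory: 1Gi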

#########################################
Error 4
#########################################

When creating Grafana with a persistent volume enabled (NFS-backed in this setup), pod creation fails.

Fix:
####
Add the no_root_squash option to the NFS export on the NFS server. Without it, root inside the pod is squashed to an unprivileged user on the NFS server, which typically breaks the ownership/permission setup on the Grafana data directory.

# vi /etc/exports

/srv/nfs/k8sdata *(rw,no_subtree_check,no_root_squash,insecure)

:wq!

# exportfs -rav

# exportfs -v
/srv/nfs/k8sdata
  (rw,sync,wdelay,hide,no_subtree_check,sec=sys,insecure,no_root_squash,no_all_squash)
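
From a Kubernetes node (or any NFS client), you can also check that the export is visible; replace the address below with your NFS server:

# showmount -e <nfs-server-ip>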

#########################################
Error 5
#########################################

Kubelet will not start when swap is enabled on the server.

Fix:
####

# sudo vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Add the KUBELET_EXTRA_ARGS line as below:
----------------------------------------

Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS


Since kubelet.service has changed on disk, run 'systemctl daemon-reload' to reload the units:

# sudo systemctl daemon-reload
# sudo systemctl restart kubelet
# sudo systemctl status kubelet
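
Alternatively, if the node does not actually need swap, the more common fix is to disable it entirely, which is what kubeadm expects by default:

# sudo swapoff -a
# sudo sed -i '/ swap / s/^/#/' /etc/fstab    # comment out the swap line so it stays off after a reboot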

#########################################
Error 6
#########################################

 Warning  FailedCreatePodSandBox  21s                 kubelet, dn2       Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "45f5fede972ca436c32e8e39c87e87a016a9c868d8f40bd425bdf315d379c4bd" network for pod "myclient-api-6bb5ddd8d5-4lj8j": networkPlugin cni failed to set up pod "myclient-api-6bb5ddd8d5-4lj8j_default" network: failed to set bridge addr: "cni0" already has an IP address different from 10.244.2.1/24

Fix:
####
This issue occurs when kubeadm reset is run multiple times in the wrong way, because kubeadm reset does not delete the iptables rules or the cni0 and docker0 network interfaces. Reset kubeadm on the worker node that shows this error, as below:

kubeadm reset

systemctl stop kubelet
systemctl stop docker

rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
rm -rf /var/lib/etcd/

ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down

sudo ip link delete cni0
sudo ip link delete flannel.1

systemctl start docker
systemctl start kubelet

Now join the worker node back to the cluster using the kubeadm join command.
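
A fresh join command can be generated on the control-plane node; the endpoint, token, and hash below are placeholders:

# On the control-plane node:
kubeadm token create --print-join-command

# On the worker node, run the printed command, which looks like:
kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>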

