
Proper way of Kubeadm reset


Log in to the Master Node:
#####################
sudo kubeadm reset
sudo systemctl stop docker && sudo systemctl stop kubelet
sudo rm -rf /etc/kubernetes/
sudo rm -rf ~/.kube/
sudo rm -rf /var/lib/kubelet/
sudo rm -rf /var/lib/cni/
sudo rm -rf /etc/cni/
sudo rm -rf /var/lib/etcd/
sudo ifconfig cni0 down
sudo ifconfig flannel.1 down
sudo ifconfig docker0 down
sudo ip link delete cni0
sudo ip link delete flannel.1
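Before moving on, you can double-check that the CNI interfaces and the state directories created by kubeadm are really gone. These are read-only checks and assume the flannel/cni0 names used above:

ip link show | grep -E 'cni0|flannel.1'        # should print nothing
ls /etc/kubernetes /var/lib/etcd 2>/dev/null   # should print nothing once both directories are removed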
kubeadm reset does not delete any of the iptables rules it originally created. In other words, if you try to bootstrap your cluster with a different pod networking CIDR range or different networking options, you might run into trouble.
Please note that if you are using a firewall configuration tool like ufw, which uses iptables as its system of record, the command below might render your system inaccessible.
Because of this, we recommend that you flush all iptables rules:
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
after you run kubeadm reset and before you re-bootstrap the node (kubeadm init) with new parameters.
This ensures that you really have a blank slate, and potentially saves you a lot of nasty network debugging.
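If you want to see what kube-proxy and the CNI plugin left behind before flushing (or to confirm the flush worked), listing the rules is a safe read-only check:

sudo iptables -L -n
sudo iptables -t nat -L -n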
sudo systemctl start docker && sudo systemctl start kubelet
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint=<stable IP address or DNS name for the control plane>
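Once kubeadm init completes, it prints the standard commands for pointing kubectl at the new cluster as a regular user; they typically look like this:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config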


# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
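After applying the flannel manifest, you can watch the network pods come up and wait for the node to go Ready (with this older manifest revision the flannel pods are expected to land in the kube-system namespace; newer releases use a dedicated kube-flannel namespace):

kubectl get pods -n kube-system -l app=flannel
kubectl get nodes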
