
INSTALLING K8S AS A CLUSTER USING KUBEADM


#############################################################################
Step 1: We need multiple systems or VMs created for configuring a multi-node cluster.

Step 2: Install a container runtime engine (Docker) on all the master and worker nodes.

Step 3: Install kubeadm (pronounced "kube admin") on all the master and worker nodes.

Step 4: Initialize the control plane on the master node.

Step 5: Make sure a Pod network is configured between the master and worker nodes.

Step 6: Join the worker nodes to the cluster.

#############################################################################
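Before starting, it is worth verifying that every node has a unique hostname, MAC address, and product_uuid, since kubeadm requires these to differ between nodes. A quick pre-check (not part of the original step list) you can run on each node:

$ hostnamectl                                # hostname should be unique per node
$ ip link show                               # MAC addresses should be unique
$ sudo cat /sys/class/dmi/id/product_uuid    # product_uuid should be unique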

UBUNTU
#########


Execute the below commands on all the nodes:
# apt-get update
# sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# sudo apt-key fingerprint 0EBFCD88
# sudo add-apt-repository \
      "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
# sudo apt-get update
# sudo apt-get install docker-ce docker-ce-cli containerd.io
The above command will install the latest Docker version.
We need to install the Kubernetes-supported version of Docker in order for Docker and Kubernetes to work together without any issue. To find the supported version of Docker, go to the Kubernetes release notes (CHANGELOG) and search for "docker version"; you will see a line like the one below:
The validated docker versions are the same as for v1.8: 1.11.2 to 1.13.1 and 17.03.x
To install a specific version of Docker Engine - Community, list the available versions in the repo, then select and install:
a. List the versions available in your repo:
$ apt-cache madison docker-ce
 docker-ce | 5:18.09.1~3-0~ubuntu-xenial | https://download.docker.com/linux/ubuntu  xenial/stable amd64 Packages
$ sudo apt-get install docker-ce=<VERSION_STRING> docker-ce-cli=<VERSION_STRING> containerd.io
For example, the command will look like below:
$ sudo apt-get install docker-ce=5:18.09.1~3-0~ubuntu-xenial docker-ce-cli=5:18.09.1~3-0~ubuntu-xenial containerd.io
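Optionally, to keep apt from later upgrading Docker past the validated version, you can pin the packages:

$ sudo apt-mark hold docker-ce docker-ce-cli containerd.io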


kubeadm installation commands : https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/


Execute the below commands on all the nodes:
# apt-get update && apt-get install -y apt-transport-https curl
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# cat << EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
# apt-get update
# apt-get install -y kubelet kubeadm kubectl
# apt-mark hold kubelet kubeadm kubectl
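You can confirm what was installed and held before moving on (output varies by release):

# kubeadm version -o short
# kubectl version --client
# kubelet --version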
Initializing the Master:
Execute the below command in the Master node:
# kubeadm init --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint=StableIP
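Here StableIP is a placeholder for a stable address (an IP or DNS name, optionally with a port) that all nodes can always use to reach the API server; setting it up front is what allows you to add more control-plane nodes later. A hypothetical example:

# kubeadm init --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint="k8s-api.example.com:6443"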
To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
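At this point kubectl can reach the cluster, but the master will report NotReady until a Pod network add-on is installed (illustrative output):

# kubectl get nodes
NAME      STATUS     ROLES                  AGE   VERSION
master1   NotReady   control-plane,master   1m    v1.21.1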
Installing a POD Network ADD-ON:

Execute the below command in the Master node:
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml

Once a pod network has been installed, you can confirm that it is working by checking that the CoreDNS pod is Running in the output of kubectl get pods --all-namespaces. And once the CoreDNS pod is up and running, you can continue by joining your nodes.
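For example:

# kubectl get pods --all-namespaces
# kubectl get pods -n kube-system -l k8s-app=kube-dns    # CoreDNS pods should show Running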
Join Worker Nodes to the Master:


Execute the below command on the worker nodes; you get this command as part of the kubeadm init output (token and hash values below are placeholders):
# sudo kubeadm join --token 9238u4.2348u9jo9e8ul 10.0.1.13:6443 --discovery-token-ca-cert-hash sha256:lj989sf9u8sdflljsdf98sy
After joining, each node will print a message like "Successfully established connection with API Server '10.0.1.13:6443'".
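If you have lost the original join command, or the token has expired (tokens are valid for 24 hours by default), you can print a fresh one on the master node:

# kubeadm token create --print-join-command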

Now you can check the status of the join by running the below commands on the master node:

# kubectl get nodes
# kubectl run nginx --image=nginx
# kubectl get pods -o wide # This will include the IP and the NODE each pod belongs to.
# kubectl delete deployment/nginx
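Note that on kubectl v1.18 and later, kubectl run creates a bare Pod rather than a Deployment; in that case use kubectl create deployment nginx --image=nginx so the delete command above matches. Before deleting, you can also verify Pod networking end to end by exposing the deployment (values below are illustrative):

# kubectl expose deployment nginx --port=80 --type=NodePort
# kubectl get svc nginx                    # note the assigned NodePort, e.g. 80:3xxxx/TCP
# curl http://<any-node-ip>:<node-port>    # should return the nginx welcome page
# kubectl delete svc nginx                 # clean up the test service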

##################################
####### REDHAT ##############
##################################


Section 1: K8s cluster setup on Red Hat VMs with 1 master and 2 worker nodes.
Steps to be done in Master and all the worker nodes
############################################

Disable Firewall
$ systemctl disable firewalld; systemctl stop firewalld

Disable swap
$ swapoff -a; sed -i '/swap/d' /etc/fstab

Disable SELinux
$ setenforce 0
$ sed -i --follow-symlinks 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux

Configure iptables to see bridged traffic

To check whether the br_netfilter module is already loaded:

$ lsmod | grep br_netfilter

To load it:

$ modprobe br_netfilter

$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

$ sudo sysctl --system
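To confirm the settings took effect:

$ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables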

Install Docker container runtime

$ yum install -y yum-utils device-mapper-persistent-data lvm2
$ yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ yum install docker-ce-20.10.9-3.el7 docker-ce-cli-20.10.9-3.el7 containerd.io docker-compose-plugin -y
$ systemctl enable --now docker
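One commonly recommended extra step, not part of the original list, is to switch Docker to the systemd cgroup driver so that Docker and the kubelet agree on cgroup management (treat this as an assumption to validate for your environment):

$ cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
$ sudo systemctl restart docker
$ docker info | grep -i 'cgroup driver'    # should show: Cgroup Driver: systemd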

Kubernetes Setup

Add yum repository

$ cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Install Kubernetes components

$ yum install -y kubelet-1.21.1-0 kubeadm-1.21.1-0 kubectl-1.21.1-0

Enable and start the kubelet service

$ systemctl enable --now kubelet

Now, on the master node, initialize the cluster:

$ kubeadm init --apiserver-advertise-address=10.21.12.77 --pod-network-cidr=10.244.0.0/16

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.21.12.77:6443 --token s1nd04.1vnerbxxxxxxxxx --discovery-token-ca-cert-hash sha256:d274fe728aded3cxxxxxxxxxxxx
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
Deploy Pod network

$ sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
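You can watch the flannel pods come up before joining the workers (namespace matches the kube-flannel output above):

$ kubectl get pods -n kube-flannel -o wide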

Now Join the worker nodes to the cluster

Log in to the worker nodes and execute the below command:

$ kubeadm join 10.21.12.77:6443 --token s1nd04.1vnxxxxxxxxxxxx \
    --discovery-token-ca-cert-hash sha256:d274fe728aded3cxxxxxxxxxxxx
$ kubectl get nodes
NAME                     STATUS   ROLES                  AGE   VERSION
larkspur1.fyre.ibm.com   Ready    control-plane,master   20m   v1.21.1
larkspur2.fyre.ibm.com   Ready    <none>                 39s   v1.21.1
larkspur3.fyre.ibm.com   Ready    <none>                 20s   v1.21.1
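The empty ROLES column for the workers is normal. If you want them labeled as workers (purely cosmetic; node names taken from the output above), you can run on the master:

$ kubectl label node larkspur2.fyre.ibm.com node-role.kubernetes.io/worker=
$ kubectl label node larkspur3.fyre.ibm.com node-role.kubernetes.io/worker=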





