
Persistent Volume in a K8s Multinode Cluster with NFS

NFS server (10.0.1.9)
############
Amazon Linux machine

# yum update
# yum install nfs-utils -y
# systemctl enable nfs-server
# systemctl start nfs-server

# mkdir -p /srv/nfs/k8sdata
# chmod -R 777 /srv/nfs/k8sdata

# vi /etc/exports

/srv/nfs/k8sdata *(rw,no_subtree_check,no_root_squash,insecure)

:wq!
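Here rw allows read-write access, no_subtree_check skips subtree checking (a small performance and reliability win), no_root_squash lets root on the client act as root on the share (which the kubelet needs to manage mounted volumes), and insecure accepts connections from client ports above 1023, which some NFS clients use.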

# exportfs -rav

# exportfs -v
/srv/nfs/k8sdata  <world>(rw,sync,wdelay,hide,no_subtree_check,sec=sys,insecure,no_root_squash,no_all_squash)
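If the client later cannot see this export, the usual suspects are the firewall or the EC2 security group. A quick sanity check that the NFS services are actually listening on the server (my own check, not part of the original setup):

# rpcinfo -p | grep -E 'nfs|mountd'
# ss -tlnp | grep 2049

On EC2, open at least TCP 2049 from the worker nodes in the server's security group (plus TCP/UDP 111 and the mountd port if you rely on NFSv3 or showmount).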

Now on the NFS client:
#################

Ubuntu 16.04

# sudo apt-get update
# sudo apt-get install nfs-common
# showmount -e 10.0.1.9
Export list for 10.0.1.9:
/srv/nfs/k8sdata *

Testing
-------
# sudo mount -t nfs 10.0.1.9:/srv/nfs/k8sdata /mnt
root@dn1:~# df -h | grep nfs
10.0.1.9:/srv/nfs/k8sdata  8.0G  1.8G  6.3G  22% /mnt
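Before unmounting, it is worth confirming that the export is actually writable from the client (the test file name here is my own):

# touch /mnt/nfs-write-test && ls -l /mnt/
# rm /mnt/nfs-write-test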

# umount /mnt

Now, in the kubectl terminal, create the PV and the PVC with the following YAMLs:

cat > 4-pv-nfs.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-pv1
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.1.9
    path: "/srv/nfs/k8sdata"

cat > 4-pvc-nfs.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-pv1
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
   
--------------------------------------
Here the catch is that the storageClassName in the PVC YAML must be the same as the storageClassName in the PV YAML:

# kubectl create -f 4-pv-nfs.yaml
persistentvolume/pv-nfs-pv1 created

# kubectl create -f 4-pvc-nfs.yaml
persistentvolumeclaim/pvc-nfs-pv1 created

ubuntu@namenode:~$ kubectl get pv,pvc
NAME                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
persistentvolume/pv-nfs-pv1   1Gi        RWX            Retain           Bound    default/pvc-nfs-pv1   manual                  4m26s

NAME                                STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pvc-nfs-pv1   Bound    pv-nfs-pv1   1Gi        RWX            manual         6s
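If the PVC had stayed Pending instead of Bound, the first thing to compare is the storageClassName on both objects, for example:

# kubectl get pv pv-nfs-pv1 -o jsonpath='{.spec.storageClassName}'
manual
# kubectl get pvc pvc-nfs-pv1 -o jsonpath='{.spec.storageClassName}'
manual

Both must print the same class (here, manual) for the claim to bind.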


Now you can create a Deployment with the volume parameters below:

--------------------------------------

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx-deploy-nfs
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: pvc-nfs-pv1
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
       
----------------------------------------

Here the catch is that the volumeMounts entry ( name: www ) in the "containers:" section must have the same name as the ( name: www ) entry in the "volumes:" section.

ubuntu@namenode:~/kubernetes/yamls$ kubectl create -f 4-nfs-nginx-1.6.yaml
deployment.apps/nginx-deploy-nfs created

ubuntu@namenode:~/kubernetes/yamls$ kubectl get pods -o wide | grep nfs
nginx-deploy-nfs-6fdd5b84cc-s4qfp     1/1     Running   0          35s    10.244.1.35   dn1   <none>   <none>
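You can also confirm the mount from inside the pod itself (pod name taken from the output above):

# kubectl exec -it nginx-deploy-nfs-6fdd5b84cc-s4qfp -- df -h /usr/share/nginx/html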

Now check on dn1 whether the NFS share is mounted:

root@dn1:~# df -h | grep nfs
10.0.1.9:/srv/nfs/k8sdata  8.0G  1.8G  6.3G  22% /var/lib/kubelet/pods/89523633-0a50-43c6-b13a-a96270f0c819/volumes/kubernetes.io~nfs/pv-nfs-pv1
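For an end-to-end test, drop a file into the share on the NFS server and fetch it through nginx; the file content is my own, and the pod IP is the one from the kubectl get pods output above:

On the NFS server (10.0.1.9):
# echo "hello from nfs" > /srv/nfs/k8sdata/index.html

From any cluster node:
# curl http://10.244.1.35/
hello from nfs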


Now expose your Deployment and create an nginx-ingress resource rule so it can be reached from outside; a minimal Ingress sketch follows the expose command below.

# kubectl expose deploy nginx-deploy-nfs --port 80
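A minimal sketch of such an Ingress rule, assuming an NGINX Ingress controller is already installed in the cluster; the hostname nfs-demo.example.com is a placeholder of mine:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-deploy-nfs
spec:
  ingressClassName: nginx
  rules:
  - host: nfs-demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-deploy-nfs
            port:
              number: 80

The backend service name nginx-deploy-nfs matches the Service created by the kubectl expose command above.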

That is all

Cheers.
