
How to Build and Deploy a Spring Boot Java application with Docker & Kubernetes


Pre-requisites:
Install Docker (client and server).
Install a K8s cluster.
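
A quick sanity check that both pre-requisites are in place (these commands only report status):

sudo docker version   # Should print both Client and Server versions
kubectl get nodes     # All cluster nodes should show STATUS "Ready"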

Run the following steps on the K8s master node.

git clone https://github.com/spring-guides/gs-spring-boot-docker


cd gs-spring-boot-docker/complete

./mvnw install -e -X # This will create a .jar file in the newly created target directory.
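
Before building the image you can confirm the jar was produced and optionally run it directly; the exact jar name depends on the project version, so the wildcard below is an assumption:

ls -l target/*.jar      # The fat jar packaged by the Maven wrapper
java -jar target/*.jar  # Optional local run; Ctrl+C to stop once it boots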

sudo docker build -t image-sb1 -f Dockerfile .
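
The guide's complete directory already ships a Dockerfile, so there is nothing to write here. If you were building your own image from scratch, a minimal sketch for a Spring Boot fat jar would look roughly like this (base image and jar path are assumptions, not the guide's exact file):

cat > Dockerfile <<'EOF'
FROM openjdk:8-jdk-alpine
# Copy the single fat jar produced by ./mvnw install into the image
COPY target/*.jar app.jar
# Run the Spring Boot application on the default port 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]
EOF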

sudo docker images
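
Optionally, smoke test the image locally before tagging and pushing it (the container name sb-test is just a throwaway):

sudo docker run -d -p 8080:8080 --name sb-test image-sb1
curl localhost:8080        # Should return "Hello Docker World"
sudo docker rm -f sb-test  # Remove the test container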
sudo docker tag image-sb1 trow.kube-public:31000/myrepo # Tag the new image for your local private registry (optional)
sudo docker push trow.kube-public:31000/myrepo  # Push the newly tagged image to your local private registry (optional)
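
If the push fails because the registry is not trusted (plain HTTP or a self-signed certificate), one common workaround is to list it as an insecure registry in the Docker daemon config and restart Docker:

sudo vi /etc/docker/daemon.json   # add: { "insecure-registries": ["trow.kube-public:31000"] }
sudo systemctl restart docker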

kubectl run image-sb --image=trow.kube-public:31000/myrepo --port=8080
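
Note: from kubectl v1.18 onwards, kubectl run creates a bare Pod instead of a Deployment, so on newer clusters the equivalent would be:

kubectl create deployment image-sb --image=trow.kube-public:31000/myrepo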

kubectl expose deployment/image-sb --type="NodePort" --port 8080
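
Before testing, you can wait for the rollout to complete and note the NodePort that was assigned:

kubectl rollout status deployment/image-sb   # Blocks until the pod is Running
kubectl get svc image-sb                     # Shows the 8080:3xxxx/TCP NodePort mapping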

Testing:
----------------------------------------------------------------------
ubuntu@namenode:~$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          2d2h
image-sb     NodePort    10.102.38.118   <none>        8080:30148/TCP   18m


ubuntu@namenode:~$ curl 10.102.38.118:8080
Hello Docker World

----------------------------------------------------------------------

ubuntu@namenode:~$ kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
image-sb-76448df777-mxwqq   1/1     Running   0          72m

The command below will get you a shell inside the K8s pod image-sb-76448df777-mxwqq:

ubuntu@namenode:~$ kubectl exec -ti image-sb-76448df777-mxwqq sh
#  wget localhost:8080
Connecting to localhost:8080 (127.0.0.1:8080)
index.html           100% |************************************************************************************************************************|    18  0:00:00 ETA
#
----------------------------------------------------------------------

If you are running this on an AWS EC2 instance, whitelist your public IP for the NodePort 30148 in the security group of your K8s master node.
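
If you prefer the AWS CLI over the console, the same rule can be added like this (the security group ID and source IP are placeholders):

aws ec2 authorize-security-group-ingress --group-id <sg-id-of-master-node> --protocol tcp --port 30148 --cidr <your-public-ip>/32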


Open http://<Master-Node-Public-IP>:30148 in the browser and you will see "Hello Docker World".

That is all .. Cheers :-)

