Set up a Private Registry for a K8s Cluster Using the Trow Install Script

Prerequisites:

A K8s cluster set up and running.

If you have not done this before, first check my post on setting up a K8s cluster using kubeadm:

https://jinojoseph.blogspot.com/2019/10/installing-k8s-as-cluster-using-kubeadm.html
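
Before going further, you can confirm that the cluster is reachable and that kubectl is pointing at the right context (a quick sanity check; this is not part of the Trow installer itself):

# kubectl config current-context
# kubectl get nodes

All nodes should report a Ready status.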

Once you have done that, what if you need to build and run your own image?

You're going to need to push your image to a registry that is accessible to Kubernetes. The obvious option is Docker Hub, but what if you want to keep your image private?

The answer: run a registry inside the Kubernetes cluster itself. This way there's no need to worry about hidden costs or pushing to external resources. You can use the default Docker registry for this purpose, but doing so securely requires setting up TLS certificates and some manual twiddling. A simpler option is to install the Trow registry via its install script, which also takes care of configuring TLS correctly.


# git clone https://github.com/ContainerSolutions/trow.git
# cd trow
# ./install.sh
Trow AutoInstaller for Kubernetes
=================================

This installer assumes kubectl is configured to point to the cluster you want to
install Trow on and that your user has cluster-admin rights.

This installer will perform the following steps:

  - Create a ServiceAccount and associated Roles for Trow 
  - Create a Kubernetes Service and Deployment
  - Request and sign a TLS certificate for Trow from the cluster CA
  - Copy the public certificate to all nodes in the cluster
  - Copy the public certificate to this machine (optional)
  - Register a ValidatingAdmissionWebhook (optional) 

If you're running on GKE, you may first need to give your user cluster-admin
rights:

  $ kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=<user>

Where <user> is your user, normally the e-mail address you use with your GKE
account.

Do you want to continue? (y/n) y

Starting Kubernetes Resources
serviceaccount/trow created
role.rbac.authorization.k8s.io/trow created
clusterrole.rbac.authorization.k8s.io/trow created
rolebinding.rbac.authorization.k8s.io/trow created
clusterrolebinding.rbac.authorization.k8s.io/trow created
deployment.apps/trow-deploy created
service/trow created

Approving certificate. This may take some time.
..................

Copying certs to nodes
job.batch/copy-certs-34454354-0f89-4781-ba02-465656 created
job.batch/copy-certs-34543543-1deb-4a64-ab00-45656 created

Do you wish to install certs on this host and configure /etc/hosts to allow access from this machine? (y/n) y

Copying cert into Docker
This requires sudo privileges
-----BEGIN CERTIFICATE-----
MIIEzjCCA7agAwIBAgIUCABTopWBLO6D1RpXH5Rk9M6kzxswDQYJKoZIhvcNAQEL
BQAwFTETMBEGA1UEAxMKa3ViZXJuZXRlczAeFw0xOTEwMTYwNjQwMDBaFw0yMDEw
msgmYvMUKsnLyzAMs85GQecC74ELD0uU8NnvAoRa1a8NAfIWmpXLhYQQiU0JYnbN
xJCEYXSVCQIDAQABo4IBDjCCAQowDgYDVR0PAQH/BAQDAgWgMBMGA1UdJQQMMAoG
CCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwHQYDVR0OBBYEFAEgnXOeOnJA9Kbo/bp3
DZuvu6kOMIG1BgNVHREEga0wgaqCInRyb3cua3ViZS1wdWJsaWMuc3ZjLmNsdXN0
==
-----END CERTIFICATE-----
Successfully copied cert
Adding entry to /etc/hosts for trow.kube-public

No external IP listed in "kubectl get nodes -o wide"
Trying minikube
Not minikube.
Trying internal IP which may work for local clusters e.g. microk8s

Exposing registry via /etc/hosts
This requires sudo privileges
10.0.1.12 trow.kube-public # added for trow registry

Successfully configured localhost

Do you want to configure Trow as a validation webhook (NB this will stop external images from being deployed to the cluster)? (y/n) n
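
Before testing, it is worth verifying that the registry actually came up. The resource names and the kube-public namespace below are inferred from the installer output above (trow-deploy, the trow service, and the trow.kube-public hostname), so treat this as a sketch:

# kubectl get deployment trow-deploy -n kube-public
# kubectl get svc trow -n kube-public

The deployment should report 1/1 ready, and the service should expose the registry on port 31000, the port used in the docker tag below.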

# Testing:
---------
ubuntu@namenode:~$ sudo docker pull nginx:latest
latest: Pulling from library/nginx
b8f262c62ec6: Pull complete 
e9218e8f93b1: Pull complete 
7acba7289aa3: Pull complete 
Digest: sha256:aeded0f2d2b11fcc7fcadc16ccd1
Status: Downloaded newer image for nginx:latest
ubuntu@namenode:~$ 
ubuntu@namenode:~$ 
ubuntu@namenode:~$ sudo docker tag nginx:latest trow.kube-public:31000/mynginx:test
ubuntu@namenode:~$ 
ubuntu@namenode:~$ sudo docker push trow.kube-public:31000/mynginx:test
The push refers to repository [trow.kube-public:31000/mynginx]
509a5ea4aeeb: Pushed 
3bb51901dfa3: Pushed 
2db44bce66cd: Pushed 
test: digest: sha256:dbdfa744f53d596f7bae34540 size: 928
ubuntu@namenode:~$ 
ubuntu@namenode:~$ 
ubuntu@namenode:~$ kubectl run trow-nginx --image=trow.kube-public:31000/mynginx:test
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/trow-nginx created
ubuntu@namenode:~$ 
ubuntu@namenode:~$ 
ubuntu@namenode:~$ kubectl get deploy
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
trow-nginx   1/1     1            1           38s
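
As the warning above notes, the deployment generator for kubectl run is deprecated, so the same test can be done with an explicit Deployment manifest. Here is a minimal sketch; the mynginx-deploy name and app label are hypothetical, chosen so they don't clash with the trow-nginx deployment created above:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mynginx-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mynginx
  template:
    metadata:
      labels:
        app: mynginx
    spec:
      containers:
      - name: mynginx
        # Image served by the in-cluster Trow registry
        image: trow.kube-public:31000/mynginx:test
EOF

kubectl get pods -l app=mynginx should then show the pod running, pulled from the in-cluster registry.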

