
An AWS CLI script for creating a basic VPC environment.
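The script carves a 10.0.0.0/16 VPC into two /24 subnets. As a quick sketch of the address math (the `cidr_size` helper is just for illustration, it is not part of the script):

```shell
#!/bin/bash
# Total addresses for a CIDR prefix length: 2^(32 - prefix).
# Note: AWS reserves 5 addresses in every subnet, so usable hosts = total - 5.
cidr_size() { echo $(( 1 << (32 - $1) )); }

cidr_size 16   # the /16 VPC: 65536 addresses
cidr_size 24   # each /24 subnet: 256 addresses
```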

#!/bin/bash
###################################
#Created on 27-Jun-2018
#Purpose : Create a VPC, public/private subnets, route tables, an internet
#          gateway and a NAT gateway, and associate them with the subnets
#Modified on : 13-Sep-2018
####################################

vpcName="My-VPC"
vpcCidrBlock="10.0.0.0/16"
PubsubNetCidrBlock="10.0.1.0/24"
PrvsubNetCidrBlock="10.0.2.0/24"
pubAvailabilityZone="ap-northeast-1a"
prvAvailabilityZone="ap-northeast-1c"
pubSubnetName="PublicSubnet-My"
prvSubnetName="PrivateSubnet-My"
PubRouteTableName="MyPublicRoute"
PrvRouteTableName="MyPrivateRoute"
destinationCidrBlock="0.0.0.0/0"


#Create a VPC with a 10.0.0.0/16 CIDR block.
aws_response=$(aws ec2 create-vpc --cidr-block "$vpcCidrBlock" --output json)
vpcId=$(echo "$aws_response" | jq -r '.Vpc.VpcId')


#name the vpc
aws ec2 create-tags --resources "$vpcId" --tags Key=Name,Value="$vpcName"

#create internet gateway
gateway_response=$(aws ec2 create-internet-gateway --output json)
gatewayId=$(echo "$gateway_response" | jq -r '.InternetGateway.InternetGatewayId')

#name the internet gateway
aws ec2 create-tags --resources "$gatewayId" --tags Key=Name,Value=My-Gateway

#attach gateway to vpc
attach_response=$(aws ec2 attach-internet-gateway --internet-gateway-id "$gatewayId"  --vpc-id "$vpcId")

#create Public subnet for vpc with /24 cidr block
pub_subnet_response=$(aws ec2 create-subnet --cidr-block "$PubsubNetCidrBlock" --availability-zone "$pubAvailabilityZone" --vpc-id "$vpcId" --output json)
pubsubnetId=$(echo "$pub_subnet_response" | jq -r '.Subnet.SubnetId')

#name the Public subnet
aws ec2 create-tags --resources "$pubsubnetId" --tags Key=Name,Value="$pubSubnetName"

#enable public ip on public subnet
modify_response=$(aws ec2 modify-subnet-attribute --subnet-id "$pubsubnetId" --map-public-ip-on-launch)

#create Private subnet for vpc with /24 cidr block
prv_subnet_response=$(aws ec2 create-subnet --cidr-block "$PrvsubNetCidrBlock" --availability-zone "$prvAvailabilityZone" --vpc-id "$vpcId" --output json)
prvsubnetId=$(echo "$prv_subnet_response" | jq -r '.Subnet.SubnetId')

#name the Private subnet
aws ec2 create-tags --resources "$prvsubnetId" --tags Key=Name,Value="$prvSubnetName"


#create public route table for vpc
route_table_response=$(aws ec2 create-route-table --vpc-id "$vpcId" --output json)
pubrouteTableId=$(echo "$route_table_response" | jq -r '.RouteTable.RouteTableId')

#name the public route table
aws ec2 create-tags --resources "$pubrouteTableId" --tags Key=Name,Value="$PubRouteTableName"

#add route for the internet gateway
route_response=$(aws ec2 create-route --route-table-id "$pubrouteTableId" --destination-cidr-block "$destinationCidrBlock" --gateway-id "$gatewayId")


#Associate public subnet to public route table
associate_response=$(aws ec2 associate-route-table --subnet-id "$pubsubnetId" --route-table-id "$pubrouteTableId")


#create private route table for vpc
prv_route_table_response=$(aws ec2 create-route-table --vpc-id "$vpcId" --output json)
prvrouteTableId=$(echo "$prv_route_table_response" | jq -r '.RouteTable.RouteTableId')

#name the private route table
aws ec2 create-tags --resources "$prvrouteTableId" --tags Key=Name,Value="$PrvRouteTableName"


#Associate private subnet to private route table
prv_associate_response=$(aws ec2 associate-route-table --subnet-id "$prvsubnetId" --route-table-id "$prvrouteTableId")


#Allocate an Elastic IP for the NAT gateway and capture its allocation ID
eip_response=$(aws ec2 allocate-address --domain vpc --output json)
allocationId=$(echo "$eip_response" | jq -r '.AllocationId')

#Create the NAT gateway in the PUBLIC subnet, associated with the Elastic IP allocated above
nat_response=$(aws ec2 create-nat-gateway --subnet-id "$pubsubnetId" --allocation-id "$allocationId" --output json)
natGatewayId=$(echo "$nat_response" | jq -r '.NatGateway.NatGatewayId')

#Wait until the NAT gateway is available, then route the private subnet's internet-bound traffic through it
aws ec2 wait nat-gateway-available --nat-gateway-ids "$natGatewayId"
prv_route_response=$(aws ec2 create-route --route-table-id "$prvrouteTableId" --destination-cidr-block "$destinationCidrBlock" --nat-gateway-id "$natGatewayId")
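Every `aws ... --output json` call in the script is parsed with jq. A minimal sketch of that pattern, run against a mock response (the VPC ID here is made up), showing why `jq -r` alone is enough:

```shell
#!/bin/bash
# Mock of an `aws ec2 create-vpc --output json` response (hypothetical ID).
mock_response='{"Vpc": {"VpcId": "vpc-0abc123def", "CidrBlock": "10.0.0.0/16"}}'

# -r (raw output) emits the bare string, so no `tr -d '"'` cleanup pass is needed.
vpcId=$(echo "$mock_response" | jq -r '.Vpc.VpcId')
echo "$vpcId"
```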
