Integration of AWS and Terraform

First of all, let's talk about what Terraform actually is:

Basically, Terraform is an infrastructure automation tool.

Terraform is one of the most popular infrastructure-as-code tools. It is a product of HashiCorp, an open-source software company based in San Francisco, California.

It provides a standardized way to define infrastructure for a variety of providers, e.g. AWS, Azure, GCP, and OpenStack. So instead of learning the commands of each cloud separately, we can use Terraform, which uses a declarative language.

So, here I have tried to do something similar with Terraform:

Task Description:

Task 1: Create/launch an application using Terraform

1. Create a key and a security group which allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use the key and security group created in step 1.

4. Launch one EBS volume and mount it onto /var/www/html.

5. The developer has uploaded the code into a GitHub repo; the repo also has some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to public readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

So first, the AWS provider should be declared inside our Terraform file. Here we have used the AWS profile feature, because we don't want the outside world to see the credentials of our AWS account.

provider "aws" {
  region  = "ap-south-1"
  profile = "mylogin"
}
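The named profile referenced above must already exist on the machine running Terraform. One way to create it is with the AWS CLI (the profile name mylogin matches the provider block; the access keys are your own):

```shell
# Interactively store access key, secret key, region, and output format
# for the "mylogin" profile in ~/.aws/credentials and ~/.aws/config
aws configure --profile mylogin
```

This keeps the credentials out of the Terraform file itself, so the code can be shared or pushed to GitHub safely.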

Now, creating the private key:

resource "tls_private_key" "example" {
  algorithm = "RSA"
  rsa_bits  = 2048
}

Creating the key pair and storing the private key on the local system:

resource "aws_key_pair" "deployer" {
  key_name   = "deployer-key"
  public_key = tls_private_key.example.public_key_openssh
}

resource "local_file" "foo" {
  content  = tls_private_key.example.private_key_pem
  filename = "C:/Users/HP/Downloads/aws_keys/deployer-key.pem"
}

Now creating the security group, with inbound rules for the SSH and HTTP protocols, so that anyone can SSH into the EC2 instance and access the web server running on it.

resource "aws_security_group" "allow_SSH_HTTP" {
  name        = "allow_SSH_HTTP"
  description = "Allows SSH and HTTP"
  vpc_id      = "vpc-c21804aa"

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "allow_SSH_HTTP"
  }
}

Now, creating the EC2 instance with the key and security group created above. We also want to install the HTTP web server and do some other configuration inside the instance, so we use the Terraform provisioner called remote-exec, which provides exactly this.

resource "aws_instance" "web" {

  depends_on = [
    local_file.foo,
  ]

  ami             = "ami-052c08d70def0ac62"
  instance_type   = "t2.micro"
  key_name        = "deployer-key"
  security_groups = ["allow_SSH_HTTP"]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.example.private_key_pem
    host        = aws_instance.web.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "HelloWorld"
  }
}

Now you can see that the Apache web server has been successfully installed inside the EC2 instance with the help of the remote-exec provisioner:

Now, an EBS volume is created so that it can be used by the EC2 instance:

resource "aws_ebs_volume" "myebs" {
  availability_zone = aws_instance.web.availability_zone
  size              = 1

  tags = {
    Name = "lwhybridebs"
  }
}

You can see two volumes come up, because one volume comes with the EC2 instance by default and one we have created separately:

Attaching the above-created EBS volume to the EC2 instance:

resource "aws_volume_attachment" "AttachVolume" {
  device_name  = "/dev/sdh"
  volume_id    = aws_ebs_volume.myebs.id
  instance_id  = aws_instance.web.id
  force_detach = true
}

Printing the EC2 instance's public IP:

output "My_instance_ip" {
  value = aws_instance.web.public_ip
}

Now we have to format the newly attached EBS volume and mount it. To do so, we again go inside the EC2 instance with the remote-exec provisioner; we also pull our code from the GitHub repo into the /var/www/html folder. This whole resource depends on the EBS volume attachment resource, because if that fails, running these commands is pointless.

resource "null_resource" "nullremote3" {

  depends_on = [
    aws_volume_attachment.AttachVolume,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.example.private_key_pem
    host        = aws_instance.web.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdh",
      "sudo mount /dev/xvdh /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/Ma9si/LwProject.git /var/www/html/",
      "sudo restorecon -r /var/www/html"
    ]
  }
}
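If we want to double-check the mount, we can SSH into the instance after the apply finishes and inspect the block devices (assuming the same device name /dev/xvdh that the provisioner used; note that AWS exposes /dev/sdh to the instance as /dev/xvdh):

```shell
# List block devices; the extra 1 GiB volume should appear as xvdh
lsblk

# Confirm the volume is mounted on the Apache web root
df -h /var/www/html
```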

The pulled web page looks like this:

With the help of the local-exec provisioner, we can download the images in our GitHub repository onto our local machine:

resource "null_resource" "nullremote2" {
  provisioner "local-exec" {
    command = "git clone https://github.com/Ma9si/LwProject.git C:/Users/HP/Documents/myhybridfolder"
  }
}

Now, the S3 bucket is created:

resource "aws_s3_bucket" "mybucket" {
  bucket = "mybucket112"

  tags = {
    Name        = "My bucket"
    Environment = "Dev"
  }
}

If we create a bucket from the AWS GUI, public access is blocked by default, but in Terraform we have to block public access manually:

resource "aws_s3_bucket_public_access_block" "example" {
  bucket = aws_s3_bucket.mybucket.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

Now uploading the image into the bucket:

resource "aws_s3_bucket_object" "object" {
  bucket = aws_s3_bucket.mybucket.bucket
  key    = "mansi.jpg"
  source = "C:/Users/HP/Documents/myhybridfolder/mansi.jpg"
  acl    = "private"
}

locals {
  s3_origin_id = "myS3Origin"
}

Now, a CloudFront distribution is created for the above-created S3 bucket, and an origin access identity has also been provided for it:

resource "aws_cloudfront_origin_access_identity" "origin_access_identity" {
  comment = "Some comment"
}

resource "aws_cloudfront_distribution" "s3_distribution" {

  depends_on = [
    aws_s3_bucket.mybucket,
    aws_s3_bucket_object.object
  ]

  origin {
    domain_name = aws_s3_bucket.mybucket.bucket_regional_domain_name
    origin_id   = local.s3_origin_id

    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path
    }
  }

  enabled         = true
  is_ipv6_enabled = true
  comment         = "Some comment"

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    viewer_protocol_policy = "redirect-to-https"
  }

  restrictions {
    geo_restriction {
      restriction_type = "whitelist"
      locations        = ["IN"]
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

Also, the bucket policy has been updated so that anyone coming through the CloudFront domain can access the images inside the S3 bucket:

data "aws_iam_policy_document" "s3_policy" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.mybucket.arn}/*"]

    principals {
      type        = "AWS"
      identifiers = [aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn]
    }
  }

  statement {
    actions   = ["s3:ListBucket"]
    resources = [aws_s3_bucket.mybucket.arn]

    principals {
      type        = "AWS"
      identifiers = [aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn]
    }
  }
}

resource "aws_s3_bucket_policy" "example" {
  bucket = aws_s3_bucket.mybucket.id
  policy = data.aws_iam_policy_document.s3_policy.json
}
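After applying, the policy that Terraform attached can be inspected with the AWS CLI, to verify it grants access only to the origin access identity (this assumes the same bucket name and profile used throughout this article):

```shell
# Fetch the bucket policy attached by Terraform and print it as JSON
aws s3api get-bucket-policy --bucket mybucket112 --profile mylogin
```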

Now we have the CloudFront domain name, and we have to update our code inside the EC2 instance in the /var/www/html folder, so we go inside the instance again and update the code:

resource "null_resource" "nullremote4" {

  depends_on = [
    aws_cloudfront_distribution.s3_distribution
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.example.private_key_pem
    host        = aws_instance.web.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo sed -i '$a <img src=\"https://${aws_cloudfront_distribution.s3_distribution.domain_name}/mansi.jpg\" width=\"200\" height=\"200\">' /var/www/html/project.html",
    ]
  }
}

This is done to print the CloudFront domain name in the cmd shell:

output "Cloud_Front_Domain_Name" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}

Now we can finally see our web page, with the help of the local-exec provisioner and the chrome command:

resource "null_resource" "nullremote5" {

  depends_on = [
    null_resource.nullremote4,
  ]

  provisioner "local-exec" {
    command = "chrome ${aws_instance.web.public_ip}/project.html"
  }
}

You can see in the above screenshot that the public IP of the EC2 instance and the CloudFront domain name have been printed as outputs.

For the code explained above, first we have to run terraform init to download the plugins for the resources, and then terraform apply to create these resources in AWS.

And if we want to destroy all of the resources created above, we can do it in one go with the terraform destroy command.
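The full workflow, assuming the code above is saved in a .tf file in the current directory, can be sketched as:

```shell
# Download the provider plugins used above (aws, tls, local, null)
terraform init

# Optional: check the configuration for syntax errors before applying
terraform validate

# Preview the changes, then create everything in AWS
terraform plan
terraform apply

# Tear the whole infrastructure down again when done
terraform destroy
```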

The GitHub URL where I have put the Terraform file for the above task is:

Thank You !