[AWS] Building a Scalable, Secure & Highly Available 3-tier Web App in the Cloud using Terraform

Project Overview

⭐ GitHub Repo For This Project ⭐

This project focuses on designing and deploying a secure, highly available and scalable 3-tier web application using Terraform. The architecture leverages Infrastructure as Code (IaC) principles to automate the provisioning of resources across AWS, ensuring consistency and efficiency. The architecture is divided into three tiers: the web tier (front end), the application tier (business logic) and the database tier (data management), each deployed within its respective subnets and security group to enhance security and modularity.

Objectives:

  • Automated Infrastructure Provisioning: Utilize Terraform to automate the deployment and configuration of all necessary infrastructure components, including VPCs, subnets, EC2 instances, load balancers, security groups, and RDS instances.
  • Scalable and Highly Available Architecture: Implement auto-scaling groups and load balancing to ensure the application can handle varying levels of traffic while maintaining availability.
  • Security Best Practices: Enforce strict security measures by isolating tiers using subnets and security groups, and by implementing encryption at rest and in transit where applicable.
  • Modular Design: Structure Terraform code into reusable modules, allowing for easy maintenance and scalability of the infrastructure.
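
To illustrate the last objective: we won't fully modularize the code in this walkthrough, but the idea looks like the sketch below. The './modules/network' path and 'vpc_cidr' variable are hypothetical, purely to show the shape of a module call.

# Hypothetical module call: the network resources packaged for reuse
module "network" {
  source   = "./modules/network"  # local module folder (illustrative)
  vpc_cidr = "10.0.0.0/16"        # value passed into the module's input variables
}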

Architecture Components:

1. VPC (Virtual Private Cloud): The network foundation that hosts all the resources, divided into public and private subnets across multiple availability zones for redundancy.

2. Presentation (Web) Tier:

  • AWS Elastic Load Balancer (ELB): Distributes incoming traffic across multiple EC2 instances to ensure high availability. In this case we will be using an Application Load Balancer (ALB).
  • EC2 Instances: Host the web server (Apache) that serves static content and forwards dynamic requests to the application tier.

3. Application Tier:

  • EC2 Instances: Runs the application logic (Python, Node.js) in an auto-scaling group to dynamically adjust to traffic demands.
  • Security Group: Restricts access to the application tier, allowing only traffic from the presentation tier.

4. Database Tier:

  • Amazon RDS (Relational Database Service): Manages the database tier; RDS supports multi-AZ deployment for fault tolerance, though our demo instance will run single-AZ to keep costs down. We will be using MySQL.
  • Security Group: Limits access to the database tier, ensuring only the application tier can communicate with the database.

Deliverables

  • Terraform Configuration Files: Includes the main configuration, variable definitions, and module files.
  • Deployment Documentation (This Document): A step-by-step guide on how to deploy the infrastructure using Terraform.
  • Security Policies: Documentation on the security measures implemented within the architecture.

Why the 3-tiers?

This architectural approach effectively addresses scalability, availability, and security concerns by distributing the application across multiple Availability Zones (AZs) and organizing it into three distinct layers, each with its own role. If an AZ experiences downtime, the application can automatically shift resources to another AZ without disrupting the other layers. Each layer is protected by its own security group, ensuring that only the necessary traffic for its specific functions is allowed.

  • Web/Presentation Layer: Contains the components that users interact with, including web servers and the frontend interface.
  • Application Layer: Manages the backend processes and application logic required for data processing and functionality.
  • Data Layer: Stores and manages the application's data, typically housing databases.

Entire Architecture

  • One VPC
  • Two Availability Zones: Existing AWS infrastructure that we deploy into (not resources we create ourselves)
  • Two Public Subnets: One public subnet in each of our Availability Zones (AZs)
  • Four Private Subnets: Two private subnets in each of our Availability Zones (Private Subnet 1 and Private Subnet 2 in AZ 1; Private Subnet 1 and Private Subnet 2 in AZ 2)
  • One Internet Gateway: Network component that allows our public instances (network resources deployed in public subnets) to receive and send traffic to the internet.
  • One Public Route Table: The route table that will direct traffic from public instances through the internet gateway
  • One NAT Gateway: Network component that allows our private instances to connect to the internet while preventing unsolicited inbound requests from the public internet from reaching our private instances.
  • One Private Route Table: The route table that will direct traffic from private instances through the NAT gateway
  • Two Application Load Balancers: A web-tier ALB that will distribute traffic amongst the web-tier EC2 instances and an application-tier ALB that will distribute traffic amongst the application-tier EC2 instances.
  • Two Auto Scaling Groups: A feature that will automatically adjust the number of EC2 instances in our web-tier and application-tier based on certain policies.
  • Six Security Groups: One each for the web tier, the application tier, the database tier, the web-tier ALB, the application-tier ALB and the bastion-host EC2 instance, each controlling inbound and outbound traffic for its resource.
Architectural Diagram

Implementation

Configuring Terraform

To kick off this project we need to ensure that Terraform is installed correctly; I would suggest following this link on How to Install Terraform on Windows. Once you have successfully installed Terraform on your Windows system, open up a terminal (cmd or PowerShell will work fine) as an administrator and type the following command:

terraform plan
If Terraform is installed correctly, the terminal will respond with Terraform output rather than an 'unrecognized command' error.
Creating AWS Access Keys

We now need to connect our AWS account to Terraform so that it can create and configure resources in our account. For this we are going to need an ACCESS KEY and SECRET ACCESS KEY from an IAM user that we are going to create. To obtain those keys, log in to your AWS account through your browser and navigate to the ‘IAM’ service. On the left panel, select ‘Users’ and then ‘Create User’.


After clicking ‘Create User’ you will be asked to give your user a name; something like ‘Terraform-CLI-Admin’ works, the exact name doesn't matter much. In the ‘Set Permissions’ section, be sure to select ‘Attach policies directly’, search for the ‘AdministratorAccess’ policy and click the little ‘+’ sign next to it to select it. Scroll to the bottom of the page, click ‘Next’, and on the next page scroll down and select ‘Create User’.


Once your IAM User has successfully been created you need to click on the IAM User you just created and then locate the ‘Security Credentials’ tab, scroll down and select ‘Create Access Key’.


After clicking ‘Create Access Key’, you will be presented with a page on which you must select the access key ‘use case’. For this project we will be selecting the ‘Command Line Interface (CLI)’ option.


Select ‘Next’, then select ‘Next’ again to finally retrieve the ACCESS KEY and SECRET ACCESS KEY.


Copy the Access Key and Secret Access Key. Now we need to create a new folder called ‘.aws’ in your user home directory; this is where the AWS provider looks for credentials by default (‘%USERPROFILE%\.aws\credentials’ on Windows). Next, open up ‘Notepad’ and use the following syntax when pasting your keys.

[default]
aws_access_key_id = AKIA6GBMGYXXXXXXXXX
aws_secret_access_key = PiRykPlXXuhorYGzv7YuNbSY0JXXxxXxxXXxxxX

An important step: after you have pasted your keys in the correct format in Notepad, you must save the file with the name ‘credentials.’ Take note of the ‘.’ after the word ‘credentials’; we do this to ensure the file is NOT saved as a ‘.txt’ file, since we need the file to have NO extension. Take note of the path where the file is being saved (it should be the ‘.aws’ folder), set ‘Save as type’ to ‘All Files’, and then click Save.


Next, go over to your terminal and navigate to your home directory (the one containing the ‘.aws’ folder). Once here you can simply run the

terraform init
command; if you have set things up correctly you should get a message confirming that Terraform has initialized.


The terraform init command initializes a Terraform working directory and sets up the environment for further Terraform operations like plan and apply.

Note that terraform init does not actually validate your AWS credentials; those are only checked when Terraform calls AWS during plan and apply. With initialization done, you may now start creating your configuration files. Terraform configuration files are simply text files with a ‘.tf’ extension in which we define the resources and infrastructure that we would like to create and manage.

I found it easiest to create and manage all the configuration files within the ‘.aws’ folder in which we saved our AWS credentials. In your terminal, navigate into the ‘.aws’ folder using the following command:

cd .aws


Once inside this folder you can run ‘terraform init’ again just for good measure.

Creating First Terraform Configuration File

Next, open your code editor; Notepad++, VS Code or PyCharm will all work, but for this project I will be using VS Code. Open up VS Code and create a new file. You can name your first configuration file whatever you like, but I named mine ‘Network-Resources-1.tf’. This will be our first configuration file, in which we will define our provider, our non-default VPC, all our subnets, an internet gateway, a NAT gateway and our route tables.

  • Provider: Code block defining the cloud platform that we will be working with (AWS); defining this block allows Terraform to manage and provision resources on that platform.
  • VPC: We will be defining one non-default VPC, which allows us to create and configure the VPC from the ground up rather than having things already set up for us. All our resources will be deployed within this VPC.
  • Subnets: Our subnets are simply segmented pieces of a larger network; in this case the larger network is our VPC. We will have public and private subnets in this project. For this we have 2 public subnets (Availability Zone 1 & Availability Zone 2) and 4 private subnets (2 in Availability Zone 1 & 2 in Availability Zone 2).
  • Internet Gateway: Network component in a VPC that allows public subnets to be able to send and receive traffic from the internet. This will give our 2 public subnets access to the internet.
  • NAT Gateway: Network component that allows instances in private subnets to send outbound traffic to the public internet but restricts inbound traffic for security purposes. This will give our private instances (resources deployed in private subnets) access to the internet.
  • Route Tables: A set of rules that determines how traffic is directed within our VPC, specifying how traffic is routed between subnets and other networks. Our public route table will route our public subnets to the internet gateway, and our private route table will route our private subnets to the NAT gateway.
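
One optional refinement, not shown in the file below, is to pin the AWS provider version with a terraform block so that terraform init resolves a predictable provider release. A minimal sketch (the "~> 5.0" constraint is my assumption, pick whichever major version you tested against):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"  # official AWS provider
      version = "~> 5.0"         # any 5.x release
    }
  }
}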

Open up your code editor, create a file named ‘Network-Resources-1.tf’ and input the following code to define all our required networking resources.


# Specify the AWS provider and region
provider "aws" {
    region = "us-east-1"
}

# Create a VPC with the specified CIDR block
resource "aws_vpc" "main" {
    cidr_block = "10.0.0.0/16"
}

# Create a public subnet in availability zone us-east-1a
resource "aws_subnet" "public_subnet1_AZ1" {
    vpc_id            = aws_vpc.main.id
    cidr_block        = "10.0.1.0/24"
    availability_zone = "us-east-1a"
}

# Create a public subnet in availability zone us-east-1b
resource "aws_subnet" "public_subnet1_AZ2" {
    vpc_id            = aws_vpc.main.id
    cidr_block        = "10.0.4.0/24"
    availability_zone = "us-east-1b"
}

# Create a private subnet in availability zone us-east-1a
resource "aws_subnet" "private_subnet1_AZ1" {
    vpc_id            = aws_vpc.main.id
    cidr_block        = "10.0.2.0/24"
    availability_zone = "us-east-1a"
}

# Create a private subnet in availability zone us-east-1b
resource "aws_subnet" "private_subnet1_AZ2" {
    vpc_id            = aws_vpc.main.id
    cidr_block        = "10.0.5.0/24"
    availability_zone = "us-east-1b"
}

# Create a second private subnet in availability zone us-east-1a
resource "aws_subnet" "private_subnet2_AZ1" {
    vpc_id            = aws_vpc.main.id
    cidr_block        = "10.0.3.0/24"
    availability_zone = "us-east-1a"
}

# Create a second private subnet in availability zone us-east-1b
resource "aws_subnet" "private_subnet2_AZ2" {
    vpc_id            = aws_vpc.main.id
    cidr_block        = "10.0.6.0/24"
    availability_zone = "us-east-1b"
}

# Create an Internet Gateway for the VPC
resource "aws_internet_gateway" "igw_1" {
    vpc_id = aws_vpc.main.id
}

# Create a route table for public subnets
resource "aws_route_table" "public_route_table" {
    vpc_id = aws_vpc.main.id

    # Define a route to allow outbound traffic to the internet
    route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.igw_1.id
    }
}

# Associate the public route table with the first public subnet
resource "aws_route_table_association" "public_subnet1_AZ1_rt_association" {
    subnet_id      = aws_subnet.public_subnet1_AZ1.id
    route_table_id = aws_route_table.public_route_table.id
}

# Associate the public route table with the second public subnet
resource "aws_route_table_association" "public_subnet1_AZ2_rt_association" {
    subnet_id      = aws_subnet.public_subnet1_AZ2.id
    route_table_id = aws_route_table.public_route_table.id
}

# Allocate an Elastic IP for the NAT Gateway
resource "aws_eip" "nat_eip" {
    domain = "vpc"
}

# Create a NAT Gateway in the first public subnet
resource "aws_nat_gateway" "nat_gateway_1" {
    allocation_id = aws_eip.nat_eip.id
    subnet_id     = aws_subnet.public_subnet1_AZ1.id
}

# Create a route table for private subnets
resource "aws_route_table" "private_route_table" {
    vpc_id = aws_vpc.main.id

    # Define a route to allow private subnets to access the internet through the NAT Gateway
    route {
        cidr_block     = "0.0.0.0/0"
        nat_gateway_id = aws_nat_gateway.nat_gateway_1.id  # NAT gateways use nat_gateway_id, not gateway_id
    }
}

# Associate the private route table with the first private subnet
resource "aws_route_table_association" "private_subnet_AZ1_rt_association" {
    subnet_id      = aws_subnet.private_subnet1_AZ1.id
    route_table_id = aws_route_table.private_route_table.id
}

# Associate the private route table with the second private subnet
resource "aws_route_table_association" "private_subnet1_AZ2_rt_association" {
    subnet_id      = aws_subnet.private_subnet1_AZ2.id
    route_table_id = aws_route_table.private_route_table.id
}

I have added comments above each code block to help explain what it is doing. You can now save your first configuration file. We will now go back to the terminal and run the ‘terraform plan’ command to see what Terraform is going to provision in our AWS account based on this first configuration file.


In the plan output, Terraform reports that 16 resources are to be added, and it lists the name and type of each resource it reads from your configuration file.

At this point you could decide to run the next command, terraform apply. This command actually executes our configuration and provisions the resources in your AWS account, after which you can go into your AWS Management Console and view the newly created resources managed by Terraform.

Creating Second Terraform Configuration File

For this project we will write all of our configuration files before finally applying the configuration to our AWS account. Since we ran into no errors (you shouldn't either), we can go on to create the next round of network resources in a new configuration file called ‘Network-Resources-2.tf’, which this time will contain all of our security groups.

  • Security Group (SG): acts as a virtual firewall for EC2 instances, controlling inbound and outbound traffic based on defined rules. It allows us to specify allowed protocols, ports, and IP ranges to control access to our instances. We will have a total of 6 security groups.
    • Web-Tier Security Group: This security group will control inbound and outbound traffic to the EC2 instances deployed in our public subnets.
    • Application-Tier Security Group: This security group will control inbound and outbound traffic to the EC2 instances deployed in Private Subnet 1 of Availability Zone 1 and Private Subnet 1 of Availability Zone 2.
    • Database-Tier Security Group: This security group will control inbound and outbound traffic to the RDS database instance deployed in the database tier (Private Subnet 2 of Availability Zone 1 and Private Subnet 2 of Availability Zone 2).
    • Bastion-Host Security Group: This security group will control inbound and outbound traffic to our bastion-host.
    • Web-Tier Application Load Balancer SG: This security group will control inbound and outbound traffic to the application load balancer that will be deployed to balance the request load going into the EC2 instances in the web tier (public subnets).
    • Application-Tier Application Load Balancer SG: This security group will control inbound and outbound traffic to the application load balancer that will be deployed to balance the request load going into the EC2 instances deployed in the application tier (private subnet 1 of AZ 1 & private subnet 1 of AZ 2).

Below is the terraform code to create the above-mentioned security group resources.

# Security Group for the web tier (EC2 instances running web servers)
resource "aws_security_group" "web_tier_sg"{
    name = "web_sg"
    vpc_id = aws_vpc.main.id

    # Allow incoming HTTP traffic from any IP address
    ingress{
        from_port = 80
        to_port = 80
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }

    # Allow all outgoing traffic
    egress{
        from_port = 0
        to_port = 0
        protocol = "-1"
        cidr_blocks = ["0.0.0.0/0"]
    }
}

# Security Group for the application tier (EC2 instances running application servers)
resource "aws_security_group" "app_tier_sg"{
    name = "app_sg"
    vpc_id = aws_vpc.main.id

    # Allow incoming HTTP traffic only from the web tier security group
    ingress{
        from_port = 80
        to_port = 80
        protocol = "tcp"
        security_groups = [aws_security_group.web_tier_sg.id] # Allows incoming traffic from web tier SG only
    }

    # Allow all outgoing traffic
    egress{
        from_port = 0
        to_port = 0
        protocol = "-1"
        cidr_blocks = ["0.0.0.0/0"]
    }
}

# Security Group for the database tier (RDS instances)
resource "aws_security_group" "db_tier_sg"{
    name = "db_sg"
    vpc_id = aws_vpc.main.id

    # Allow incoming MySQL traffic only from the application tier security group
    ingress{
        from_port = 3306
        to_port = 3306
        protocol = "tcp"
        security_groups = [aws_security_group.app_tier_sg.id]
    }

    # Allow all outgoing traffic
    egress{
        from_port = 0
        to_port = 0
        protocol = "-1"
        cidr_blocks = ["0.0.0.0/0"]
    }
}

# Security Group for the bastion host (used for SSH access)
resource "aws_security_group" "bastion_host_sg"{
    name = "bastion_sg"
    vpc_id = aws_vpc.main.id

    # Allow incoming SSH traffic from a specific IP address
    ingress{
        from_port = 22
        to_port = 22
        protocol = "tcp"
        cidr_blocks = ["xx.xxx.x.xxx/32"]  #replace this IP address with your IP address.
    }

    # Allow all outgoing traffic
    egress{
        from_port = 0
        to_port = 0
        protocol = "-1"
        cidr_blocks = ["0.0.0.0/0"]
    }
}

# Security Group for the web tier ALB
resource "aws_security_group" "web_tier_alb_sg"{
    name = "web_alb_sg"
    vpc_id = aws_vpc.main.id

    # Allow incoming HTTP traffic from any IP address
    ingress{
        from_port = 80
        to_port = 80
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }

    # Allow all outgoing traffic
    egress{
        from_port = 0
        to_port = 0
        protocol = "-1"
        cidr_blocks = ["0.0.0.0/0"]
    }
}

# Security Group for the application tier ALB
resource "aws_security_group" "app_tier_alb_sg"{
    name = "app_alb_sg"
    vpc_id = aws_vpc.main.id

    # Allow incoming HTTP traffic only from the web-tier instances, which forward requests to this ALB
    ingress{
        from_port = 80
        to_port = 80
        protocol = "tcp"
        security_groups = [aws_security_group.web_tier_sg.id] # source is the web-tier EC2 SG (the ALB receives requests from the web servers)
    }

    # Allow inbound traffic from the web tier on the health check port (if different)
    ingress{
        from_port = 8080
        to_port = 8080
        protocol = "tcp"
        security_groups = [aws_security_group.web_tier_sg.id]
    }

    # Allow all outgoing traffic
    egress{
        from_port = 0
        to_port = 0
        protocol = "-1"
        cidr_blocks = ["0.0.0.0/0"]
    }
}
                  

Mind you, Terraform treats all the configuration files in a directory as one big configuration. After running the terraform plan command in the terminal, we get the following output.


We can see that Terraform has discovered more resources to be added to our plan.


Previously Terraform only had 16 resources to add, but now it has increased to 22. We will continue to plan our Terraform deployment until we have completely defined all of our architecture.

Creating Third Terraform Configuration File

We can now go on to create the next round of network resources in a new configuration file called ‘Network-Resources-3.tf’, which this time will contain our EC2 instances, a bastion host, an RDS instance and a database subnet group.

  • EC2 Instances: are virtual servers provided by AWS that allow us to run applications and services on the cloud. They can be customized with different instance types based on computing power, memory, and storage requirements, and you can choose an operating system and its configurations. We will deploy 4 EC2 instances: 1 in Public Subnet 1 of Availability Zone 1, 1 in Public Subnet 1 of Availability Zone 2, 1 in Private Subnet 1 of Availability Zone 1, and 1 more in Private Subnet 1 of Availability Zone 2.
  • Bastion-Host: is a special EC2 instance used to securely access resources in private subnets. It acts as a gateway, allowing SSH connections from the internet to other instances in private subnets, while minimizing exposure and maintaining security. We will deploy 1 bastion host in the Public Subnet 1 of Availability Zone 1.
  • RDS Database Instance: a managed relational database service provided by AWS. It simplifies database management tasks and supports multiple database engines like MySQL, PostgreSQL, and SQL Server. It operates within a specified VPC and offers automated maintenance, monitoring, and high availability options. We will have 1 RDS database instance in Private Subnet 2 of Availability Zone 1.
  • Database Subnet Group: a collection of subnets within a VPC that RDS uses to place its database instances. It ensures that our database instances are distributed across multiple Availability Zones for high availability and fault tolerance. We will have only 1 database subnet group and register Private Subnet 2 of Availability Zone 1 and Private Subnet 2 of Availability Zone 2 to the database subnet group.

So now we will create our third Terraform configuration file and name it ‘Network-Resources-3.tf’. Below is the code to create the above-mentioned resources, along with some code block comments to shed more light on what is being achieved in each one. Note that the key_name arguments reference key pair resources that we will only define later, in ‘Network-Resources-6.tf’; Terraform merges all files in the directory, but terraform plan will report undeclared-resource errors until that file exists, so you may prefer to create the two files together.

# Define a bastion host instance for secure access to private instances
resource "aws_instance" "bastion_host" {
    ami                         = "ami-0427090fd1714168b"
    instance_type               = "t2.micro"
    subnet_id                   = aws_subnet.public_subnet1_AZ1.id
    associate_public_ip_address = true
    vpc_security_group_ids      = [aws_security_group.bastion_host_sg.id]  # SG IDs belong in vpc_security_group_ids, not security_groups
    key_name                    = aws_key_pair.bastion_host_key.key_name
}

# Define the first web tier EC2 instance
resource "aws_instance" "web_ec2_1" {
    ami                         = "ami-0427090fd1714168b"
    instance_type               = "t2.micro"
    subnet_id                   = aws_subnet.public_subnet1_AZ1.id
    associate_public_ip_address = true
    vpc_security_group_ids      = [aws_security_group.web_tier_sg.id]
    key_name                    = aws_key_pair.web_tier_ec2_1_key.key_name
}

# Define the second web tier EC2 instance
resource "aws_instance" "web_ec2_2" {
    ami                         = "ami-0427090fd1714168b"
    instance_type               = "t2.micro"
    subnet_id                   = aws_subnet.public_subnet1_AZ2.id
    associate_public_ip_address = true
    vpc_security_group_ids      = [aws_security_group.web_tier_sg.id]
}

# Define the first application tier EC2 instance
resource "aws_instance" "app_ec2_1" {
    ami                    = "ami-0427090fd1714168b"
    instance_type          = "t2.micro"
    subnet_id              = aws_subnet.private_subnet1_AZ1.id
    vpc_security_group_ids = [aws_security_group.app_tier_sg.id]
    key_name               = aws_key_pair.app_tier_ec2_1_key.key_name
}

# Define the second application tier EC2 instance
resource "aws_instance" "app_ec2_2" {
    ami                    = "ami-0427090fd1714168b"
    instance_type          = "t2.micro"
    subnet_id              = aws_subnet.private_subnet1_AZ2.id
    vpc_security_group_ids = [aws_security_group.app_tier_sg.id]
}

# Define a DB subnet group for RDS to use for placing database instances
resource "aws_db_subnet_group" "db" {
    name       = "rds_db_subnet_group_1"
    subnet_ids = [aws_subnet.private_subnet2_AZ1.id, aws_subnet.private_subnet2_AZ2.id]
}

# Define an RDS database instance
resource "aws_db_instance" "rds_db" {
    identifier             = "mydbinstance"
    allocated_storage      = 20
    engine                 = "mysql"
    engine_version         = "8.0.32"
    instance_class         = "db.t3.micro"
    username               = "super_rds1"
    password               = "defaultpassword"  # demo only; use a variable or secrets manager in real deployments
    parameter_group_name   = "default.mysql8.0"
    skip_final_snapshot    = true
    db_subnet_group_name   = aws_db_subnet_group.db.name
    vpc_security_group_ids = [aws_security_group.db_tier_sg.id]
    multi_az               = false  # single-AZ keeps this demo cheap; set to true for the multi-AZ failover described in the overview
    publicly_accessible    = false
    storage_type           = "gp2"
    apply_immediately      = true
}
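
One convenience worth adding (my addition, not part of the original files or the resource counts below) is an output that prints the RDS endpoint after apply, so you don't have to fetch it from the console later:

# Optional: print the RDS endpoint (host:port) once the instance is created
output "rds_endpoint" {
  value = aws_db_instance.rds_db.endpoint
}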
                    

At this point we can go back to the terminal and run the terraform plan command to see what Terraform thinks about our new configuration file.


As we can see, the number of resources Terraform has planned has increased once again; not because of the number of times we run the ‘terraform plan’ command, but because of the resources added across our multiple configuration files.

Creating Fourth Terraform Configuration File

Moving on, we will now be defining the code blocks for our next batch of network resources in a new configuration file called ‘Network-Resources-4.tf’. In this new configuration file, we will be using Terraform to create all of our application load balancers along with their target groups, listeners and attachments.

  • Application Load Balancer (ALB): a service that distributes or balances the load of incoming application traffic across multiple targets, such as EC2 instances and IP addresses, in multiple availability zones.
    • Web-Tier Application Load Balancer: This is the ALB that will be deployed to balance the load of application traffic coming into the web-tier EC2 instances.
    • Application-Tier Application Load Balancer: This is the ALB that will be deployed to balance the load of application traffic coming into the application-tier EC2 instances.

  • Application Load Balancer Target Group: a collection of targets, such as EC2 instances or IP addresses, that receive traffic from the load balancer.
  • Application Load Balancer Listener: a process that checks for connection requests using the configured protocol and port (HTTP traffic on port 80). Listeners are configured with rules that determine how the load balancer routes the requests to the registered targets inside the target groups.
  • Application Load Balancer Target Group Attachment: target group attachments are a process of linking targets, such as EC2 instances, with a target group. These attachments ensure that the load balancer can distribute incoming traffic to the registered targets based on the defined routing rules.

Below is the Terraform code used to create the above-mentioned resources.

# Creating The Web-Tier ALB
resource "aws_lb" "web_tier_alb" {
    internal = false
    load_balancer_type = "application"
    security_groups = [aws_security_group.web_tier_alb_sg.id]
    subnets = [aws_subnet.public_subnet1_AZ1.id, aws_subnet.public_subnet1_AZ2.id]
    enable_deletion_protection = false
    enable_http2 = true
}

# Creating The Web-Tier ALB Target Group
resource "aws_lb_target_group" "web_tier_alb_target_group" {
    name = "web-tg"
    port = 80
    protocol = "HTTP"
    vpc_id = aws_vpc.main.id
}

# Creating The Web-Tier ALB Listener
resource "aws_lb_listener" "web_tier_alb_listener" {
    load_balancer_arn = aws_lb.web_tier_alb.arn
    port = 80
    protocol = "HTTP"

    default_action {
        type = "forward"
        target_group_arn = aws_lb_target_group.web_tier_alb_target_group.arn
    }
}

# Attaching the first web instance to the Web-Tier ALB Target Group
resource "aws_lb_target_group_attachment" "web_tier_target_group_attachment_1" {
    target_group_arn = aws_lb_target_group.web_tier_alb_target_group.arn
    target_id = aws_instance.web_ec2_1.id
    port = 80
}

# Attaching the second web instance to the Web-Tier ALB Target Group
resource "aws_lb_target_group_attachment" "web_tier_target_group_attachment_2" {
    target_group_arn = aws_lb_target_group.web_tier_alb_target_group.arn
    target_id = aws_instance.web_ec2_2.id
    port = 80
}

# Creating The App-Tier ALB
resource "aws_lb" "app_tier_alb" {
    internal = true  # internal ALB: it lives in private subnets and only receives traffic from inside the VPC
    load_balancer_type = "application"
    security_groups = [aws_security_group.app_tier_alb_sg.id]
    subnets = [aws_subnet.private_subnet1_AZ1.id, aws_subnet.private_subnet1_AZ2.id]
}

# Creating The App-Tier ALB Target Group
resource "aws_lb_target_group" "app_tier_alb_target_group" {
    name = "app-tg"
    port = 80
    protocol = "HTTP"
    vpc_id = aws_vpc.main.id
}

# Creating The App-Tier ALB Listener
resource "aws_lb_listener" "app_tier_alb_listener" {
    load_balancer_arn = aws_lb.app_tier_alb.arn
    port = 80
    protocol = "HTTP"

    default_action {
        type = "forward"
        target_group_arn = aws_lb_target_group.app_tier_alb_target_group.arn
    }
}

# Attaching the first app instance to the App-Tier ALB Target Group
resource "aws_lb_target_group_attachment" "app_tier_target_group_attachment_1" {
    target_group_arn = aws_lb_target_group.app_tier_alb_target_group.arn
    target_id = aws_instance.app_ec2_1.id
    port = 80
}

# Attaching the second app instance to the App-Tier ALB Target Group
resource "aws_lb_target_group_attachment" "app_tier_target_group_attachment_2" {
    target_group_arn = aws_lb_target_group.app_tier_alb_target_group.arn
    target_id = aws_instance.app_ec2_2.id
    port = 80
}
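
The web-tier ALB's DNS name is ultimately how the application would be reached from a browser; an output like the following (my addition, not counted in the plans below) exposes it after apply:

# Optional: expose the public DNS name of the internet-facing web-tier ALB
output "web_alb_dns_name" {
  value = aws_lb.web_tier_alb.dns_name
}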
                      
                      

After saving the above code in our 4th configuration file, we run ‘terraform plan’ in our terminal and get the following outcome:


Once again, we can confirm that our new configurations have been added to the Terraform plan. If you are following along, you can also scroll through the terminal output after running terraform plan to check that all your resources have been included and that there are no errors or warnings. In my case there aren't any (because I already spent hours fighting the bugs, and won), so we proceed to the next configuration file, ‘Network-Resources-5.tf’.

Creating Fifth Terraform Configuration File

In this configuration file, we will be defining our Auto Scaling Groups along with a simple launch template.

  • Auto Scaling Group (ASG): automatically adjusts the number of EC2 instances in a group to meet demand. It maintains application availability and allows us to scale our EC2 capacity up or down automatically according to conditions we define. We will have an auto scaling group servicing our web-tier EC2 instances as well as our application-tier EC2 instances.
  • Launch-Template: set of configurations used to launch EC2 instances. It includes parameters such as instance type, security groups among other configurations, providing a standardized way to configure and manage instance settings.

The following code allowed us to provision these resources using Terraform:

# Define the launch template for the web tier instances
resource "aws_launch_template" "web_tier_lt" {
    image_id = "ami-0427090fd1714168b"  # Amazon Machine Image ID for the instance
    instance_type = "t2.micro"  # Type of instance to launch

    # Network interface configuration
    network_interfaces {
        associate_public_ip_address = true  # Assign a public IP address
        security_groups = [aws_security_group.web_tier_sg.id]  # Security group for the web tier instance
    }

    # Tag specifications for the instance
    tag_specifications {
        resource_type = "instance"  # Type of resource being tagged
        tags = {
            Name = "web-tier-instance"  # Tag for the instance
        }
    }
}

# Define the launch template for the application tier instances
resource "aws_launch_template" "app_tier_lt" {
    image_id = "ami-0427090fd1714168b"  # Amazon Machine Image ID for the instance
    instance_type = "t2.micro"  # Type of instance to launch

    # Network interface configuration
    network_interfaces {
        associate_public_ip_address = false  # Do not assign a public IP address
        security_groups = [aws_security_group.app_tier_sg.id]  # Security group for the application tier instance
    }

    # Tag specifications for the instance
    tag_specifications {
        resource_type = "instance"  # Type of resource being tagged
        tags = {
            Name = "app-tier-instance"  # Tag for the instance
        }
    }
}

# Define the Auto Scaling group for the web tier
resource "aws_autoscaling_group" "web_tier_asg" {
    desired_capacity = 2  # Desired number of instances
    max_size = 3  # Maximum number of instances
    min_size = 1  # Minimum number of instances
    vpc_zone_identifier = [aws_subnet.public_subnet1_AZ1.id, aws_subnet.public_subnet1_AZ2.id]  # Subnets for the instances
    
    # Launch template configuration
    launch_template {
        id = aws_launch_template.web_tier_lt.id  # Launch template ID
        version = "$Latest"  # Version of the launch template
    }

    target_group_arns = [aws_lb_target_group.web_tier_alb_target_group.arn]  # Target group ARN for the load balancer

    # Tag for the Auto Scaling group
    tag {
        key = "Name"
        value = "web-tier-asg"
        propagate_at_launch = true  # Propagate the tag to instances launched by this group
    }
}

# Define the Auto Scaling group for the application tier
resource "aws_autoscaling_group" "app_tier_asg" {
    desired_capacity = 2  # Desired number of instances
    max_size = 3  # Maximum number of instances
    min_size = 1  # Minimum number of instances
    vpc_zone_identifier = [aws_subnet.private_subnet1_AZ1.id, aws_subnet.private_subnet1_AZ2.id]  # Subnets for the instances

    # Launch template configuration
    launch_template {
        id = aws_launch_template.app_tier_lt.id  # Launch template ID
        version = "$Latest"  # Version of the launch template
    }

    target_group_arns = [aws_lb_target_group.app_tier_alb_target_group.arn]  # Target group ARN for the load balancer

    # Tag for the Auto Scaling group
    tag {
        key = "Name"
        value = "app-tier-asg"
        propagate_at_launch = true  # Propagate the tag to instances launched by this group
    }
}
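
Note that the ASGs above have no scaling policies attached, so on their own they will only maintain the desired capacity. To make them react to load, you would attach something like the target-tracking policy sketched below (a hypothetical addition, not part of our plan; the 50% CPU target is an arbitrary example, and the app tier would get an equivalent policy):

# Hypothetical target-tracking policy: scale the web-tier ASG to keep average CPU near 50%
resource "aws_autoscaling_policy" "web_tier_cpu_policy" {
  name                   = "web-tier-cpu-target"
  autoscaling_group_name = aws_autoscaling_group.web_tier_asg.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"  # built-in ASG CPU metric
    }
    target_value = 50.0  # add or remove instances to hold roughly 50% average CPU
  }
}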
                        

After creating and saving the new configuration file, we run terraform plan as usual, after which we get the following outcome from the terminal.


Once again, Terraform has acknowledged our new configuration, and we now have a total of 43 resources to be created and deployed in our AWS account. Next, we will run the terraform apply command, which will actually deploy the planned resources into our AWS account. After running the command, we get the following output:


After running terraform apply, we are prompted to review the resources to be applied; if we are okay with them, we type ‘yes’ in the terminal to continue.

After a few minutes we get the final output:

This is confirmation that all the resources have been successfully added to our AWS account. We can now go into our AWS Management Console to verify the changes Terraform has applied. We can start by checking on our EC2 instances.

Confirming Resource Creation

In the AWS Management Console we can go to EC2 -> Instances and see that we have 9 instances in total: our 2 web-tier instances, our 2 app-tier instances, 1 bastion host, plus 2 instances launched by the web-tier auto scaling group and 2 more launched by the app-tier auto scaling group.

We can check on our security groups and confirm that they have also been created successfully by Terraform.


Our application load balancers along with the listeners and target groups were also created successfully.


We can also check on our launch templates which will be used to spin up new EC2 instances according to capacity demands.


We can go over to our VPCs and confirm that a non-default VPC was created.


We can also go over to our subnets and confirm that they were all successfully created.


Our Internet gateway was also successfully created by Terraform.


Our NAT gateway was also successfully created by Terraform.


Our route tables were also created successfully.


In general, every network resource, connection, attachment and association we planned with Terraform has been successfully applied to our AWS account.

Creating Sixth Terraform Configuration File
The next thing to do is to verify the configurations of our security groups and route tables. This is the exact reason we created our bastion host: we will use it to ping different instances within our architecture and make sure they restrict traffic correctly according to the ingress (inbound) and egress (outbound) rules of our security groups. For this we will need a TLS private key and an SSH key pair for our bastion host.

For this part of the test, we will go back to our code editor and create another configuration file, which we can name ‘Network-Resources-6.tf’, in which we will create our TLS private keys and our AWS key pairs. We will also output each private key to a (.pem) file on our local machine.

  • TLS Private Key: a cryptographic key used in the TLS (Transport Layer Security) protocol to encrypt and decrypt data, ensuring secure communication over a network. It must be kept secret and is used in conjunction with a public key to authenticate and establish secure connections.
  • AWS Key Pair: An AWS Key Pair consists of a public key and a private key. The public key is stored by AWS and used to encrypt data or verify signatures, while the private key is downloaded and used by you to decrypt data or create signatures. Key pairs are commonly used for securely accessing EC2 instances via SSH.

Below is the code that we will include in our final configuration file, ‘Network-Resources-6.tf’.

# Create an SSH key pair
resource "tls_private_key" "bastion_host_key" {
  algorithm = "RSA"
  rsa_bits  = 2048
}

resource "aws_key_pair" "bastion_host_key" {
  key_name   = "bastion_host_key"
  public_key = tls_private_key.bastion_host_key.public_key_openssh
}

# Save the private key to a file
resource "local_file" "private_key" {
  content  = tls_private_key.bastion_host_key.private_key_pem
  filename = "${path.module}/bastion_host_private_key.pem"
}

# Create an SSH key pair
resource "tls_private_key" "web_tier_ec2_1_key" {
  algorithm = "RSA"
  rsa_bits  = 2048
}

resource "aws_key_pair" "web_tier_ec2_1_key" {
  key_name   = "web_tier_ec2_1_key"
  public_key = tls_private_key.web_tier_ec2_1_key.public_key_openssh
}

# Save the private key to a file
resource "local_file" "private_key_web" {
  content  = tls_private_key.web_tier_ec2_1_key.private_key_pem
  filename = "${path.module}/web_tier_ec2_1_privatekey.pem"
}

# Create an SSH key pair
resource "tls_private_key" "app_tier_ec2_1_key" {
  algorithm = "RSA"
  rsa_bits  = 2048
}

resource "aws_key_pair" "app_tier_ec2_1_key" {
  key_name   = "app_tier_ec2_1_key"
  public_key = tls_private_key.app_tier_ec2_1_key.public_key_openssh
}

# Save the private key to a file
resource "local_file" "private_key_app" {
  content  = tls_private_key.app_tier_ec2_1_key.private_key_pem
  filename = "${path.module}/app_tier_ec2_1_privatekey.pem"
}
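
A small refinement worth knowing about: the local_file resource also accepts a file_permission argument, which can save some of the chmod/icacls wrangling we do later. A sketch of the bastion key resource with it added (note that it applies POSIX permissions, so it has limited effect on Windows ACLs):

# Save the private key with restrictive permissions from the start
resource "local_file" "private_key" {
  content         = tls_private_key.bastion_host_key.private_key_pem
  filename        = "${path.module}/bastion_host_private_key.pem"
  file_permission = "0400"  # owner read-only (POSIX semantics)
}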
                          

Below is a breakdown of the Tests to be conducted.

Test 1
  • Test 1, Part A: After SSH-ing into our bastion host, we will attempt a test connection to our application-tier EC2 instances to ensure that they do not allow incoming traffic from outside the web tier.
  • Test 1, Part B: Connect into our web-tier EC2 instances and make a test connection to our application-tier EC2 instances to ensure that the application-tier instances are configured correctly (only allowing incoming traffic from the web-tier EC2 instances).
Test 2
  • Test 2, Part A: After SSH-ing into our bastion host, we will attempt a test connection to the RDS database instance in our database tier to ensure that it does not allow incoming connections from outside the application tier.
  • Test 2, Part B: Connect into our application-tier EC2 instances and make a test connection to our RDS database instance to ensure it only allows incoming connections from our application tier.
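
One caveat before we begin: as written, our app_tier_sg only allows TCP port 80 from the web-tier security group, and web_tier_sg only allows TCP port 80 from the internet, so the ping (ICMP) and SSH hops described in these tests would actually be blocked. To reproduce them, ingress rules along these lines would need to be added to ‘Network-Resources-2.tf’ (a sketch using the resource names from our earlier files):

# Allow ICMP echo (ping) into the application tier from the web tier
resource "aws_security_group_rule" "app_allow_icmp_from_web" {
  type                     = "ingress"
  from_port                = -1   # -1 = all ICMP types
  to_port                  = -1
  protocol                 = "icmp"
  security_group_id        = aws_security_group.app_tier_sg.id
  source_security_group_id = aws_security_group.web_tier_sg.id
}

# Allow SSH into the application tier from the web tier
resource "aws_security_group_rule" "app_allow_ssh_from_web" {
  type                     = "ingress"
  from_port                = 22
  to_port                  = 22
  protocol                 = "tcp"
  security_group_id        = aws_security_group.app_tier_sg.id
  source_security_group_id = aws_security_group.web_tier_sg.id
}

# Allow SSH into the web tier from your own machine
resource "aws_security_group_rule" "web_allow_ssh_from_my_ip" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  security_group_id = aws_security_group.web_tier_sg.id
  cidr_blocks       = ["xx.xxx.x.xxx/32"]  # your IP, as in the bastion SG
}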

Test 1 (A): Bastion-Host -> Application-Tier EC2;

Make sure the ingress rules for the bastion-host security group allow incoming SSH traffic from YOUR IP address so you can successfully conduct these tests.

For Test 1 (A), we first need to SSH into our bastion host. To do this, go to EC2 -> Instances in the AWS Management Console, select the bastion-host EC2 instance and click ‘Connect’.


On the next screen, go into the ‘SSH’ tab and copy the example SSH command.


Next you can go back into the terminal and navigate into the directory on your local machine where you saved the .pem key, which for me is still in the ‘.aws’ folder.

Be sure to rename the (.pem) private key exactly as it is called in the example command.

After navigating to the directory in which your Bastion-Host (.pem) key is stored you can simply paste the example SSH command we copied from the aws management console.


If you run into a permissions error here (Windows' OpenSSH client refusing to use an ‘unprotected’ private key file), worry not; you simply need to open PowerShell and run the following commands to restrict the permissions on the private key to a single user.

icacls bastion_host_key.pem /inheritance:r
icacls bastion_host_key.pem /grant:r "<username>:F"
                            

Replace <username> with your current Windows user name and bastion_host_key.pem with the name of your bastion-host key, then run the commands in PowerShell.

(We will use these same instructions later to fix (.pem) file permissions when SSH-ing into the web-tier EC2 instance.)

Go back to your terminal and run the SSH command again.


And now we have successfully SSH-ed into our bastion host. All we need to do now is ping one of our application-tier EC2 instances; I will attempt to ping my application-tier EC2 instance in Availability Zone 1. To do this I need the private IPv4 address of that instance.

Before we make a test connection to our private application-tier EC2 instance, here is an example of what a successful ping looks like: packets are both sent and received, which means we can communicate directly with the machine and thus have a successful connection.


Now let us attempt to ping our private application-tier EC2 instance from our bastion-host.


We can see that we are only able to send packets, not receive any back from the private application-tier EC2 instance, meaning that our bastion host cannot connect to it. Now let us SSH into our web-tier EC2 instance and attempt the same ping test.

Test 1 (B): Web-Tier EC2 -> Application-Tier EC2;

We will use the same method as above to SSH into our web-tier EC2 instance in Availability Zone 1.


Now let us make an attempt to ping our private application-tier EC2 instances from our web-tier EC2 instance in availability zone 1.


We can see from the image above that we are able to send and receive packets from our application-tier EC2 instance, but only when doing so from our web-tier EC2 instances. This ensures that our private architecture remains protected from unwanted, potentially malicious incoming connections.

Test 2 (A): Bastion-Host -> RDS DB Instance

Now, for our second round of tests, we will attempt to connect to our RDS database instance located in Private Subnet 2 of Availability Zone 1.

To do this we will SSH into our bastion host once again and use the telnet command to make a test connection to our database instance. For this we need the endpoint of the RDS database instance, which we can copy from the RDS section of the management console.


After copying the RDS endpoint, we can go back to our terminal and run the following command: telnet mydbinstance.cjy0ok6i4b0z.us-east-1.rds.amazonaws.com 3306.


As expected, the connection attempt failed.

Test 2 (B): Application-Tier EC2 -> RDS DB Instance

To perform this test, we will need to SSH into our application-tier EC2 instance in order to make a test connection to our RDS database instance, but if you are following along, you would know that our application-tier instances only allow incoming traffic from our web-tier instances.

Therefore, we would need to SSH into our web-tier EC2 instances and then once inside, we would need to SSH once again into the application tier EC2 instance and then make our test connection to the RDS database instance. Let us commence.

To start off, and so we do not encounter problems along the way, we will need our application-tier private key in order to successfully SSH into the application-tier instance. At the moment our private keys are stored on our local machines, and we need the key to be inside the web-tier instance's home directory.

To solve this problem we will make use of the scp (secure copy) command, which will allow us to send our private key file to the web-tier EC2 instance. To do this, run the following command:

scp -i "C:\.aws\web_tier_ec2_1_privatekey.pem" "C:\.aws\app_tier_ec2_1_privatekey.pem" ec2-user@3.87.201.244:~/

Here's the breakdown:

  • -i "C:\.aws\web_tier_ec2_1_privatekey.pem": Specifies the identity (private key) that authenticates the transfer; since we are connecting to the web-tier instance, this is the web-tier private key.
  • "C:\.aws\app_tier_ec2_1_privatekey.pem": The local file that we want to transfer (the application-tier private key).
  • ec2-user@3.87.201.244:~/: The destination on the EC2 instance; in this case we are sending the file to the ec2-user home directory.

After we have successfully sent the file to the web-tier EC2 instance, we can SSH into the web-tier EC2 instance.


After SSH-ing into the web-tier instance and running the ls command, we can see that our file was transferred successfully. Now we can go back to the AWS Management Console and copy the exact SSH command that will allow us to SSH into the application-tier EC2 instance.


After copying the command we can go back to our terminal and run the command.


After running the command, we can see that it failed; this is because the permissions on our application-tier private key file are too lax (low security). To rectify this we can run the following command:

chmod 400 "app_tier_ec2_1_privatekey.pem"

This tightens up the permissions on the private key so that it is readable only by the owner of the file.

We can re-run the SSH command and we receive the following output.


Now that we have successfully SSH-ed into our application-tier EC2 instance we can now run the telnet command:

telnet mydbinstance.cjy0ok6i4b0z.us-east-1.rds.amazonaws.com 3306

If you run into errors while trying to run this command, you may need to install telnet via yum. Run these two commands and then retry the telnet command:

sudo yum update
sudo yum install telnet -y
                            

Once all issues had been ironed out, we received the following output:


SUCCESS! This output lets us know that we have successfully connected to our database, and that it is only accessible through the private application-tier EC2 instances and not by any other means, giving us a far more secure application.

Challenges Faced & Solutions Implemented

1. Syntax for Defining Different Resources in Terraform

  • Challenge: Initially, I struggled with the syntax for defining various AWS resources using Terraform. This led to multiple errors and inefficiencies in my infrastructure code.
  • Solution: I addressed this challenge by thoroughly reading the Terraform documentation and referring to example configurations. Additionally, I utilized Terraform's plan and apply commands iteratively to test and validate my configurations incrementally. Participating in community forums and seeking advice from experienced peers also helped clarify complex syntax issues.

2. Keeping Track of Resource Names That Were Similar (Especially Private Keys)

  • Challenge: Managing and differentiating between multiple resource names, particularly the private keys, was confusing and prone to errors.
  • Solution: I implemented a consistent naming convention and tagging strategy for all resources. This involved prefixing and suffixing resource names with specific identifiers and using descriptive tags. I also maintained a detailed spreadsheet to document all resource names and their corresponding purposes, which helped in keeping track of them efficiently.

3. Creating and Configuring Key Pairs for EC2 Access

  • Challenge: Creating and setting up key pairs for secure access to EC2 instances was initially challenging due to unfamiliarity with the process.
  • Solution: To overcome this, I followed AWS documentation meticulously to generate and configure key pairs. I used the AWS Management Console and AWS CLI for creating key pairs and downloaded them securely. I also automated part of the process using Terraform scripts to ensure consistency and accuracy in key pair creation.

4. Troubleshooting SSH Access into EC2 Instances

  • Challenge: Several issues arose while attempting to SSH into EC2 instances, including key naming inconsistencies, private key permission errors on my PC, dynamic IP address changes, and blank/null lines in the private key body.
  • Solution:
    • Key Naming Issues: I standardized the naming convention for key files and ensured they were correctly referenced in my SSH configuration and Terraform scripts.
    • Private Key Permissions: I adjusted the permissions of the private key file on my PC using the chmod 400 command to ensure it had the correct security settings for SSH access.
    • Dynamic IP Address: I set up a dynamic DNS service to keep track of my changing IP address and updated the security group rules accordingly to allow SSH access from my current IP.
    • Blank/Null Lines in Private Key: I edited the private key file using a reliable text editor to remove any blank or null lines that were causing SSH authentication errors.

Through these solutions, I was able to effectively manage and resolve the challenges faced during the implementation of my 3-tier application project on AWS. This not only improved the deployment process but also enhanced the overall security and efficiency of the infrastructure.

Clean Up and Conclusion

We have finally reached the end of this LONG road. We were able to successfully create and deploy a three-tier application in the cloud using an Infrastructure as Code tool called Terraform.

We successfully deployed auto scaling groups into the web and application tiers of our architecture, ensuring scalability: the application can scale out when demand rises and scale in once demand declines.

We were also successful in deploying application load balancers that ensured correct traffic distribution, so that no single EC2 instance was overloaded with requests while another sat dormant. This ensured that our application was highly available.

We deployed security groups and placed some of our EC2 instances and an RDS database instance inside private subnets to ensure security.

Now, we need to tear down cloud architecture so that AWS does NOT suck every cent out of OUR POCKET! NO!

This is where the beauty of Terraform comes in. We can go back to our terminal and exit out of all the instances we SSH-ed into. Once back in your local machine's directory, you can simply run the terraform destroy command and Terraform will destroy 98% of the resources it created in your AWS account just as quickly as it built them. Very cool, I know. This saves us so much time.

I say 98% because, from experience, I found that for this specific cloud architecture there were 2 resources that were not destroyed by terraform destroy: the NAT gateway and its Elastic IP address. This was my experience, and I would advise you to check around your AWS account afterwards to make sure the network resources really were deleted!

Johnson Enyimba © 2024