[AWS] Building a Highly Available & Scalable Microservices E-commerce CI/CD Pipeline with Jenkins, Docker, Kubernetes, and EKS

Project Overview

⭐ GitHub Repo For This Project ⭐

In this project, we will build a robust Continuous Integration and Continuous Deployment (CI/CD) pipeline tailored for a microservices-based e-commerce platform. The pipeline will leverage Jenkins for automation, Docker for containerization, Kubernetes for orchestration, and Amazon EKS for scalable deployment in the cloud.

The objective is to streamline the development, testing, and deployment processes by creating a multi-environment pipeline that ensures seamless integration and rapid delivery of new features. This project will cover the full lifecycle from code commit to production deployment, enabling a resilient and scalable e-commerce platform capable of handling dynamic workloads and delivering high availability.

Objectives:

  • Automate the CI/CD Process: Implement Jenkins to automate the build, test, and deployment processes, ensuring consistent and efficient delivery of code changes.
  • Containerize Applications: Use Docker to create containerized versions of microservices, facilitating consistent environments across development, testing, and production.
  • Orchestrate with Kubernetes: Deploy and manage containers using Kubernetes to ensure scalability, reliability, and efficient resource utilization.
  • Leverage Amazon EKS: Utilize Amazon Elastic Kubernetes Service (EKS) to manage Kubernetes clusters in the cloud, providing a scalable and fully managed environment for running microservices.
  • Ensure Continuous Delivery: Design the pipeline to support continuous integration and continuous delivery practices, allowing for frequent and reliable updates to the e-commerce platform.
  • Enhance Deployment Efficiency: Optimize deployment strategies to minimize downtime and maintain high availability for the e-commerce platform.

Components

  • Ubuntu Server (Linux): A private virtual instance which we will configure; it will also serve as our primary work environment.
  • EKS Cluster (AWS): The Amazon EKS cluster onto which we will deploy our containerized application.

Tools & Technologies

  • AWS CLI: The AWS command-line interface, which we will install and configure in order to provision resources in our AWS account.
  • Kubectl: The Kubernetes command-line interface, which will be used to set up Kubernetes and provision resources within it.
  • EKSctl: The EKS command-line interface, with which we will create our EKS cluster, nodegroup, service account, and service roles.
  • Docker: The application we will use to containerize our microservices and push our Docker images to our Docker Hub repository.
  • Jenkins: The application used to automate the building, testing, and deployment of our microservices application.

Deliverables

  • Jenkins CI/CD Pipeline Configuration: Jenkinsfile or pipeline scripts showing the steps for building, testing, and deploying the microservices.
  • Docker Images and Dockerfiles: Dockerfiles for each microservice, along with Docker images stored in a container registry (Docker Hub).
  • Documentation (This Document): Detailed documentation covering the setup process, CI/CD pipeline flow, how to scale and manage the microservices, and troubleshooting steps.

Why Microservices?

Microservices is an architectural style where an application is broken down into small, independent services that communicate with each other. Each service focuses on a specific function, allowing for easier scalability, flexibility, and maintenance.

  • Scalability: Microservices allow individual components of an application to be scaled independently, optimizing resource usage and improving performance as demand for specific services grows.
  • Flexibility and Agility: Each microservice can be developed, deployed, and updated independently, enabling faster development cycles, easier maintenance, and the ability to use different technologies or programming languages for different services.
  • Resilience: By isolating services, microservices architecture reduces the impact of failures. If one service goes down, it doesn’t necessarily bring down the entire system, enhancing overall system reliability and uptime.

Implementation

To kick off this project, let's first take a look at the application we will be automating with Jenkins. This e-commerce application allows users to browse items, add them to a cart, fill in their shipping information, and finally check out everything in their cart.

Have a look: Ecommerce app
Have a look: Microservices

Let us now take a closer look at the inner workings of this application. This e-commerce application has been broken up into microservices, which are simply small, loosely coupled, and independently deployable services that together make up an application.

Let us take a look at the microservices that make up this ecommerce application.


This is a list of the 11 different microservices that make up our e-commerce application, stored in our GitHub repository, each serving a different purpose. Each service in a microservices architecture is responsible for a specific business capability and can be developed, deployed, and scaled independently.

This is beneficial because it allows teams to develop, test, and deploy independently, which ultimately shortens development, testing, and deployment cycles.

This type of small loosely coupled architecture is quite a common practice in actual production environments as each microservice can be altered, built and deployed independently without affecting any of the other components in the ecommerce application.

To successfully automate the CI/CD process, anytime changes are made to any of our individual microservices, our pipeline needs to automatically detect the new changes, then build and deploy them into our production environment. We will use Jenkins to build 11 separate pipelines, one for each of our 11 microservices. These 11 pipelines will make up the CI portion of our project.

One more pipeline, the main pipeline, will perform the actual deployment of new changes to our application; this will make up the CD portion of our project. To allow Jenkins to automatically detect changes and initiate a pipeline build, we will make use of a Jenkins feature known as a multibranch webhook trigger. We will be using GitHub as our source repository.
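To make the CI side more concrete, below is a minimal sketch of what one of the per-microservice Jenkinsfiles could look like. The image name 'yourdockerhubuser/adservice' is a hypothetical placeholder, and the 'docker-cred' credentials ID refers to the Docker Hub credentials we create in Jenkins later in this guide; the actual Jenkinsfiles live in each microservice's branch of the repository.

pipeline {
    agent any

    stages {
        stage('Build & Push Docker Image') {
            steps {
                script {
                    // 'docker-cred' is the Docker Hub credentials ID configured in Jenkins later on.
                    withDockerRegistry([credentialsId: 'docker-cred', url: 'https://index.docker.io/v1/']) {
                        // Hypothetical image name -- replace with your own Docker Hub repository.
                        sh "docker build -t yourdockerhubuser/adservice:latest ."
                        sh "docker push yourdockerhubuser/adservice:latest"
                    }
                }
            }
        }
    }
}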

Create an EC2 Instance

We will start off by creating a Linux EC2 instance from which we will be working. Head on over to the AWS management console and start by selecting the region closest to you to ensure low latency and reduced costs. The region closest to me was af-south-1, which is located in Cape Town.


Next, go to EC2 and create a new security group that will allow us to interact with our Linux instance effectively. Create the security group with the following inbound rules.


Now go to Instances and create an Ubuntu Linux EC2 instance (I named mine MyUbuntuServer). Be sure to select Ubuntu Server 24.04 LTS (HVM), SSD Volume Type as the Amazon Machine Image (AMI).


Scroll down to Instance type and select ‘t3.large’. In the Key Pair section, create a new key pair; select RSA as the key pair type and ‘.pem’ as the private key format.


Finally, scroll down to the Network Settings section and, for the security group, choose ‘Select existing security group’ and pick the security group we created earlier. Then scroll down to the Configure Storage section and make sure to allocate at least 25 GiB of storage to your new instance.


Select ‘Launch instance’ and create the new instance.

Connect and update the new instance

Once the new instance has been successfully created, select the instance and click ‘Connect’.


Select the ‘SSH Client’ tab and copy the SSH command.


Open up the cmd terminal as an administrator, navigate to the directory where you have stored your .pem private key, and paste the SSH command.


After successfully SSH-ing into the new EC2 instance, run the

sudo apt update

command to refresh the package lists from the default Linux repositories.

Create new IAM user (Access Keys)

Now we need to configure our AWS CLI which will allow us to connect to and interact with our AWS account. Head over to IAM and create a new IAM user with programmatic access. Attach the following permissions directly to the new user.


Next, you need to add one more inline policy, which I called eks-req, and attach it to the new IAM user. Copy the inline policy below:


{
  "Version": "2012-10-17",
  "Statement": [
      {
          "Sid": "VisualEditor0",
          "Effect": "Allow",
          "Action": "eks:*",
          "Resource": "*"
      }
  ]
}
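If you prefer the command line, a hypothetical equivalent of attaching this inline policy (assuming the JSON above is saved as eks-req.json and your new IAM user is named cicd-user) would be:

aws iam put-user-policy \
    --user-name cicd-user \
    --policy-name eks-req \
    --policy-document file://eks-req.json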
            

Select the newly created IAM user, go into the Security Credentials tab, and select ‘Create access key’.


Next select Command Line Interface as the use case for the new access key.


You will be presented with a screen showing you your newly created access keys. Copy the keys and store them safely on your device as we will make use of them soon.

Configure AWS CLI

Now we will install the AWS CLI by running a few commands inside our Ubuntu Linux server. You can copy and run the following commands:

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt install unzip
unzip awscliv2.zip
sudo ./aws/install
              

After successfully running the above commands, we can now configure our AWS CLI. Start by running the following command:

aws configure

Use the information from your Access Key to fill in these prompts:

AWS Access Key ID: DUSJDXXXXXXXATJE
AWS Secret Access Key: YFFJFFHJXXXXXX2PDt
                

For the ‘Default region name’, I used af-south-1 this is the closest region to me.

For the ‘Default output format’, I entered ‘text’.


To confirm successful installation of the AWS CLI, you can run the following command:

aws --version
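As an additional sanity check that the configured access keys actually work, you can ask AWS which identity you are authenticated as:

aws sts get-caller-identity
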
Configure Kubectl

Now we need to install kubectl, the command-line interface we will use to create and interact with Kubernetes resources. To install kubectl, we can run the following commands:

curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.30/2021-01-05/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin
              

It is recommended to use a recent version of Kubernetes; at the time of writing this post, the latest version was Kubernetes 1.30.


To confirm successful installation of kubectl, you can run the following command:

kubectl version --client
Configure EKSCTL

Here we will install eksctl, which will allow us to create and interact with our EKS cluster. We can install it by running the following commands:

curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
                

To confirm successful installation of eksctl, you can run the following command:

eksctl version
Create EKS Cluster & nodegroup

Next, we need to create our EKS cluster inside our AWS account. The EKS cluster is where we are going to deploy our containerized applications. Once our applications are deployed onto EKS, we will receive a deployment URL which we can use to view and interact with our application.

To create the EKS cluster we will have to run the following commands:

eksctl create cluster --name=EKS-1 \
                      --region=af-south-1 \
                      --zones=af-south-1a,af-south-1b \
                      --without-nodegroup
              

Here we are creating an EKS cluster named ‘EKS-1’ in the af-south-1 region, specifying the two availability zones within af-south-1 to use: af-south-1a & af-south-1b. We are also declaring that our EKS cluster be created without a nodegroup, as we will be creating the nodegroup ourselves. Running this command takes about 8-15 minutes.
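As a side note, eksctl normally writes the new cluster's connection details into your kubeconfig automatically; if kubectl ever loses track of the cluster, you can regenerate that entry with the AWS CLI:

aws eks update-kubeconfig --region af-south-1 --name EKS-1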

Next, we have to associate an OIDC provider with our EKS cluster so that the service account we will create in the cluster can assume IAM roles. These IAM roles give the EKS cluster permission to create and interact with other services within our AWS account.

eksctl utils associate-iam-oidc-provider \
    --region af-south-1 \
    --cluster EKS-1 \
    --approve
            

Now let us create the nodegroup with the following command:

eksctl create nodegroup --cluster=EKS-1 \
                        --region=af-south-1 \
                        --name=node2 \
                        --node-type=t3.medium \
                        --nodes=3 \
                        --nodes-min=2 \
                        --nodes-max=4 \
                        --node-volume-size=20 \
                        --ssh-access \
                        --ssh-public-key=ubuntu-key-pair \
                        --managed \
                        --asg-access \
                        --external-dns-access \
                        --full-ecr-access \
                        --appmesh-access \
                        --alb-ingress-access
             

This command creates our nodegroup named node2 in the af-south-1 region with 3 nodes. We have set a minimum of 2 nodes and a maximum of 4, so that when our application receives high amounts of traffic the nodegroup can scale up to 4 nodes, and when traffic decreases it can scale back down to 2. This is how our EKS cluster will handle auto-scaling. We will access our nodes over SSH, so we provide the public key named ubuntu-key-pair, which we created along with our Ubuntu Linux EC2 instance.
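Once the nodegroup is ready, you can verify from the terminal that the worker nodes have actually joined the cluster:

kubectl get nodes
eksctl get nodegroup --cluster EKS-1 --region af-south-1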


At this point we can go over to our AWS management console and have a look to see if our EKS cluster was successfully created.


We can confirm that our EKS cluster was successfully created in our AWS account.

Install Jenkins

To install Jenkins on our ubuntu machine we first need to install Java. We can do this by simply running the following command:

sudo apt install openjdk-22-jre-headless -y

This will install version 22 of Java on our machine. Make sure to install a Java version supported by your Jenkins release to avoid encountering issues when installing or running Jenkins. After installing Java, we can proceed to install Jenkins. To do this, we can simply go to our browser and search for “Jenkins install”.

We will be greeted with this page and here we can see the directions on how to install Jenkins for different operating systems. We are using a Linux server and thus we must select ‘Linux’.


From this page we can use these commands to install Jenkins:

sudo wget -O /usr/share/keyrings/jenkins-keyring.asc \
  https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc]" \
  https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
  /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update
sudo apt-get install jenkins
            

After successfully installing Jenkins, you can verify the installation by copying the public IP address of our Ubuntu server, adding :8080 to the end, and opening that address in your browser. In our case this will be:

13.246.229.181:8080
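Before the plugin screen appears, Jenkins will first ask you to unlock it with an initial admin password, which you can read straight off the server:

sudo cat /var/lib/jenkins/secrets/initialAdminPassword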

You should see the following screen, on which you should select the option to ‘Install suggested plugins’.


After selecting that option Jenkins will begin installing the plugins.

Install Docker

Let us now install Docker on our Ubuntu machine. If you simply type docker in the terminal before it is installed, you will see an error message along with a hint on how to install it. Either way, you can install Docker using the following command:

sudo apt install docker.io -y 

After successfully installing Docker, we need to give all users on our Ubuntu machine (including Jenkins) permission to execute Docker commands. We can do this by executing the following command:

sudo chmod 666 /var/run/docker.sock
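Note that chmod 666 on the Docker socket is quick but very permissive, and it does not persist if the Docker daemon recreates the socket. A more durable alternative (assuming Jenkins runs as the default 'jenkins' service user) is to add that user to the docker group:

sudo usermod -aG docker jenkins
sudo systemctl restart jenkins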
Configure Jenkins

After Jenkins has installed all the necessary plugins, we will then find ourselves on the following page.


Fill in the information and click on ‘Save and Continue’


Next, once inside Jenkins, we can navigate to ‘Manage Jenkins’ on the left-hand panel and select ‘Plugins’.


Here we will search for and install the following Docker & Kubernetes plugins.


After the plugins have finished installing, we can navigate over to ‘Manage Jenkins’ and select ‘Tools’.


Now we need to create Docker credentials that Jenkins will use to build and push images to our Docker Hub repository. For this you will need a Docker Hub account, so sign up if you do not have one and use those login credentials for this exercise. You can now navigate to ‘Manage Jenkins’ and then select ‘Credentials’.


After selecting ‘Credentials’, you can select ‘global’ and then proceed to create new credentials called ‘docker-cred’ using your actual Docker Hub login credentials.

Create Multibranch Pipeline in Jenkins

In this stage we are ready to create our multibranch pipeline in Jenkins. In Jenkins go back to the dashboard and select ‘New Item’.


Next give this new item a name and select ‘Multibranch Pipeline’ and click ‘OK’.


Now we can configure our multibranch pipeline. We can scroll down to Branch Sources and select ‘Add source’.


Now you can select ‘Git’ as the branch source. The Git source we will be using here is a GitHub repository; I used my own, where I have stored the code files for each of the microservices (one per branch).
For you to successfully complete this project, you need to use your own GitHub repository as the branch source, because we will be configuring a webhook that triggers the pipelines, and you cannot activate a webhook configured on someone else’s GitHub account.


At this point you can clone my GitHub repository and upload the files into your own GitHub account. GitHub Repository: https://github.com/supertonka/Microservice

Add your GitHub repository as the project repository and select the ‘docker-cred’ credentials we created earlier.


Next, we can scroll down to the ‘Build Configuration’ section. The ‘Script Path’ field dictates the name of the file within each of our microservices that Jenkins will look for in order to build the pipelines. The microservices code files have already been configured with a Jenkinsfile each.

The next section we turn our attention to is the ‘Scan Multibranch Pipeline Triggers’ section, in which we select the ‘Scan by webhook’ option. Put any name in the ‘Trigger token’ field, then click the little question mark icon next to the words ‘Trigger token’ to reveal the trigger token URL beneath.


You can now edit the trigger token URL and replace [Trigger token] with the name you gave the trigger token, which in my case was ‘supertonka’. Next, replace the JENKINS_URL part with the public IP address and port used to access Jenkins. In my case, the final trigger token URL was:

http://13.246.229.181:8080/multibranch-webhook-trigger/invoke?token=supertonka
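You can test the hook manually from any terminal; the multibranch webhook trigger plugin will kick off a repository scan whenever this URL is invoked:

curl "http://13.246.229.181:8080/multibranch-webhook-trigger/invoke?token=supertonka"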

Now head over to the GitHub repository where you have uploaded/cloned the Microservice ecommerce code files and select ‘Settings’.


After clicking Settings, select ‘Webhooks’ on the left-hand panel, then select ‘Add webhook’. For the payload URL, enter the trigger token URL we edited earlier, and for the ‘Content type’ select ‘application/json’. Before clicking ‘Add webhook’, go back to Jenkins and click Apply in the multibranch pipeline configuration; then return here and click ‘Add webhook’.


Next, we can go into our Jenkins dashboard and observe 👀 the pipelines automagically being built for each of our microservices.


Here we can see that each of our microservices has been assigned a pipeline. These pipelines are triggered when a change is made to the source code (stored in the GitHub repository), but we do not yet have a pipeline that deploys our changes to our public-facing e-commerce website. To achieve this, the CD part of CI/CD, we will need to create a final pipeline called the main pipeline.

Create Service Account & Roles

In this step we will create a service account, create a role which we will assign to the service account, and then use the service account to perform our deployments.

We will begin with creating the service account. We will go back to the terminal and create a yaml file called svc.yml. Inside this file we will add the instructions to create our service account. To create this yaml file, type the following command in your ubuntu terminal:

vi svc.yml

This will open up an empty document in which you can paste and save the following text:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: webapps
              

The name of our service account will be jenkins, and we will be creating it inside the Kubernetes namespace called ‘webapps’. Next, let us actually create the webapps namespace in Kubernetes. To do this, you may run the following command:

kubectl create namespace webapps

Once we have successfully created the namespace, we can go ahead and apply the svc.yml file using the following command:

kubectl apply -f svc.yml

Now we need to create the role that we are going to assign to the service account. We can do this by opening another blank file, named role.yml, using vi:

vi role.yml

and then paste the following to populate the role.yml file:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-role
  namespace: webapps
rules:
  - apiGroups:
        - ""
        - apps
        - autoscaling
        - batch
        - extensions
        - policy
        - rbac.authorization.k8s.io
    resources:
      - pods
      - componentstatuses
      - configmaps
      - daemonsets
      - deployments
      - events
      - endpoints
      - horizontalpodautoscalers
      - ingress
      - jobs
      - limitranges
      - namespaces
      - nodes
      - persistentvolumes
      - persistentvolumeclaims
      - resourcequotas
      - replicasets
      - replicationcontrollers
      - serviceaccounts
      - services
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
            

Save and close the role.yml file; we can then apply this role in Kubernetes by running the following command:

kubectl apply -f role.yml

Now we need to assign the role to the service account. We can do this by creating a new file named bind.yml using:

vi bind.yml

Inside this file you can paste the following:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-rolebinding
  namespace: webapps 
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: app-role 
subjects:
- namespace: webapps 
  kind: ServiceAccount
  name: jenkins
            

You can save and close the file, and then apply it using this command:

kubectl apply -f bind.yml
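A quick way to confirm the binding took effect is to ask Kubernetes whether the jenkins service account is now allowed to act in the namespace:

kubectl auth can-i create deployments -n webapps --as=system:serviceaccount:webapps:jenkins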

We have now created our service account, created our role, and bound the role to the service account. However, in order to use the service account for deployment, we need to create a token for it, which will be used for authentication. To generate this token, we will need to create a new file called sec.yml using:

vi sec.yml

We must then populate the file with the following:

apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: mysecretname
  annotations:
    kubernetes.io/service-account.name: jenkins
            

Save and exit the file. To apply it, use the following command:

kubectl apply -f sec.yml -n webapps

To retrieve the generated token, we can use the following command:

kubectl describe secret mysecretname -n webapps

After executing the above command, we get an output containing the token that was generated for us. Copy this token and save it in a notepad or text editor.
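If you prefer to grab just the token without the surrounding describe output, this one-liner decodes it directly from the secret:

kubectl get secret mysecretname -n webapps -o jsonpath='{.data.token}' | base64 --decode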

Creating CD Pipeline

Now we will go back into Jenkins and create a new, single pipeline, which we will not use for deployment itself but simply to generate some pipeline script.


After filling in the new item name and selecting ‘Pipeline’, you can click ‘OK’. On the next page, scroll down to ‘Advanced Project Options’. Here you can select the ‘Hello World’ sample script, which we will now modify.


Next, we will click on ‘Pipeline Syntax’.


Next, we should click ‘Add’ and land on the following screen:


For ‘Kind’ select ‘Secret text’, and in the ‘Secret’ field paste the token we generated after running the kubectl describe secret mysecretname -n webapps command. Click ‘Add’ and proceed.
Going back to the previous page, we can now select our newly created credentials.


Next, we need to input our Kubernetes API endpoint, which we will find in the ‘Overview’ tab of our EKS cluster.


After inputting our API server endpoint, we now need to input the name of our EKS cluster and the namespace, which are EKS-1 and webapps, respectively.


We can now hit ‘Generate Pipeline Script’ and copy the outputted code.


Going back to our pipeline configuration, we can merge the newly copied pipeline script with the previously generated ‘Hello World’ script to create the final result:

pipeline {
    agent any

    stages {
        stage('Deploy To Kubernetes') {
            steps {
                withKubeCredentials(kubectlCredentials: [[caCertificate: '', clusterName: 'EKS-1', contextName: '', credentialsId: 'k8-token', namespace: 'webapps', serverUrl: 'https://CC0A1C07FF2B9F62C66881FB1E3CA81C.gr7.af-south-1.eks.amazonaws.com']]) {
                    sh "kubectl apply -f deployment-service.yml"
                    
                }  
            }
        }
        
        stage('verify Deployment') {
            steps {
                withKubeCredentials(kubectlCredentials: [[caCertificate: '', clusterName: 'EKS-1', contextName: '', credentialsId: 'k8-token', namespace: 'webapps', serverUrl: 'https://CC0A1C07FF2B9F62C66881FB1E3CA81C.gr7.af-south-1.eks.amazonaws.com']]) {
                    sh "kubectl get svc -n webapps"
                }
            }
        }
    }
}
          

This code is now the Jenkinsfile which we will add to our microservices repository to create the final pipeline of this project: the CD pipeline.

We can now go into the ‘main’ branch in our GitHub repository and add a new file named ‘Jenkinsfile’. This file will prompt Jenkins to read the ‘main’ branch as a pipeline and will begin to build it and deploy the application into our EKS cluster.

Go over to GitHub, select ‘Add file’ in the ‘main’ branch, and then ‘Create new file’. Name the file ‘Jenkinsfile’ and paste the contents of the pipeline script we created above.


After pasting the pipeline script, click ‘Commit changes’.


As soon as we commit these changes, we can go over to Jenkins and see that a new pipeline has been created and immediately starts building. This also confirms that the webhook we configured is functioning correctly.


After a few seconds our main pipeline has been successfully created and built. Hooray! 😎🎉💋


Now, finally, to view our deployed web application running on our Amazon EKS cluster, we can go into the main branch’s build and select ‘Console Output’ on the left-hand panel.


We can scroll down and copy the external IP address provided to us by the load balancer.


Pasting the external IP address into our browser, we are delighted to see our e-commerce web app up and running! 🤯


Challenges Faced and Solutions Implemented

Setting Up the Development Environment
  • Challenge: Configuring the development environment to support multiple tools and technologies (such as Docker, Kubernetes, and Jenkins) was time-consuming and required precise setup to avoid conflicts.
  • Solution: Created detailed environment setup documentation and used version control to manage configuration files. This ensured consistency across development environments and minimized setup errors.
Integrating Docker with Jenkins
  • Challenge: Integrating Docker with Jenkins for automated builds and deployments presented issues, particularly in setting up Docker within Jenkins pipelines.
  • Solution: Installed and configured the Jenkins Docker plugin and set up Docker credentials within Jenkins. This allowed seamless interaction between Jenkins and Docker, enabling automated builds and deployments directly from Jenkins.
Handling YAML Formatting Issues
  • Challenge: YAML formatting errors in configuration files like Kubernetes manifests and Jenkins pipelines caused deployment failures and were challenging to debug.
  • Solution: Used YAML linting tools and integrated them into the CI pipeline to automatically check for formatting issues before applying configurations. This reduced deployment failures due to simple formatting errors (see the sketch below).
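As a concrete example of that last point, a linting step can be as simple as running yamllint over the manifests before applying them; the file names here are the ones from this project:

sudo apt install yamllint -y
yamllint svc.yml role.yml bind.yml sec.yml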

Conclusion & Clean Up

This concludes our project! We successfully created our Ubuntu instance, SSH-ed into it, and installed the AWS CLI, kubectl, and eksctl.

We then created our EKS cluster and its nodegroup, installed Java, Jenkins, and Docker, configured Jenkins with the correct plugins and credentials, created our multibranch pipeline, configured it to work with our GitHub repository, and set up the multibranch scan webhook trigger to automatically detect changes in our application’s source code.

Finally, we created our service account and a role, and bound the role to the service account. Next, we generated a secret token, used it to generate a pipeline script, and turned that script into the Jenkinsfile for the main branch, which allowed us to automate the continuous deployment (CD) portion of our project 💯.

For this project we integrated a number of tools and technologies; luckily, most of them were installed inside our Ubuntu server, which we can easily destroy. The resource that might cost us the most will be the EKS cluster created in our AWS account.

The final thing to do for this project is to clean up all the resources we have created in our AWS account. We do this so that AWS DOES NOT CRIPPLE US FINANCIALLY!

Mainly, we need to delete our EKS cluster and all the resources that were created along with it. As soon as I finished this project, I went straight to my AWS management console and manually started deleting all the resources. So, with that said, I am NOT going to set this whole architecture up again and potentially get charged $15-20 by the time I wake up simply to show you what to delete. The best I can do here is list what you should go and manually delete.

Which are:

  • EKS VPC (auto-created)
  • EKS Cluster + EKS nodegroup
  • NAT Gateway
  • Internet Gateway
  • Public & Private Subnets
  • IPv4 Addresses
  • Elastic IP addresses
  • Network Interfaces

Some of these deletions must happen before others, so begin with the lower-level resources, like detaching the IPv4 addresses, then move on to the internet gateways and NAT gateways, then the subnets, and finally the VPC itself.
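If your cluster is still running, eksctl can tear most of this down in one shot, since it deletes the CloudFormation stacks (and therefore the VPC, subnets, and gateways) that it created:

eksctl delete nodegroup --cluster EKS-1 --name node2 --region af-south-1
eksctl delete cluster --name EKS-1 --region af-south-1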

ONE MORE TIP: WHEN TRYING TO FIND ALL THE RESOURCES CREATED FOR YOUR EKS CLUSTER, SIMPLY GO TO THE NEWLY CREATED VPC DEDICATED TO YOUR CLUSTER AND LOOK AT ITS RESOURCE MAP; THIS WILL GIVE YOU AN OVERVIEW OF ALL THE RESOURCES ASSOCIATED WITH THAT VPC.

Johnson Enyimba © 2024