
Build a CI/CD Pipeline for EKS Workloads with AWS Services

Create a CI/CD pipeline with HashiCorp Terraform based on AWS CodeCommit, AWS CodeBuild, and AWS CodePipeline for building and deploying an app to Amazon EKS.

Ioannis Moustakis
Amazon Employee
Published Feb 19, 2024
Last Modified Feb 20, 2024
As software development has evolved, so too have the best practices enabling development teams to deliver software in a quick, safe, and scalable manner. An integral component of any modern microservices system is the Continuous Integration and Continuous Delivery (CI/CD) pipeline. This pipeline is responsible for building, testing, and deploying each microservice. A robust CI/CD pipeline provides developers with a streamlined series of automated processes, ensuring continuous integration, testing, and safe, timely software delivery. AWS offers a comprehensive suite of services tailored for building an efficient CI/CD pipeline for your applications.
In this tutorial, we will guide you through setting up AWS CodePipeline to orchestrate the CI/CD flow, AWS CodeCommit to host your code repository, AWS CodeBuild for testing, building, and deploying your application as a container, and Elastic Container Registry (ECR) for storing the container image. This setup will allow you to build container images from new code changes, conduct basic tests, and deploy the freshly created container images to Amazon EKS. This tutorial leverages Terraform, one of the most widespread Infrastructure as Code tools, to provision the necessary components: the VPC and the networking layer, the IAM roles and permissions, the EKS cluster, the ECR container image repository, and the CI/CD components with CodeCommit, CodeBuild, CodePipeline, and Amazon EventBridge.

Prerequisites

Before you begin this tutorial, you need to:
  • Install the latest version of Terraform. To check your version, run terraform --version
  • Install and configure the latest version of the AWS CLI (v2). To check your version, run:
    aws --version.
  • Install the latest version of kubectl. To check your version, run: kubectl version --short.
Note that the AWS Region is picked automatically from your configured AWS Region in the AWS CLI (v2). To change the target region run: export AWS_REGION=<your_aws_region>
If you are requesting temporary credentials with the aws sts get-session-token command, make sure to export these 3 environment variables (substitute accordingly):
  • export AWS_ACCESS_KEY_ID=<your_aws_access_key_id>
  • export AWS_SECRET_ACCESS_KEY=<your_aws_secret_access_key>
  • export AWS_SESSION_TOKEN=<your_aws_session_token>

Overview

This tutorial is dedicated to setting up an end-to-end CI/CD pipeline for hosting and testing application code, building a container image from the code, pushing and storing the container image to Elastic Container Registry, and deploying this container on EKS as a deployment. It covers the following components:
Architecture for CI/CD with native AWS Services for EKS
  • Preparing the dedicated VPC for the cluster: As a basis, we create a dedicated Amazon VPC to host the EKS cluster. The VPC uses the CIDR 10.0.0.0/16 with 3 private and 3 public subnets. We also create an Internet Gateway and a NAT Gateway. You can find the VPC configuration here.
  • Creating the EKS Cluster: We build an EKS cluster with a managed node group with a minimum size of 1, a maximum size of 5, and a single instance type of t3.small.
  • Storing the Application Code in AWS CodeCommit: Utilize the CodeCommit service to create a Git repository for the application code. A new code push to a specific branch of this Git repository triggers CodePipeline, initiating the CI/CD flow. We use Amazon EventBridge and create an event rule that detects new commits to the repository and triggers the pipeline.
  • Testing the Application Code with AWS CodeBuild: As the second stage of the CI/CD pipeline, we leverage CodeBuild to lint and test our application code. The configurations for linting and testing are provided as YAML files to AWS CodeBuild. CodeBuild expects a build specification (buildspec) file with the collection of commands and related settings that CodeBuild uses to run a specific stage in the pipeline.
  • Building a Container Image with AWS CodeBuild and storing it in Elastic Container Registry: As the third stage of the CI/CD pipeline, we build the container image based on the provided Dockerfile and the application source code with CodeBuild. The configuration file for building the Docker container image is provided as a YAML file to CodeBuild. After successfully building the container image, we push it to the Elastic Container Registry repository.
  • Deploying the Container Image on EKS: Finally, as the last step of the CI/CD pipeline, we trigger an action with CodeBuild to deploy the newly created container image to EKS as a deployment. For this deployment, we use Helm, the Kubernetes package manager, in the CodeBuild environment and provide the Helm charts for our application.
For more detailed information on every component that we create with Terraform, check out Step 2.
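To give a feel for the buildspec format mentioned above, here is a minimal sketch of what an image build-and-push stage might look like. This is illustrative only: the environment variable $ECR_REPO_URI and the directory layout are assumptions, and the actual buildspec files live in the pipeline_files directory of the demo repository.

```yaml
# Hypothetical buildspec sketch for the image build stage; the real files
# are in the pipeline_files directory of the demo repository.
version: 0.2

phases:
  pre_build:
    commands:
      # Authenticate the Docker client against the ECR registry
      - aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_REPO_URI
  build:
    commands:
      # Build the image from the Dockerfile in the app directory and tag it
      - docker build -t $ECR_REPO_URI:latest sample-cluster-app/
  post_build:
    commands:
      # Push the freshly built image to the ECR repository
      - docker push $ECR_REPO_URI:latest
```

Each pipeline stage (lint, test, build, deploy) points to its own buildspec, which keeps the stages independently maintainable.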

Step 1: Clone the Demo Code Repository and Apply the Terraform Manifests to Create an EKS cluster, ECR repository, and the CI/CD Components.

1) First, open a terminal and execute the command below to clone the demo code repository locally:
git clone https://github.com/build-on-aws/cicd-pipeline-for-containers.git
2) Next, navigate to the top directory of the demo code repository:
cd cicd-pipeline-for-containers
3) Before applying the Terraform manifests, we have to initialize the project. Run the terraform init command, which performs several different initialization steps in order to prepare the current working directory for use with Terraform:
terraform init
4) After the successful project initialization, run this command to create the necessary AWS components and infrastructure:
terraform apply
After executing the command above, Terraform will present you with the planned infrastructure changes, which you will need to approve:
 
Terraform Apply Output
The expected output should look like this after successfully creating all the components (~15 minutes):
Apply complete! Resources: 76 added, 0 changed, 0 destroyed.

Step 2: Let's Take a Look at What We Have Deployed to Our AWS Account with Terraform.

You can find the various components and building blocks that we have deployed in the different Terraform configuration files of the cloned repository. Although it's possible to include the Terraform configuration in a single file, we usually try to split the configuration across multiple files and group components together in a logical manner.
1) In the providers.tf file we define the necessary Terraform providers to install. In the local.tf and variables.tf files we define tags and various variable values. If you would like to change any of the variable or tag values, update these files.
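As an illustration of the variables format, a variables.tf entry typically looks like the following. The names and defaults here are a sketch, not copied from the repository:

```hcl
# Hypothetical excerpt; the names and defaults in the actual variables.tf may differ.
variable "aws_region" {
  description = "AWS Region to deploy all resources into"
  type        = string
  default     = "eu-west-1"
}

variable "cluster_name" {
  description = "Name of the EKS cluster"
  type        = string
  default     = "demo-cluster"
}
```

Defaults like these can be overridden at apply time with -var or a .tfvars file, which is why they are kept in a dedicated file.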
2) In the eks_cluster.tf file you can find the networking layer, defining a VPC, 3 private and 3 public subnets, a NAT Gateway, and an EKS cluster with 1 EC2 node. Connect to the EKS cluster from the command line by updating the kubeconfig file:
aws eks --region <your_aws_region> update-kubeconfig --name demo-cluster
Validate that you can see the node:
kubectl get nodes
The expected output should look like this:
NAME                                      STATUS   ROLES    AGE     VERSION
ip-10-0-2-77.eu-west-1.compute.internal   Ready    <none>   5h32m   v1.28.3-eks-e71965b
Alternatively, you can also see the cluster on the AWS Console by navigating to the EKS service:
AWS Console EKS Cluster
3) In the iam.tf file we create all the required IAM roles and policies for the pipeline. Namely, we need an IAM role and a policy for CodeBuild, CodePipeline, and the EventBridge trigger. In the policy_templates directory you can find the templates we use to create the IAM roles for CodeBuild and CodePipeline. Note that these permissions might be a bit permissive and should be tightened for production use.
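To sketch what such a role definition looks like, here is a minimal CodeBuild service role. The resource and role names are illustrative; the actual definitions and attached permission policies are in iam.tf and the policy_templates directory:

```hcl
# Hypothetical sketch of the CodeBuild service role; see iam.tf and
# policy_templates for the actual definitions.
resource "aws_iam_role" "codebuild" {
  name = "demo-codebuild-role"

  # Trust policy allowing the CodeBuild service to assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "codebuild.amazonaws.com" }
    }]
  })
}
```

The same pattern (a trust policy plus an attached permission policy) repeats for the CodePipeline and EventBridge roles.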
4) In the ecr.tf file we define the configuration needed to create the ECR container image repository. You can see the created container image repository on the AWS console:
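A minimal ECR repository definition might look like the sketch below. The repository name and settings are assumptions; the actual definition lives in ecr.tf:

```hcl
# Hypothetical sketch; the actual definition lives in ecr.tf.
resource "aws_ecr_repository" "demo" {
  name = "demo-ecr-repo"

  # Allow the pipeline to overwrite the "latest" tag on every build
  image_tag_mutability = "MUTABLE"

  # Scan each pushed image for known vulnerabilities
  image_scanning_configuration {
    scan_on_push = true
  }
}
```

Mutable tags keep the tutorial simple; for production you would typically push immutable, uniquely tagged images instead of reusing latest.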
ECR Container Image Repository
5) In the codecommit.tf, codebuild.tf, and codepipeline.tf files you can find the new code repository, the stages of the CI/CD pipeline, the CodeBuild configuration, and a few S3 buckets used to store CI/CD artifacts and logs. In the pipeline_files directory, you can find the scripts used by CodeBuild in each of the test, lint, build, and deploy steps of the pipeline. Head to AWS CodePipeline to get an overview of the whole CI/CD flow. Initially, the pipeline will be in Failed status as the CodeCommit repository we created is empty. Next, we will push our code to trigger the pipeline.
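The EventBridge trigger that starts the pipeline on a push can be sketched as follows. The resource names and the referenced pipeline and role are hypothetical; the actual rule is defined alongside the pipeline configuration:

```hcl
# Hypothetical sketch of the EventBridge rule that starts the pipeline
# when commits land on the main branch of the CodeCommit repository.
resource "aws_cloudwatch_event_rule" "codecommit_push" {
  name = "demo-codecommit-push"

  # Match pushes (new or updated refs) to the main branch
  event_pattern = jsonencode({
    source      = ["aws.codecommit"]
    detail-type = ["CodeCommit Repository State Change"]
    detail = {
      event         = ["referenceCreated", "referenceUpdated"]
      referenceName = ["main"]
    }
  })
}

resource "aws_cloudwatch_event_target" "pipeline" {
  rule     = aws_cloudwatch_event_rule.codecommit_push.name
  arn      = aws_codepipeline.demo.arn     # pipeline resource name is hypothetical
  role_arn = aws_iam_role.eventbridge.arn  # role resource name is hypothetical
}
```

This event-driven trigger is why an empty repository leaves the pipeline in Failed status until the first push arrives.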
6) In the sample-cluster-app repository, you can find the demo application code and the Dockerfile used to build the container image. In the helm_charts directory, you can find the Kubernetes deployment charts for the application.
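To illustrate how the deploy stage parameterizes the image, a templated Helm Deployment manifest generally looks like the excerpt below. This is a sketch: the actual charts are in the helm_charts directory, and the image repository and tag values are injected by the pipeline's deploy stage:

```yaml
# Hypothetical excerpt of a templated Deployment; see helm_charts for the
# actual charts used by the pipeline.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  namespace: sample-cluster-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: app
          # Image coordinates are supplied by the deploy stage via --set flags
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Keeping the image coordinates in values makes the chart reusable across environments without editing the template itself.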

Step 3: Push the Demo Application Code to the AWS CodeCommit Repository.

In this section, we will upload the demo application code, the AWS CodeBuild configuration files, and the helm manifests to deploy the app to our EKS cluster.
1) To do that, navigate via the AWS CodeCommit Console to your newly created AWS CodeCommit repository demo-codecommit-repo
2) Click the Clone URL button, then select Clone HTTPS if you plan to use Git credentials for CodeCommit, or Clone HTTPS (GRC) if you want to connect to CodeCommit using a root account, federated access, or temporary credentials.
Clone CodeCommit Repository
3) Navigate back to the same directory where you cloned the previous repo. Copy and paste the clone URL into your command line. It should look similar to this (update your AWS Region):
cd ..; git clone codecommit::<your_aws_region>://demo-codecommit-repo
4) Then copy these 3 directories from the first code repository you cloned to the new one and push the code changes:
cp -r cicd-pipeline-for-containers/helm_charts demo-codecommit-repo/helm_charts/
cp -r cicd-pipeline-for-containers/pipeline_files demo-codecommit-repo/pipeline_files/
cp -r cicd-pipeline-for-containers/sample-cluster-app demo-codecommit-repo/sample-cluster-app/
5) Then, navigate to the demo-codecommit-repo directory (cd demo-codecommit-repo) and add, commit, and push the changes:
git add helm_charts/ pipeline_files/ sample-cluster-app/
git commit -m "Add demo application code, CodeBuild config files, and helm charts"
git push origin main
After successfully pushing the new code, navigate via the AWS Console to AWS CodePipeline to see it in action. Your new code should trigger a new release change and kick-start the CI/CD process.
Note that if you are using macOS, use HTTPS or HTTPS (GRC) to connect to a CodeCommit repository. After you connect to a CodeCommit repository with HTTPS for the first time, subsequent access fails after about 15 minutes. The default Git version on macOS uses the Keychain Access utility to store credentials. As a security measure, the password generated for access to your CodeCommit repository is temporary, so the credentials stored in the keychain stop working after about 15 minutes. To fix this, set up the credential helper.
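As a sketch of that fix, the AWS CLI ships a CodeCommit credential helper that generates fresh credentials for every Git operation instead of relying on the macOS keychain:

```shell
# Point Git at the AWS CLI credential helper for CodeCommit HTTPS access
git config --global credential.helper '!aws codecommit credential-helper $@'
# Send the repository path to the helper so per-repository credentials work
git config --global credential.UseHttpPath true
```

With this in place, Git invokes the AWS CLI on each fetch or push, so expiring temporary passwords are no longer an issue.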

Step 4: Validate the Successful Pipeline Execution and Deployed Application.

In this section, you will validate that the CI/CD pipeline completed successfully, that the container image is stored in ECR, and that the demo application has been deployed to EKS.
1) Navigate to your demo-codepipeline AWS CodePipeline on the AWS Console and verify that the pipeline has been triggered and that eventually all the stages run successfully. If you want to re-trigger the pipeline, you can either push a new commit to the CodeCommit repository or click the Release change button on the demo-codepipeline view.
2) Navigate to the ECR container image repository demo-ecr-repo and validate that an image with the tag latest has been pushed. The ECR container image repository should look like this:
ECR Pushed Image
3) Then, connect to the EKS cluster via the command line to check our deployed application (update with your AWS Region):
aws eks --region <your_aws_region> update-kubeconfig --name demo-cluster
4) Check for deployed pods on the sample-cluster-app namespace:
kubectl get pods -n sample-cluster-app
The expected output should look like this:
NAME                                  READY   STATUS    RESTARTS   AGE
sample-cluster-app-85cdd56f86-bbw25   1/1     Running   0          23m
To clean up, run terraform destroy from the initially cloned repository.
If you encounter an issue while deleting the VPC, navigate to the AWS Console on the VPC service. Select Endpoints and delete the cluster-vpc endpoint manually:
Delete Endpoints
After a few minutes, the endpoint is deleted. Continue with deleting the Network Interfaces:
Clean Up Network Interfaces
And finally, delete the cluster-vpc and its subnets:
Delete VPC

If you rerun terraform destroy, you should now see this output:
No changes. No objects need to be destroyed.
Either you have not created any objects yet or the existing objects were already deleted outside of Terraform.
Destroy complete! Resources: 0 destroyed.

Conclusion

With the completion of this tutorial, you have successfully configured a CI/CD pipeline tailored for microservices by combining AWS CodePipeline, AWS CodeCommit, AWS CodeBuild, and Elastic Container Registry. By setting up this pipeline to deploy new code changes, we've empowered developers to automate processes, ensuring that new code changes swiftly move from integration to deployment in the EKS environment. By leveraging the capabilities of AWS to build your CI/CD pipelines, you're setting a standard for quality, speed, and reliability throughout your software delivery process. This setup leaves you with a robust, fully functional CI/CD pipeline environment, ready for testing, building, and deploying applications.
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.