
Build Efficient CI/CD Pipelines for Connected Microservices in Under an Hour Using AWS Copilot

Here's how to build CI/CD pipelines for connected microservices using AWS Copilot. This accelerates deploying and hosting container-based services on the AWS Cloud using Amazon ECS.

Published Apr 26, 2023
Last Modified Mar 15, 2024
With the rise of application modernization, a significant topic of discussion is breaking down monolithic applications into microservices. Essentially, the process begins with breaking the monolith into its individual working parts, making it easier to create a virtualized application environment using tools like containers. During the process, another question arises: whether to use a single (mono) repository for all microservices or to keep each microservice in its own repository.
Using a repository for each microservice has several benefits:
  • Facilitating faster software release
  • Enabling the creation of smaller teams to develop and deliver a single service
  • Allowing the team to maintain a smaller codebase, reducing complexity
  • Allowing for faster build and deployment processes with a smaller codebase
  • Allowing for the freedom to write code independently and differently from all other services (using different programming languages, libraries, approaches, etc)
Creating a separate repository for each microservice lets a team maintain its own development and release cycles. Having said this, the next question that arises is how to easily set up the required infrastructure when the microservices are interconnected. As with most things, there is a trade-off: the complexity doesn't just magically disappear. In this instance, the complexity of having multiple parts of a system in a single application is reduced by splitting them into separate services, and that complexity moves into the infrastructure and coordination. Luckily, the tooling and approaches to manage this have improved in the last decade as microservice architecture has matured.
This tutorial will show you how to take advantage of the AWS Copilot CLI to build efficient CI/CD pipelines for connected microservices in under an hour. AWS Copilot CLI is a tool for developers to build, release, and operate production-ready, containerized applications on AWS App Runner, Amazon ECS, and AWS Fargate. More information about Copilot and its core concepts can be found in the official Copilot documentation.
About
✅ AWS experience: 200 - Intermediate
⏱ Time to complete: 60 minutes
💰 Cost to complete: Free tier eligible
🧩 Prerequisites: AWS Account
💻 Code Sample: Code sample used in tutorial on GitHub
📢 Feedback: Any feedback, issues, or just a 👍 / 👎 ?
⏰ Last Updated: 2023-04-26

Architecture

The architecture diagram above shows a three-tier application with an Application Load Balancer (ALB) forwarding traffic to the frontend ECS service. The frontend ECS service serves the webpage content to the browser and also acts as a router, communicating with the backend ECS service that executes the business logic. The backend ECS service in turn uses a DynamoDB table to insert, update, and delete data. In this case, the frontend and backend are separate microservices that are maintained using their own CI/CD pipelines and Git repositories. The frontend communicates with the backend using a user-friendly DNS domain name that is created as part of service discovery. The service discovery feature on Amazon ECS manages the DNS records during scale-in and scale-out events. This infrastructure allows us to deploy each microservice as an independent unit, even though the applications depend on each other for functionality, and each can also be accessed individually.
In the next section, you will build and deploy the above architecture using AWS Copilot. You will use a sample “Todo” application. It has a UI built with ReactJS that is served by the frontend service. The Todo app helps you organize your work and life: you can add new to-dos and delete the ones you don’t need, and the items are stored in a DynamoDB table for subsequent access. The frontend service also acts as a router to the backend service, using an Nginx proxy server to forward requests received from the load balancer. The backend service implements the actual business logic of managing the data stored in the DynamoDB table. Service discovery associated with the backend service maintains the DNS records within the Route 53 private hosted zone. This is essential for the frontend service to keep reaching the backend service through a user-friendly URL, since the internal IP addresses of the backend service can change due to scaling events. Both the frontend and backend are deployed within the same VPC and placed in the private subnets, while the Application Load Balancer is placed in the public subnets. All the infrastructure required for this functionality to work can be created using AWS Copilot.

Prerequisites

There are some prerequisites required to proceed with this post. Make sure you have the following items configured before moving on to the next sections:
  1. An active AWS Account
  2. AWS CLI installed and configured to interact with your account
  3. Git client installed
  4. Docker installed and running on your workstation

Initial Setup

In this section, you will do the initial setup: pull down the sample application code and initialize the Copilot application and environment from your terminal as the starting point for using Copilot. We will start with this sample application, where all the code is in a single mono-repo, deploy the frontend and backend as separate services, and then move each into its own repository with a CI/CD process. If you have not already set up the Copilot CLI, please ensure you have installed it.

1. Download a copy of the sample application

Pull down the sample application to your local workstation and change the directory to the code location:
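As a rough sketch (the sample is assumed to be hosted under the build-on-aws GitHub organization; confirm the exact URL via the code sample link above), the commands look like this:

  git clone https://github.com/build-on-aws/automate-container-microservices-aws-copilot.git
  cd automate-container-microservices-aws-copilot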
The code directory structure should look like this (if you are interested, this listing was generated using tree -d):

2. Navigate into the backend folder

Make sure you are within the backend folder on your terminal. If not, run the below command to change directory:
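Assuming you are at the repository root and the backend sits under code/backend (mirroring the code/frontend path used later in this tutorial), the command is:

  cd code/backend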
The folder structure should look like this:

3. Initialize an application using copilot

The first step in using Copilot is to initialize a new application using our existing code. Run this command to initialize the backend with Copilot:
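The application is named todo throughout this tutorial (the name appears later in the Parameter Store path and service discovery endpoints), so the command is:

  copilot app init todo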
As a result of executing the command above, you should see something similar to this:
When you run copilot app init, Copilot creates an IAM role using CloudFormation to manage services and jobs. Copilot also registers the /copilot/applications/todo parameter to the Systems Manager Parameter Store. If you run copilot app ls, for example, Copilot would check the Parameter Store and notify you of all applications in the AWS Region.
You can see the newly created copilot directory. This directory saves the manifest files.

4. Set up a development environment

Now that we have initialized an application, we need somewhere to deploy it to. Copilot allows defining multiple environments, and we will be setting up one called development. Run the copilot env init command to create a development environment. Follow the interactive prompts; you will be asked a few questions and should provide the following answers:
  • What is your environment's name? Enter development.
  • Which credentials would you like to use to create development? Choose the profile for your development account and press Enter.
  • Would you like to use the default configuration for a new environment? Select Yes, use default. and press Enter.
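If you prefer to skip the interactive prompts, Copilot also accepts these answers as flags; a rough non-interactive equivalent (assuming your AWS credentials profile is named default) is:

  copilot env init --name development --profile default --default-config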
This command generates a yaml file in copilot/environments/development/manifest.yml. This manifest file contains the configuration for the environment that Copilot set up.
When creating an environment, Copilot registers it in the AWS Systems Manager Parameter Store and creates an IAM role to manage the CloudFormation stack it creates for all the infrastructure. You can see the progress in the terminal. When the environment creation is complete, you will see the output on the screen:
Run the copilot env ls command and check if the development environment has been created. You should see an output like this:

5. Set up required infrastructure in the development environment

Now that we have our Copilot application initialized and an environment set up, it is time to set up the required infrastructure to run our containerized applications. Run the copilot env deploy command. It should only prompt you for inputs if you've used Copilot before and have set up other applications in the account you specified in the initialize command; if you only set up a single environment, it will default to using it. It will then proceed to create the VPC, public subnets, private subnets, a Route 53 private hosted zone for service discovery, a custom route table, a security group for containers to talk to each other, and an ECS cluster to group the ECS services. When the environment creation is complete, you will see this output:
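To be explicit about which environment to deploy, you can pass its name as a flag:

  copilot env deploy --name development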
We now have everything ready to deploy our first service. In the next sections, we will deploy the infrastructure and the application. We will deploy the backend first, and then our frontend.

Set Up the Backend Service

We're now ready to deploy our backend. To illustrate practically how to split a monolith, we will deploy the existing code from our backend folder inside the mono-repo, create the DynamoDB table it requires, and then move it to a new repo. Once the code is committed to the new repo, we will set up a CI/CD pipeline that will automatically build and deploy any future changes we make to it.

1. Initialize the backend service

In this step, you will create an ECS service that hosts the backend application within ECS tasks. As pictured in the architecture diagram, the backend service also needs a DynamoDB table as its database. You will create that as well in this step via Copilot.
Run the copilot svc init command to create a manifest file which defines your backend service.
You will be asked a few questions and should provide the following answers:
  • Which service type best represents your service's architecture? Select Backend Service and press Enter.
  • What do you want to name this service? Enter backend.
  • Which Dockerfile would you like to use for backend? Select ./Dockerfile and press Enter.
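The same answers can be supplied as flags if you want to avoid the prompts:

  copilot svc init --name backend --svc-type "Backend Service" --dockerfile ./Dockerfile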
This command creates an ECR repository to save Docker images for the backend service and creates a copilot/backend directory to save a manifest file. When the backend service creation is complete, you will see this output:
This will only create the ECR repository for the container image; it does not build the image or deploy it yet. The copilot svc init command created a manifest file at copilot/backend/manifest.yml. To ensure we can monitor whether our container is healthy, we need to add a healthcheck. This is an endpoint that is periodically accessed to report if the application is healthy. It can run a series of internal checks, or just respond with an HTTP "OK" (HTTP response 200). To add a healthcheck, modify copilot/backend/manifest.yml and add the healthcheck section shown below in the image section, directly below port:
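The values below are a sketch rather than the original snippet: the port (10000) is taken from the service discovery URL referenced later in this tutorial, and the healthcheck command should hit whatever route app.py exposes for this purpose. With those assumptions, the image section would look something like this:

  image:
    # existing settings generated by copilot svc init (your values may differ)
    build: Dockerfile
    port: 10000
    # added healthcheck (sketch): adjust the command to match the route in app.py
    healthcheck:
      command: ["CMD-SHELL", "curl -f http://localhost:10000/ || exit 1"]
      interval: 10s
      retries: 2
      timeout: 5s
      start_period: 5s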
If you are curious how the healthcheck was implemented in the backend, take a look at the route added in app.py:

2. Create a DynamoDB table

In this step, you will be creating the DynamoDB table required by the backend service using Copilot. The backend service uses a DynamoDB table to save to-do items. You can add storage resources to services with the copilot storage init command.
Run the copilot storage init command to create a DynamoDB table to be used by the backend service.
You will be asked a few questions and should provide the following answers:
  • What type of storage would you like to associate with backend? Select DynamoDB and press Enter.
  • What would you like to name this DynamoDB Table? Enter todotable.
  • Do you want the storage to be created and deleted with the backend service? Select Yes, the storage should be created and deleted at the same time as backend
  • What would you like to name the partition key of this DynamoDB? Enter TodoId.
  • What datatype is this key? Select Number and press Enter.
  • Would you like to add a sort key to this table? Enter n.
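For reference, an approximately equivalent one-shot command would be the following (flag names can vary slightly between Copilot versions):

  copilot storage init --name todotable --storage-type DynamoDB --workload backend --partition-key TodoId:N --no-sort --no-lsi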
By running the copilot storage init command, Copilot creates a copilot/backend/addons directory and a CloudFormation template: todotable.yml in the directory. Copilot uses this template to create additional resources. Once the CloudFormation template has been created, you should see the following output:
The above setup also creates an environment variable in the ECS task definition with the name TODOTABLE_NAME. If you look at the app.ts file, the application is already equipped to use this environment variable with the following line of code:

3. Deploy the backend service

Run the copilot svc deploy command to deploy the backend service to the development environment. Since you have not yet built the Docker image, this will take some time; you can watch the progress in the terminal.
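To be explicit about the service and target environment, you can pass them as flags:

  copilot svc deploy --name backend --env development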
After the deployment is complete, you should see this output:
After deployment, run copilot svc status to see the service status. If the status is ACTIVE, then your container is running normally.
For the backend service, Copilot also sets up service discovery. Service discovery uses AWS Cloud Map API actions to manage HTTP and DNS namespaces for your Amazon ECS services to enable other services to send requests to your services. COPILOT_SERVICE_DISCOVERY_ENDPOINT is a special environment variable that the Copilot CLI sets for you when it creates the service. The format is {env name}.{app name}.local and requests to /api are passed to http://backend.development.todo.local:10000/. The endpoint backend.development.todo.local resolves to a private IP address and is routed privately within your VPC to the backend service. When configuring the frontend service, you will be using the COPILOT_SERVICE_DISCOVERY_ENDPOINT environment variable to access the backend service using service discovery. More information about Service Discovery can be found here.

4. Set up the CI/CD pipeline for backend

We are now ready to move our backend service's code to its own repository. First, let's move the code to a new, empty directory: change back to the root directory containing the originally cloned automate-container-microservices-aws-copilot directory, and then run:
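The exact commands aren't reproduced here, but assuming the backend code (including the copilot/ directory generated earlier) lives under code/backend, something like the following copies it into a new todobackend directory:

  mkdir todobackend
  cp -R automate-container-microservices-aws-copilot/code/backend/. todobackend/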
The todobackend folder should now contain the following:
We are now ready to initialize this new directory as a git repository, create a hosted git repository, and set our local git repository to use the new remote one. We will be using an AWS CodeCommit repository to illustrate this approach. Use the following commands to change into the backend source directory, set up the new CodeCommit repo, initialize the local directory as a git repository, and then set it to use the remote CodeCommit repository:
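A sketch of those commands, assuming the new repository is named todobackend to match the directory (note the $(aws configure get region) substitution discussed below):

  cd todobackend
  aws codecommit create-repository --repository-name todobackend
  git init
  git switch -c main
  git remote add origin https://git-codecommit.$(aws configure get region).amazonaws.com/v1/repos/todobackend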
Next, run copilot pipeline init to generate a pipeline manifest - this creates the configuration for the pipeline in a manifest file, but does not yet set it up. You will notice the $(aws configure get region) in the URL; this retrieves the region that your AWS CLI is set up to use, ensuring we create the CodeCommit repository in the same region as the rest of our infrastructure. You will also see an error message like the following between the command you entered and where it asks for input:
This is because git still uses master as the default branch name and we changed it to main with the git switch -c main command, but have yet to commit anything. Copilot will default to using main since it is the currently selected branch, so you can safely ignore this error. Let's continue with the prompts to set up our CI/CD pipeline. You will be asked a few questions and should provide the following answers:
  • What would you like to name this pipeline? Type todobackend-main and press Enter.
  • What type of continuous delivery pipeline is this? Move to Workloads and press Enter.
  • Which environment would you like to add to your pipeline? Move to development and press Enter.
  • Which environment would you like to add to your pipeline? Move to No additional environments and press Enter. (If you have never used Copilot before, this option will not be shown, and it will default to the development environment.)
Then you should see this output:
We are now ready to create our pipeline; run copilot pipeline deploy to create it. This command uses the CodeCommit repository that was added as a remote git repository as the source stage of the CodePipeline pipeline. Once this command succeeds, you should see output on the terminal like this:
Finally, push the local code to the CodeCommit repository:
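A typical sequence for that first push is:

  git add .
  git commit -m "Initial commit"
  git push --set-upstream origin main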
At this point, you can go to the CodePipeline console in your AWS account to see the pipeline running. Any future changes to the backend application can be made within the git repository, and pushing it to the remote repository should automatically deploy your code via CodePipeline.

Set Up the Frontend Service

We will now set up the frontend service. Since we already set up the Copilot application and environment while configuring the backend service in the previous section, we won't need to create those again, and just use the existing ones. We will follow the same approach of first creating and deploying the frontend, then move it to its own repository, and then set up the CI/CD pipeline to deploy any changes made to that repository.

1. Initialize the frontend application

To deploy our frontend, first navigate back to the code/frontend directory inside automate-container-microservices-aws-copilot. The folder structure should look like this:
Execute copilot app init to initialize the frontend application. Copilot will detect the previous application and environment we created. You will be asked a few questions and should provide the following answers:
  • Would you like to use one of your existing applications? (Y/n) Choose Y.
  • Which existing application do you want to add a new service or job to? [Use arrows to move, type to filter, ? for more help] Choose todo.
You should see these results:

2. Initialize the frontend service

We are now ready to initialize and deploy the frontend service. You may have noticed that we skipped the step to set up the environment; that is intentional, as Copilot will detect any environments that are already configured. If you have used Copilot before, you will be presented with a list to choose from. If this is your first time, it will detect that there is only the development environment and default to using it.
To initialize our frontend service, run the copilot svc init command to create a manifest file that defines it. You will be asked a few questions and should provide the following answers:
  • Which service type best represents your service's architecture? Select Load Balanced Web Service and press Enter.
  • What do you want to name this service? Enter frontend.
  • Which Dockerfile would you like to use for frontend? Select ./Dockerfile and press Enter.
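As with the backend, the prompts can be answered with flags instead:

  copilot svc init --name frontend --svc-type "Load Balanced Web Service" --dockerfile ./Dockerfile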
Similar to the backend service, this will create an ECR repository to store Docker images for the frontend service, and create the manifest file in copilot/frontend. When completed, you should see similar output:

3. Deploy the frontend service

We're now ready to deploy the frontend. Run copilot svc deploy to deploy it to the development environment. We have not built a Docker image for it yet, so it will take a bit of time to do so. The output from the command will keep you updated on the progress and any errors it encounters. Once the deployment is done, you should see the following:
You can now access the todo app using a browser and the URL that was returned by the deployment. To see more details about the services, you can run copilot svc status, and select which service you want the status of. To confirm that the container deployed successfully, select the frontend service, and look for the RUNNING status. You should see output similar to this:
At this point, we have deployed both the frontend and backend services, and you can see the application running by opening a browser and using the URL from the output. It should look something like this:
If you are wondering how the frontend service knows about the backend service, then take a look at the Nginx configuration file: backend.conf.template. Requests to /api are passed to backend.${COPILOT_SERVICE_DISCOVERY_ENDPOINT}. COPILOT_SERVICE_DISCOVERY_ENDPOINT is a special environment variable that the Copilot CLI sets for you when it creates your service. It uses the format {env name}.{app name}.local, and requests to /api are passed to http://backend.development.todo.local:10000/. The endpoint backend.development.todo.local resolves to a private IP address and is routed privately within your VPC to the backend service. More information about Service Discovery can be found here.
We still have one more task left, and that is to set up the CI/CD pipeline to automatically deploy any changes to the frontend for us.

4. Set up the CI/CD pipeline for frontend

Similar to the backend, we will be moving the frontend code to its own repository, and use CodeCommit again. Please make sure you are in the directory that contains the automate-container-microservices-aws-copilot directory, and run the following commands:
The todofrontend folder should now contain the following:
We are now ready to set up the new repository with the following commands:
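These mirror the backend setup; assuming the new repository is named todofrontend to match the directory, they look roughly like this:

  cd todofrontend
  aws codecommit create-repository --repository-name todofrontend
  git init
  git switch -c main
  git remote add origin https://git-codecommit.$(aws configure get region).amazonaws.com/v1/repos/todofrontend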
Similar to before, we use $(aws configure get region) to inject the region our AWS CLI is set up to use so it matches the rest of our infrastructure.
We can now generate our pipeline manifest by running copilot pipeline init. You will be asked a few questions and should provide the following answers:
  • What would you like to name this pipeline? Type todofrontend-main and press Enter.
  • What type of continuous delivery pipeline is this? Move to Workloads and press Enter.
  • Which environment would you like to add to your pipeline? Move to development and press Enter.
  • Which environment would you like to add to your pipeline? Move to No additional environments and press Enter. (This will only be asked if you have used Copilot before and created additional environments.)
You should see the following output (ignoring the error regarding which branch to use, as before):
Next, run copilot pipeline deploy to create the pipeline. This command uses the CodeCommit repository that was added as a remote git repository as the source stage of the CodePipeline pipeline. Once this command succeeds, you should see this output:
Finally, push the local code to the CodeCommit repository with:
Note: Make sure you push the Copilot directory with all its contents.
At this point, you can go to the AWS CodePipeline console in your AWS account to see the pipeline running. Any future changes to the frontend application can be made within the git repository, and pushing them to the remote repository should automatically deploy your code via CodePipeline.

Conclusion

Congratulations! You have just deployed a two-service microservice application using Copilot! I hope you enjoyed seeing how much easier AWS Copilot makes it to deploy well-architected microservices that follow AWS best practices, along with CI/CD pipelines to deploy any changes. This tutorial illustrated how to set up two microservices and how they communicate with each other, but you can add as many services as you need by following what you learned. You can continue to use the application we just deployed, or remove it to avoid incurring an ongoing monthly cost.

Clean up

To remove all the resources created in this tutorial, please run the following commands:
  1. In the todofrontend directory, run copilot app delete, then press y and Enter to confirm. This will delete the backend, frontend, CI/CD pipelines, and the development environment. You should see the following output once it completes successfully after a few minutes:
  2. Run the following two commands to delete the CodeCommit repositories created:
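Assuming the repository names used earlier in this tutorial, those two commands are:

  aws codecommit delete-repository --repository-name todobackend
  aws codecommit delete-repository --repository-name todofrontend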
If you enjoyed this tutorial, found any issues, or have feedback for us, please send it our way!

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
