Automate your container deployments with CI/CD and GitHub Actions
Learn how to test and deploy a containerized Flask app to the cloud with CI/CD with GitHub Actions.
Jenna Pederson
Amazon Employee
Published Dec 16, 2022
Last Modified Mar 7, 2024
You've built out the first version of your Flask web app and even containerized it with Docker so your developer teammates can run it locally. Now, it's time to figure out how to deploy this container into the world! There are two key goals you want to accomplish with your deployment: first, you want your app to stay current, deploying whenever you or your teammates push a new feature up to the repo; second, you want to make sure your code is high-quality and immediately valuable to customers. To deliver on these goals, you'll create a simple CI/CD pipeline to deploy your container to infrastructure in the cloud.
For the CI/CD pipeline, we'll use GitHub Actions to create a workflow with two jobs. The two jobs below will be triggered when we push code to the main branch of our code repo:
- a test job to run unit tests against our Flask app and
- a deploy job to create a container image and deploy that to our container infrastructure in the cloud.
First, we'll configure the containerized Flask app to run in the cloud. Then, we'll create this infrastructure using the AWS CDK, an infrastructure as code framework that lets us create infrastructure with higher-level programming languages like Python. Finally, we'll set up our GitHub Actions workflow to test and deploy our app.
We're only deploying one container today, but this solution would work if you're running multiple containerized services.
Below is an architecture diagram of what we'll be building today:
Let's get started!
To work through these examples, you'll need a few bits set up first:
- An AWS account. You can create your account here.
- The CDK installed. You can find instructions for installing the CDK here. Note: For the CDK to work, you'll also need to have the AWS CLI installed and configured, or set up the `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_DEFAULT_REGION` environment variables. The instructions above show you how to do both.
- Docker Desktop installed. Here are the instructions to install Docker Desktop.
Just want the code? Grab the CDK code to create the infrastructure here or the Flask app, container configuration, and GitHub Actions workflow here.
Before we provision the infrastructure, let's review our containerized Flask app and configure it to run in the cloud.
First, clone the repository using the `start-here` branch. We have a containerized Flask app with the following file structure:

In the main application file, `app.py`, there is one route that reverses the value of a string passed on the URL path and returns it. And we've implemented one test in the `app_test.py` file to ensure our business functionality of reversing that string value is working.

We've already got a `Dockerfile` to set up our container image. This file is a template that gives instructions to Docker on how to create our container. The first line, starting with `FROM`, bases our container on a public Python image, and from there we customize it for our use. We set up the working directory (`WORKDIR`), copy application files into that directory on the container (`COPY`), install dependencies (`RUN pip install`), open up port 8080 (`EXPOSE`), and specify the command that runs the app (`CMD python`).

Let's run this app locally to make sure it works before we start configuring it for the cloud.
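Before we do, here's a rough sketch of the pieces described above -- the route in `app.py` and the unit test -- with the route path and function names assumed for illustration (the sample repo has the canonical files):

```python
# Sketch of app.py: one route that reverses the string from the URL path.
# The route path and function names are assumptions for illustration.
from flask import Flask

app = Flask(__name__)

def reverse_string(value: str) -> str:
    """The business logic under test: reverse the input string."""
    return value[::-1]

@app.route("/<string:value>")
def reverse(value: str) -> str:
    return reverse_string(value)

# Sketch of app_test.py: one unit test for the reversing behavior.
def test_reverse_string():
    assert reverse_string("hello-world") == "dlrow-olleh"
```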
Make sure Docker Desktop is running and then, from the `hello-flask` project directory, run the following command to build the container image. When this is complete, you'll have a container image. Next, we'll run the command to start a container based on that image:
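The two commands might look like this (the image tag `hello-flask` is an assumption; use whatever name you prefer):

```shell
# Build the container image from the Dockerfile in the current directory
docker build -t hello-flask .

# Run a container from that image, mapping host port 8080 to the container
docker run -dp 8080:8080 hello-flask
```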
You should now be able to point your browser to `http://localhost:8080/hello-world` and see that it returns `dlrow-olleh`.

If that doesn't work, check to make sure your container is running using the command `docker ps`. You can also use the Docker Desktop Dashboard to see if the container is running.
You can also make sure your unit test is working by running the following commands in your project directory:
The first two commands install your app dependencies and the pytest test framework. The third command runs your tests.
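Assuming the dependency and test file names from the repo structure above, those commands would look something like:

```shell
pip install -r requirements.txt   # install app dependencies
pip install pytest                # install the test framework
python -m pytest app_test.py      # run the unit tests
```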
Now that we know our app works locally, we need to configure it for the cloud. We'll use Amazon ECS with AWS Fargate for our container orchestrator, but there are others you could use as well. Using Fargate instead of Amazon EC2 instances means we won't have to manage servers, and Fargate will handle scaling containers up and down for us.
To configure our app, we'll create the `task-definition.json` file that ECS needs. This is a blueprint for our application. We can add multiple containers (up to 10) to compose our app; today, we only need one. Using the code below, you'll replace `YOUR_AWS_ACCOUNT_ID` with your own AWS account ID.

To provision our infrastructure resources, we'll use the CDK, but you could use other infrastructure as code frameworks like Terraform or CloudFormation.
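A minimal `task-definition.json` along those lines might look like this (a sketch -- the family, role, and container names here are assumptions to adapt to your setup):

```json
{
  "family": "ecs-devops-sandbox-task-definition",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::YOUR_AWS_ACCOUNT_ID:role/ecs-devops-sandbox-execution-role",
  "containerDefinitions": [
    {
      "name": "ecs-devops-sandbox",
      "image": "ecs-devops-sandbox-repository:latest",
      "portMappings": [{ "containerPort": 8080 }],
      "essential": true
    }
  ]
}
```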
Now, we're ready to create our CDK app that will create the infrastructure where we'll deploy our app.
First, we'll initialize a CDK app that uses Python. In a new project directory, run the `cdk init` command:
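With Python as the language, that command is:

```shell
# Scaffold a new CDK app in Python (run inside an empty project directory)
cdk init app --language python
```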
This creates the file structure shown below. Today, we'll be hanging out in the `ecs_devops_sandbox_cdk/ecs_devops_sandbox_cdk_stack.py` file. Copy the code below and replace the contents of this file; we'll walk through the various parts next.
The code below sets up our VPC and a task execution role so that the ECS task can pull our container image from Amazon ECR, the repository where we'll store our container images.
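Inside the stack's `__init__`, that section might look roughly like this (a sketch -- the construct IDs, role name, and exact permissions are assumptions):

```python
from aws_cdk import aws_ec2 as ec2, aws_iam as iam

# A VPC for our cluster to live in
vpc = ec2.Vpc(self, "ecs-devops-sandbox-vpc", max_azs=3)

# A task execution role so ECS can pull images from ECR and write logs
execution_role = iam.Role(
    self,
    "ecs-devops-sandbox-execution-role",
    assumed_by=iam.ServicePrincipal("ecs-tasks.amazonaws.com"),
    role_name="ecs-devops-sandbox-execution-role",
)
execution_role.add_to_policy(
    iam.PolicyStatement(
        effect=iam.Effect.ALLOW,
        resources=["*"],
        actions=[
            "ecr:GetAuthorizationToken",
            "ecr:BatchCheckLayerAvailability",
            "ecr:GetDownloadUrlForLayer",
            "ecr:BatchGetImage",
            "logs:CreateLogStream",
            "logs:PutLogEvents",
        ],
    )
)
```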
Next, we need a place to put our container images after we've built them so that they can be deployed. The code below creates a private ECR repository.
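That part might look like this (a sketch; the repository name matches the project name used later in this post):

```python
from aws_cdk import aws_ecr as ecr

# A private ECR repository to hold our container images
ecr_repository = ecr.Repository(
    self,
    "ecs-devops-sandbox-repository",
    repository_name="ecs-devops-sandbox-repository",
)
```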
This code creates an ECS Cluster. A cluster is a logical grouping of tasks or services.
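A sketch of the cluster construct, continuing inside the stack's `__init__` (the cluster name is an assumption):

```python
from aws_cdk import aws_ecs as ecs

# A logical grouping for our tasks and services, placed in the VPC above
cluster = ecs.Cluster(
    self,
    "ecs-devops-sandbox-cluster",
    cluster_name="ecs-devops-sandbox-cluster",
    vpc=vpc,
)
```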
Next, this code creates a simple task definition so our infrastructure will start up (without our app deployed the first time) and adds a container to the task definition.
Finally, there is a service, which allows you to run a specified number of instances of that task definition.
If any of the instances fails or stops, ECS will launch another instance of your task definition to replace it and maintain the desired count of tasks in the service. We always want at least one container running, so we'll use a service. If we were running a one-time or scheduled job, we could omit the service as we wouldn't need to keep it running or restart it.
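Sketched out, the task definition, container, and service (Option 1) might look like this -- the names and the placeholder image are assumptions:

```python
from aws_cdk import aws_ecs as ecs

# A Fargate task definition with a placeholder container so the
# infrastructure can start up before our app's first deployment
task_definition = ecs.FargateTaskDefinition(
    self,
    "ecs-devops-sandbox-task-definition",
    execution_role=execution_role,
    family="ecs-devops-sandbox-task-definition",
)
task_definition.add_container(
    "ecs-devops-sandbox",
    image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"),
)

# Option 1: a plain Fargate service that keeps one task running
service = ecs.FargateService(
    self,
    "ecs-devops-sandbox-service",
    cluster=cluster,
    task_definition=task_definition,
    service_name="ecs-devops-sandbox-service",
    desired_count=1,
)
```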
💰💰💰 Note: In the sample code we copied, we are using Option 1. Option 2 (commented out) creates a load balancer and related AWS resources using the `ApplicationLoadBalancedFargateService` construct. Both of these options create resources with non-trivial costs if left provisioned in your account, even if you don't use them. Be sure to clean up your resources (`cdk destroy`) after working through this exercise.

Now that we've created our CDK stack of resources, we can deploy it to create the infrastructure in the cloud.
First, activate the Python environment and install dependencies like this:
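From the CDK project directory, that looks something like the following (paths assume the default `cdk init` layout; depending on your CDK version the virtualenv directory may be `.venv` or `.env`):

```shell
source .venv/bin/activate
pip install -r requirements.txt
```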
Then, run this command to deploy your stack of resources:
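That command is simply:

```shell
# Deploy the stack; the first deploy into an account/region
# may also require a one-time `cdk bootstrap`
cdk deploy
```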
Once the CDK deploy finishes, let's navigate to ECS (Elastic Container Service) -> Clusters in the AWS Console to check that the cluster we created has one running task and service as in the image below.
Now that we've created the infrastructure, we need to deploy our application container to the infrastructure.
We need a tool to implement our CI/CD pipeline to build and test our app and deploy it to that infrastructure. Today, we'll use GitHub Actions. We could even build and test our infrastructure code and deploy it with GitHub Actions, but we'll save that for another day.
GitHub Actions provide a way to implement complex orchestration and CI/CD functionality directly in GitHub by initiating a workflow on any GitHub event like a push to a branch or a merge to main or even adding a label every time an issue is opened.
The image below shows the parts of a workflow. A workflow runs one or more jobs, each of which runs inside its own runner or container. Each job has a series of steps, and each step runs a specific action or script.
An action can be published on the GitHub Marketplace, either created by GitHub or published by someone else. For example:
- Checkout code: an action created by the GitHub organization, `actions/checkout@v3`
- Configure AWS credentials: an action on the marketplace created by AWS, `aws-actions/configure-aws-credentials@v1`

We can also run a script like `docker build` or `docker push`.
This lets us string multiple actions together to build, test, package up, and deploy our app. Each step depends on prior steps, so if the checkout code step isn't successful, we won't run the unit test step. We can also set each job to depend on the previous job, so if the test job fails, the deploy job won't run and deploy broken code.
We could create our workflows directly from the GitHub UI, using one of the starter workflows in GitHub (in your repository, go to Actions -> New workflow -> Choose a workflow). Today, we're using a customization of the starter workflows for testing a Python app and for deploying to ECS.
Let's cover the steps to create this workflow. We'll be setting up:
- one workflow
- that triggers when there's a push to the main branch
- with two jobs, a test job and a deploy job
- the deploy job will depend on the test job, so if our tests fail, the deploy will not happen
In your Flask application repo, you'll create the `.github/workflows` directory and add the code below to a file there named `test-deploy.yml`:
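An abbreviated sketch of that workflow's shape is below. The region, repository name, Python version, and action versions are assumptions, and the line numbers in the walkthrough that follows refer to the complete file from the sample repo:

```yaml
name: Test and deploy

on:
  push:
    branches: [ main ]

env:
  AWS_REGION: us-east-1                        # assumption: set your region
  ECR_REPOSITORY: ecs-devops-sandbox-repository

permissions:
  contents: read

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.10"
      - run: |
          pip install -r requirements.txt
          pip install pytest flake8
      - run: flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
      - run: python -m pytest app_test.py

  deploy:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      # ... build, tag, and push the image, then render and deploy
      # the task definition with the aws-actions ECS actions
```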
If you're following along with the sample code, you shouldn't have to change anything in the workflow file. Let's review the different parts of the workflow we created:
- Lines 3-6: This tells GitHub to trigger this workflow when there is a push to the main branch
- Lines 8-16: This sets up some environment variables to be used throughout the workflow
- Lines 18-19: This adds read permission to the contents of the repo for all jobs
- Lines 23-45: Configures the test job
- Line 27: Checks out the code
- Lines 28-31: Sets up Python with a specific version
- Lines 32-36: Installs dependencies
- Lines 37-42: Lints the code to check for syntax errors, stopping the build if any are found
- Lines 43-45: Runs the unit tests
- Lines 47-95: Configures the deploy job
- Line 50: Indicates that this job depends on a successful run of the test job
- Lines 54-55: Checks out the code
- Lines 57-62: Uses an external action, `aws-actions/configure-aws-credentials@v1`, to configure our AWS credentials with the environment variables we set earlier and our access key ID and secret access key (that we'll set up in the next step)
- Lines 64-66: Uses an external action, `aws-actions/amazon-ecr-login@v1`, to log in to ECR using the AWS credentials we just configured
- Lines 68-79: Builds, tags, and pushes our container image to ECR
- Line 71: Uses an output from the previous step as the registry to use
- Lines 77-79: Runs the docker commands to build, tag, and push the image
- Lines 81-87: Uses `aws-actions/amazon-ecs-render-task-definition@v1` to update the ECS task definition with the values set in environment variables
- Lines 89-95: Uses `aws-actions/amazon-ecs-deploy-task-definition@v1` to deploy the task definition to the ECS cluster
For more details on how to configure a workflow, check out Creating and managing GitHub Actions workflows.
Before we commit and push all these changes to GitHub, we need to set up our access key ID and secret access key for AWS in our repo. You'll probably want to create an IAM user specific to this task.
In the AWS console, navigate to the IAM service and create a new user with a user name like `github-actions-user`, making sure to give it programmatic access. Then attach the policy below, replacing the placeholder values (<YOUR_AWS_ACCOUNT_ID> and <YOUR_AWS_REGION>) with your AWS account ID and the region you are using:

Once the user is created, note down the `AWS ACCESS KEY ID` and the `AWS SECRET ACCESS KEY` to use in the next step. Treat these like a username and password. If you lose the `AWS SECRET ACCESS KEY`, you'll need to generate a new one. For more details on creating an IAM user, follow these instructions.
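As a starting point, the attached policy might look something like this -- an illustrative sketch, not the exact policy from the sample repo; the role and repository names are assumptions, and you should scope it to your own resources:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ecr:GetAuthorizationToken"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:CompleteLayerUpload",
        "ecr:InitiateLayerUpload",
        "ecr:PutImage",
        "ecr:UploadLayerPart"
      ],
      "Resource": "arn:aws:ecr:<YOUR_AWS_REGION>:<YOUR_AWS_ACCOUNT_ID>:repository/ecs-devops-sandbox-repository"
    },
    {
      "Effect": "Allow",
      "Action": ["ecs:RegisterTaskDefinition", "ecs:DescribeTaskDefinition"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["ecs:UpdateService", "ecs:DescribeServices"],
      "Resource": "arn:aws:ecs:<YOUR_AWS_REGION>:<YOUR_AWS_ACCOUNT_ID>:service/*"
    },
    {
      "Effect": "Allow",
      "Action": ["iam:PassRole"],
      "Resource": "arn:aws:iam::<YOUR_AWS_ACCOUNT_ID>:role/ecs-devops-sandbox-execution-role"
    }
  ]
}
```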
Back in the GitHub UI, navigate to the Settings of your `hello-flask` repo. Then go to Secrets -> Actions in the menu. Select New Repository Secret and add both `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` with their corresponding values from the previous step.

Once that is complete, commit and push your changes in the `hello-flask` repo to GitHub. The workflow will kick off momentarily. Let's go check it out!

The workflow we just created should be running now. It was automatically started because of the trigger we configured earlier -- to start when there are new changes pushed to the main branch of our code repo. Navigate over to the Actions tab in the GitHub UI for `hello-flask` to see that the test job has kicked off, as in the image below. Pretty quickly we can see the tests pass and the Deploy job kicks off, as in the next image.
That will run for a few minutes, but eventually the full workflow will complete successfully, as in the image below.
At this point, our app should be deployed to ECS!
Let's go take a look at our ECS cluster to see if our app was deployed.
Looking at the service, we can see that we have one task running. And we can look at the Events tab to see the history of what's happened.
Above, we can see that one task was stopped, a service deployment completed, and the service reached a ready state.
We have success!
If you're done using the cloud resources we created in this project, you can destroy them now to ensure you are not billed for their use. To do that, navigate back to the `ecs-devops-sandbox-repository` project at the command line and run `cdk destroy`.

In this post, you learned how to use the CDK to provision infrastructure to deploy your Flask app to an ECS cluster. Then you learned how to create a simple CI/CD pipeline with GitHub Actions, setting up a workflow to test and deploy your app to that infrastructure.
Next, you might consider exploring other ways to use containers in the cloud or creating more complex CI/CD pipelines to automate more of your infrastructure and application.
Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.