Deploy a Dockerized Flask Application to AWS ECS using CircleCI and Terraform

A tutorial on deploying a Flask application to AWS ECS, integrated with CircleCI for CI/CD pipeline automation

Published Oct 10, 2024
In today’s software development landscape, automating deployments and managing infrastructure as code are essential skills for developers and DevOps engineers. This tutorial will walk you through setting up a CI/CD (Continuous Integration/Continuous Deployment) pipeline to automate the process of building and deploying a simple web application. We’ll be using Docker to containerize a basic Flask application (a lightweight web framework for Python), CircleCI to automate the building and testing of our code, and Terraform to manage the infrastructure setup on AWS (Amazon Web Services).
Our goal is to deploy the Flask application to ECS (Elastic Container Service), a managed container orchestration service provided by AWS that makes it easy to run, stop, and manage containers. By the end of this guide, you’ll understand how these tools work together to create a fully automated, scalable, and resilient deployment pipeline. Whether you’re a beginner or have some experience with cloud services, this tutorial will help you get hands-on experience with modern DevOps practices.

Prerequisites

  1. AWS Account with permissions to manage ECS, ECR, IAM, and related services.
  2. CircleCI account, linked to your GitHub or Bitbucket repository.
  3. Terraform installed locally.
  4. Docker installed locally.
  5. Flask installed locally.
Step 1: Create a Simple Flask Application
We’re going to kick things off by getting our directory structure properly configured, since we want to ensure that we have ALL of the necessary files in place. Below, you’ll find the files that I’ve used for this deployment (your deployment files should be similar):
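Here’s an example of how the project might be laid out (the root folder name is just a placeholder; yours may differ):

```
flask-ecs-app/
├── .circleci/
│   └── config.yml
├── app.py
├── .env
├── requirements.txt
├── Dockerfile
├── main.tf
├── ecs.tf
└── variables.tf
```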
We’ll start by adding our Python files first. Feel free to use whatever code you’d prefer for app.py, but here’s a simple snippet of what that may consist of. Note: You may want to ensure that Flask is installed locally or within your virtual environment, or you will most likely see a ‘problem’ indicated in your IDE, such as what we have below:
app.py
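Here’s a minimal sketch of what app.py might look like. The route, the message, and the PORT environment variable are placeholders I’ve chosen for illustration; any simple Flask app will do:

```python
# app.py - a minimal sketch; the route, message, and PORT variable are placeholders
import os

from dotenv import load_dotenv
from flask import Flask

load_dotenv()  # load values from the .env file into the environment

app = Flask(__name__)


@app.route("/")
def home():
    return "Hello from Flask on AWS ECS!"


if __name__ == "__main__":
    # Bind to all interfaces so the app is reachable from inside a container
    app.run(host="0.0.0.0", port=int(os.getenv("PORT", "5000")))
```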
Simply install Flask if you see this ‘problem’
It’s important to note that whenever you’re working with “sensitive” values, you NEVER want to hard-code these config details directly within your code base. It’s a horrible security practice, flat out. Instead, you’ll want to use a file for your environment variables, such as the one that I have below (using a Python virtual environment for this deployment is entirely optional):
.env
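Here’s an illustrative example with placeholder values only (the variable names are assumptions for this sketch; never commit real secrets to your repo):

```
# .env - placeholder values only; keep this file out of version control
PORT=5000
FLASK_ENV=development
SECRET_KEY=replace-me-with-a-real-secret
```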
With any Python application, it’s also normal to have a requirements file, as this helps with dependency management (though if your project is very small, you may not strictly need one). Nevertheless, below you’ll find our super-duper simple requirements file:
requirements.txt
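For the sketch above, the only dependencies needed are Flask itself and python-dotenv (the latter only if you’re loading the .env file as shown):

```
flask
python-dotenv
```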
If you’d like, you can run a quick test against the simple app locally by executing the file and visiting the webpage in your browser at http://localhost:<port>:
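A quick sketch of that local test, assuming the default Flask port of 5000 from the example above:

```bash
# Install the dependencies (ideally inside a virtual environment) and run the app
pip install -r requirements.txt
python app.py
# Then visit http://localhost:5000 in your browser
```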
Step 2: Dockerize the Flask Application
The Dockerfile is a crucial part of this deployment because it defines how the application is containerized. In the context of deploying a Flask application to AWS ECS using CircleCI and Terraform, here’s why the Dockerfile is important (a sample Dockerfile follows this list):
  • The Dockerfile specifies the environment in which the Flask application will run. It starts with a base image, such as python:3.9-slim, and installs all the necessary dependencies (like Flask) for the application to work. This ensures consistency across different environments (local, testing, staging, and production), as the container will always run the same code in the same environment.
  • It handles the installation of application dependencies, such as the Python packages listed in the requirements.txt file. This allows you to package everything needed for the application to run within the container, avoiding the problem of "it works on my machine" but fails elsewhere.
  • Since Docker containers can run on any platform that supports Docker, a Dockerfile makes your application portable and reproducible. It ensures that the application behaves the same way regardless of where it is deployed, whether on a developer’s laptop, a CI/CD server (like CircleCI), or a cloud environment (like AWS ECS).
  • By keeping the Dockerfile in version control along with the application code, changes to the deployment environment (like updates to dependencies or base images) are tracked. This helps maintain a history of changes and allows you to roll back to previous configurations if needed.
  • In our CI/CD pipeline, the Dockerfile serves as the blueprint for building the Docker image. CircleCI uses the Dockerfile to create an image of the Flask application, which is then pushed to a container registry (ECR). The same image is then pulled by the ECS service for deployment. This automation speeds up development workflows and minimizes manual configuration.
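With that context in mind, here’s a minimal sketch of what the Dockerfile might look like, assuming the python:3.9-slim base image mentioned above and the port from our earlier example:

```dockerfile
# Start from a slim Python base image for a smaller footprint
FROM python:3.9-slim

WORKDIR /app

# Install dependencies first so Docker can cache this layer between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code
COPY . .

# The port is an assumption based on the earlier example
EXPOSE 5000

CMD ["python", "app.py"]
```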
Step 3: Use Terraform to Set Up AWS Infrastructure
In this step, we’re going to use Terraform to create the necessary AWS resources: an ECS cluster, ECR repository, IAM roles, and networking configurations. Remember, you SHOULD NOT be hard-coding any of your environment variables within the TF configuration directly. It’s important to also be mindful that since we will be creating an ECR repo, KMS Key, and ECS Cluster & Service, you MUST ensure that you have the appropriate permissions to create these resources or you will not be able to apply the Terraform configuration locally. We’ll start by creating our main.tf file:
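Here’s a sketch of what main.tf might contain: the AWS provider, plus the ECR repo and its KMS key. The resource names and settings are illustrative assumptions; adjust them to your own setup:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

# KMS key used to encrypt the ECR repository at rest
resource "aws_kms_key" "flask_app" {
  description             = "KMS key for the Flask app ECR repo"
  deletion_window_in_days = 7
}

resource "aws_ecr_repository" "flask_app" {
  name = var.ecr_repo_name

  encryption_configuration {
    encryption_type = "KMS"
    kms_key         = aws_kms_key.flask_app.arn
  }
}
```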
Next, we’ll dive into our ecs.tf config file. One thing to note: if you have previously used ECR images in a task definition, you may already have an existing task execution IAM role, and your Terraform apply will fail if it tries to create a role that already exists. You have the option of referencing the existing role (if you have one), or creating a net-new role, such as what I have done below:
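Here’s a sketch of what ecs.tf might look like, including the net-new execution role. I’m assuming Fargate, a single container listening on port 5000 (matching the earlier examples), and subnets/security groups passed in as variables; your values will differ:

```hcl
resource "aws_ecs_cluster" "flask_cluster" {
  name = "flask-ecs-cluster"
}

# Net-new execution role so ECS can pull images from ECR and write logs
resource "aws_iam_role" "ecs_execution_role" {
  name = "flask-ecs-execution-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ecs-tasks.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "ecs_execution_policy" {
  role       = aws_iam_role.ecs_execution_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}

resource "aws_ecs_task_definition" "flask_task" {
  family                   = "flask-app"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"
  execution_role_arn       = aws_iam_role.ecs_execution_role.arn

  container_definitions = jsonencode([{
    name      = "flask-container"
    image     = "${aws_ecr_repository.flask_app.repository_url}:latest"
    essential = true
    portMappings = [{
      containerPort = 5000
      protocol      = "tcp"
    }]
  }])
}

resource "aws_ecs_service" "flask_service" {
  name            = "flask-service"
  cluster         = aws_ecs_cluster.flask_cluster.id
  task_definition = aws_ecs_task_definition.flask_task.arn
  desired_count   = 1
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = var.subnet_ids
    security_groups  = [var.security_group_id]
    assign_public_ip = true
  }
}
```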
And finally, we’ll wrap things up with our last file, for our Terraform variables… NO, this is not the same file that we created earlier for Python. This is a totally separate and different configuration type, specific to Terraform:
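And a matching sketch for variables.tf (the defaults are placeholders; fill in values appropriate for your own account):

```hcl
variable "aws_region" {
  description = "AWS region to deploy into"
  type        = string
  default     = "us-east-1"
}

variable "ecr_repo_name" {
  description = "Name of the ECR repository"
  type        = string
  default     = "flask-app"
}

variable "subnet_ids" {
  description = "Subnets for the ECS service to run in"
  type        = list(string)
}

variable "security_group_id" {
  description = "Security group attached to the ECS tasks"
  type        = string
}
```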
So, what are some of the key benefits of using Terraform for our deployment?
  • It enables the automatic creation, updating, and deletion of infrastructure, making deployments more efficient and reliable.
  • If something goes wrong, you can roll back changes to a previous version, just like you would with application code.
  • You can easily create and tear down test environments, which is especially useful for continuous integration/continuous deployment (CI/CD) workflows.
  • In this deployment, you can use CircleCI to trigger Terraform to provision resources or update configurations whenever changes are made to the infrastructure code. This results in a fully automated pipeline where changes to code and infrastructure are deployed together.
Alright, I’m sure that by this point you’re anticipating some action, so let’s jump into the Terraform magic and run the following commands. You should have at least eight (8) resources provisioned successfully:
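The standard workflow looks like this:

```bash
terraform init    # download the AWS provider and set up the working directory
terraform plan    # preview the resources that will be created
terraform apply   # provision the infrastructure (type 'yes' to confirm)
```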
Terraform Plan
Terraform Apply
Step 4: Configure CircleCI for your CI/CD Pipeline
With our AWS resources now deployed and our sample Python application tested locally for verification purposes, we must now configure a pipeline to manage our deployment. You’ll need to create a .circleci directory and add a config.yml file within it. Without this file, your pipeline will NOT run, and you’ll miss out on all of the fun that’s to come!
Be sure that you have pushed ALL of your files to your designated repo at this time (the config.yml file will be needed to trigger the pipeline from your repo directly). This ‘push’ will actually trigger your workflow directly (which may fail initially), but you’ll have the ability to update the settings in a later step and re-trigger the workflow for a successful run:
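Here’s a sketch of what the config.yml might look like. The environment variable names (AWS_REGION, AWS_ECR_REGISTRY, AWS_ECR_REPO, ECS_CLUSTER, ECS_SERVICE) are assumptions for this example; AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are picked up automatically by the AWS CLI once you add them in CircleCI:

```yaml
version: 2.1

executors:
  docker-executor:
    docker:
      - image: circleci/python:3.9

jobs:
  build:
    executor: docker-executor
    steps:
      - checkout
      - setup_remote_docker  # enables docker build/push inside the job
      - run:
          name: Install AWS CLI
          command: sudo pip install awscli
      - run:
          name: Log in to Amazon ECR
          command: |
            aws ecr get-login-password --region $AWS_REGION | \
              docker login --username AWS --password-stdin $AWS_ECR_REGISTRY
      - run:
          name: Build, tag, and push the Docker image
          command: |
            docker build -t $AWS_ECR_REPO:latest .
            docker tag $AWS_ECR_REPO:latest $AWS_ECR_REGISTRY/$AWS_ECR_REPO:latest
            docker push $AWS_ECR_REGISTRY/$AWS_ECR_REPO:latest

  deploy:
    executor: docker-executor
    steps:
      - run:
          name: Install AWS CLI
          command: sudo pip install awscli
      - run:
          name: Force a new ECS deployment
          command: |
            aws ecs update-service \
              --cluster $ECS_CLUSTER \
              --service $ECS_SERVICE \
              --force-new-deployment \
              --region $AWS_REGION

workflows:
  build_and_deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
```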
Let’s break this config.yml file down a bit to understand exactly what we’re doing here:
Defines a Docker-based executor (docker-executor):
  • Uses a Docker image (circleci/python:3.9) to run the jobs, providing a Python environment where the commands will execute.
Build Job:
  • Checks out the code from the repository.
  • Sets up a remote Docker environment to enable building Docker images within the CircleCI environment.
  • Installs the AWS CLI to allow interaction with AWS services.
  • Logs in to Amazon ECR using the AWS CLI, enabling the pushing of Docker images to the registry.
  • Builds a Docker image for the Flask application using the provided Dockerfile.
  • Tags the Docker image with the repository and image tag.
  • Pushes the Docker image to ECR, making it available for deployment.
Deploy Job:
  • Installs the AWS CLI to perform AWS-specific operations.
  • Updates the ECS service to use the newly pushed Docker image, forcing a new deployment to pull the latest version of the image from ECR.
Workflows Section:
  • Defines a workflow named build_and_deploy that runs both the build and deploy jobs.
  • Ensures that the deploy job only runs after the build job completes successfully, creating a sequential pipeline.
Step 5: Updating the CircleCI Settings & Executing the Pipeline
With all of our code now pushed to our desired repo, we must ensure that we get things configured correctly within the CircleCI console. The assumption is that you have signed up for an account and integrated your desired GitHub/Bitbucket repo accordingly.
With your repo integrated, you must first ensure that you have set up your project, following the guidance in the console:
Be sure that you select the existing config file that you just pushed, or if you dare to be dangerous, create a net-new config.yml file:
**Once you’ve selected the correct config.yml file, DO NOT panic if you see a pipeline trigger automatically**
Navigate to > Project Settings > Environment Variables > Enter your environment variables that you created earlier, along with the AWS Access/Secret Access Key:
After you’ve successfully added all required env variables, navigate back to > Projects > Select Your Project > Trigger Pipeline:
From here, you can sit back and watch the various phases of the pipeline execute (super neat!!), all of which should be successful if you have the CORRECT environment variables in place. We’ll start by showing some of the Build Job run details:
Build Job
Next, let’s check out what was done for our Deploy Job:
Deploy Job
ECS Service Updated
End-to-End Success!
GitHub Repo Pipeline Confirmation
Wrapping Things Up
I know you may be wondering whether we actually have AWS resources that were created via Terraform, and whether the pipeline actually updated the ECS Service… well, if you’ve followed along with my tutorial and your configuration is similar to the example code I’ve provided, you’ll see that all of the resources were in fact created successfully. We can verify that we have an active cluster, container, active deployment, and so forth:
Flask ECS Cluster
Flask Container
ECR Repo
Flask Service
Dance Break: Did you all see that it ONLY took 48 seconds for this pipeline to run both jobs, end-to-end? Now do you see why pipelines are everythiinnnngggg??!! Phew, it doesn’t get any better than that! Oh, and don’t forget to remove all AWS resources via terraform destroy!
Thanks for stopping by for the read! Follow me on LinkedIn if you’d like to see more interesting content like this! https://www.linkedin.com/in/katoria-henry-2018/
 
