Automate your container deployments with CI/CD and GitHub Actions
Learn how to test and deploy a containerized Flask app to the cloud using a CI/CD pipeline built with GitHub Actions.
In this tutorial, we'll build one workflow with two jobs:
- a test job to run unit tests against our Flask app and
- a deploy job to create a container image and deploy that to our container infrastructure in the cloud.
To follow along, you'll need:
- An AWS account. You can create your account here.
- The CDK installed. You can find instructions for installing the CDK here. Note: For the CDK to work, you'll also need the AWS CLI installed and configured, or the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION environment variables set (see the example below). The instructions above show you how to do both.
- Docker Desktop installed. Here are the instructions to install Docker Desktop.
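If you go the environment-variable route, these are the standard variable names the AWS tools read; the values shown here are placeholders:

$ export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
$ export AWS_SECRET_ACCESS_KEY=your-secret-access-key
$ export AWS_DEFAULT_REGION=us-east-1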
First, grab the sample application by cloning the start-here branch:

$ git clone https://github.com/jennapederson/hello-flask -b start-here
In app.py, there is one route that reverses the value of a string passed on the URL path and returns it:
"""Main application file"""
from flask import Flask
app = Flask(__name__)
def returnBackwardsString(random_string):
"""Reverse and return the provided URI"""
return "".join(reversed(random_string))
if __name__ == '__main__':
app.run(host='0.0.0.0', port=8080)
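If you'd like to see the app run before containerizing it, you can start it directly. This assumes Python 3 is available locally; Flask comes in through requirements.txt:

$ pip install -r requirements.txt
$ python app.py

Then visit http://localhost:8080/hello-world in your browser.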
We also have a unit test in the app_test.py file to ensure our business functionality of reversing that string value is working:
"""Unit test file for app.py"""
from app import returnBackwardsString
import unittest
class TestApp(unittest.TestCase):
"""Unit tests defined for app.py"""
def test_return_backwards_string(self):
"""Test return backwards simple string"""
random_string = "This is my test string"
random_string_reversed = "gnirts tset ym si sihT"
self.assertEqual(random_string_reversed, returnBackwardsString(random_string))
if __name__ == "__main__":
unittest.main()
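To run just this test file on its own, either command below works: the file calls unittest.main(), and pytest discovers the same test class:

$ python app_test.py
$ python -m pytest app_test.py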
Next, there's a Dockerfile to set up our container image. This file is a template that gives Docker instructions on how to create our container. The first line, starting with FROM, bases our container on a public Python image; from there, we customize it for our use. We set the working directory (WORKDIR), copy application files into that directory on the container (COPY), install dependencies (RUN pip install), open up port 8080 (EXPOSE), and run the command to start the app (CMD python).
FROM python:3
# Set application working directory
WORKDIR /usr/src/app
# Install requirements
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
# Install application
COPY app.py ./
# Open port app runs on
EXPOSE 8080
# Run application
CMD python app.py
From the hello-flask project directory, run the following command to build the container image:

$ docker build -t hello-flask .

Then run the container:
$ docker run -dp 8080:8080 --name hello-flask-1 hello-flask
Navigate to http://localhost:8080/hello-world and see that it returns dlrow-olleh.
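You can also check from the command line with curl:

$ curl http://localhost:8080/hello-world
dlrow-olleh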
(If you're curious how much disk space your images and containers are using after building, docker system df will show you.) Before we automate anything, let's also make sure the unit tests pass locally:
$ pip install -r requirements.txt
$ pip install pytest
$ pytest
Next, we'll create the task-definition.json file that ECS needs. This is a blueprint for our application: we can compose our app from multiple containers (up to 10), but today we only need one. In the code below, replace YOUR_AWS_ACCOUNT_ID with your own AWS account ID.
{
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "inferenceAccelerators": [],
  "containerDefinitions": [
    {
      "name": "ecs-devops-sandbox",
      "image": "ecs-devops-sandbox-repository:00000",
      "resourceRequirements": null,
      "essential": true,
      "portMappings": [
        {
          "containerPort": 8080,
          "protocol": "tcp"
        }
      ]
    }
  ],
  "volumes": [],
  "networkMode": "awsvpc",
  "memory": "512",
  "cpu": "256",
  "executionRoleArn": "arn:aws:iam::YOUR_AWS_ACCOUNT_ID:role/ecs-devops-sandbox-execution-role",
  "family": "ecs-devops-sandbox-task-definition",
  "taskRoleArn": "",
  "placementConstraints": []
}
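Our pipeline will register this task definition for us later. If you want to sanity-check the file by hand once the execution role exists (we create it with the CDK below), the AWS CLI can register it directly; this step is optional:

$ aws ecs register-task-definition --cli-input-json file://task-definition.json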
Now let's build the container infrastructure with the CDK. Create an empty directory for the CDK project (the module paths below assume it's named ecs-devops-sandbox-cdk), change into it, and initialize a new CDK app:

$ cdk init --language python
Then replace the contents of the generated ecs_devops_sandbox_cdk/ecs_devops_sandbox_cdk_stack.py file with the code below.
import aws_cdk as cdk
import aws_cdk.aws_ecr as ecr
import aws_cdk.aws_ec2 as ec2
import aws_cdk.aws_ecs as ecs
import aws_cdk.aws_iam as iam
# import aws_cdk.aws_ecs_patterns as ecs_patterns

class EcsDevopsSandboxCdkStack(cdk.Stack):

    def __init__(self, scope: cdk.App, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        ecr_repository = ecr.Repository(self,
            "ecs-devops-sandbox-repository",
            repository_name="ecs-devops-sandbox-repository")

        vpc = ec2.Vpc(self,
            "ecs-devops-sandbox-vpc",
            max_azs=3)

        cluster = ecs.Cluster(self,
            "ecs-devops-sandbox-cluster",
            cluster_name="ecs-devops-sandbox-cluster",
            vpc=vpc)

        execution_role = iam.Role(self,
            "ecs-devops-sandbox-execution-role",
            assumed_by=iam.ServicePrincipal("ecs-tasks.amazonaws.com"),
            role_name="ecs-devops-sandbox-execution-role")

        execution_role.add_to_policy(iam.PolicyStatement(
            effect=iam.Effect.ALLOW,
            resources=["*"],
            actions=[
                "ecr:GetAuthorizationToken",
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ]
        ))

        #
        # Option 1: Creates service, container, and task definition without creating a load balancer
        # and other costly resources. Containers will not be publicly accessible.
        #
        task_definition = ecs.FargateTaskDefinition(self,
            "ecs-devops-sandbox-task-definition",
            execution_role=execution_role,
            family="ecs-devops-sandbox-task-definition")

        container = task_definition.add_container(
            "ecs-devops-sandbox",
            image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"),
            logging=ecs.LogDrivers.aws_logs(stream_prefix="ecs-devops-sandbox-container")
        )

        service = ecs.FargateService(self,
            "ecs-devops-sandbox-service",
            cluster=cluster,
            task_definition=task_definition,
            service_name="ecs-devops-sandbox-service")
        # END Option 1

        #
        # Option 2: Creates a load balancer and related AWS resources using the ApplicationLoadBalancedFargateService construct.
        # These resources have non-trivial costs if left provisioned in your AWS account, even if you don't use them. Be sure to
        # clean up (cdk destroy) after working through this exercise.
        #
        # Comment out option 1 and uncomment the code below. Uncomment the aws_cdk.aws_ecs_patterns import at top of file.
        #
        # task_image_options = ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
        #     family="ecs-devops-sandbox-task-definition",
        #     execution_role=execution_role,
        #     image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"),
        #     container_name="ecs-devops-sandbox",
        #     container_port=8080,
        #     log_driver=ecs.LogDrivers.aws_logs(stream_prefix="ecs-devops-sandbox-container")
        # )
        #
        # ecs_patterns.ApplicationLoadBalancedFargateService(self, "ecs-devops-sandbox-service",
        #     cluster=cluster,
        #     service_name="ecs-devops-sandbox-service",
        #     desired_count=2,
        #     task_image_options=task_image_options,
        #     public_load_balancer=True
        # )
        #
        # END Option 2
Let's walk through the interesting parts. First, we create the VPC our resources will live in, along with the task execution role that allows ECS to pull our container image from ECR and write logs:
vpc = ec2.Vpc(self,
    "ecs-devops-sandbox-vpc",
    max_azs=3)

execution_role = iam.Role(self,
    "ecs-devops-sandbox-execution-role",
    assumed_by=iam.ServicePrincipal("ecs-tasks.amazonaws.com"),
    role_name="ecs-devops-sandbox-execution-role")

execution_role.add_to_policy(iam.PolicyStatement(
    effect=iam.Effect.ALLOW,
    resources=["*"],
    actions=[
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
    ]
))
Next, the ECR repository that stores our container images:
ecr_repository = ecr.Repository(self,
    "ecs-devops-sandbox-repository",
    repository_name="ecs-devops-sandbox-repository")
Then, the ECS cluster running in our VPC:
cluster = ecs.Cluster(self,
    "ecs-devops-sandbox-cluster",
    cluster_name="ecs-devops-sandbox-cluster",
    vpc=vpc)
Next, the Fargate task definition and its container definition. Notice that the container points at a public sample image (amazon/amazon-ecs-sample) for now; our pipeline will swap in our own image on each deploy:
task_definition = ecs.FargateTaskDefinition(self,
    "ecs-devops-sandbox-task-definition",
    execution_role=execution_role,
    family="ecs-devops-sandbox-task-definition")

container = task_definition.add_container(
    "ecs-devops-sandbox",
    image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"),
    logging=ecs.LogDrivers.aws_logs(stream_prefix="ecs-devops-sandbox-container")
)
And finally, the Fargate service that runs our task definition on the cluster:
service = ecs.FargateService(self,
    "ecs-devops-sandbox-service",
    cluster=cluster,
    task_definition=task_definition,
    service_name="ecs-devops-sandbox-service")
That's Option 1, which keeps costs down but leaves the containers without public access. If you want your app reachable from the internet, use Option 2 in the commented-out code instead, which creates a load balancer and related resources using the ApplicationLoadBalancedFargateService construct. Both of these options create resources with non-trivial costs if left provisioned in your account, even if you don't use them. Be sure to clean up your resources (cdk destroy) after working through this exercise.

Before deploying, activate the project's virtual environment and install the dependencies:
$ source .venv/bin/activate
$ pip install -r requirements.txt
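Note: if this is your first time deploying with the CDK in this account and region, you may need to bootstrap it first. You can also preview the CloudFormation template the CDK generates before deploying:

$ cdk bootstrap
$ cdk synth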
Then deploy the stack:

$ cdk deploy
Our workflow will lean on a few existing actions:
- Checkout code: actions/checkout@v3, an action created by the GitHub organization
- Configure aws credentials: aws-actions/configure-aws-credentials@v1, an action on the marketplace created by AWS
Anything we can run at the command line, we can also run in a workflow step, such as docker build or docker push.
Here's what we'll create:
- one workflow
- that triggers when there's a push to the main branch
- with two jobs, a test job and a deploy job
- where the deploy job depends on the test job, so if our tests fail, the deploy will not happen (see the sketch below)
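Here's a minimal sketch of that shape; the echo steps are placeholders for the real steps in the full file below:

name: Test and Deploy
on:
  push:
    branches:
      - main
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - run: echo "run unit tests here"
  deploy:
    runs-on: ubuntu-latest
    needs: [test]   # deploy runs only if the test job succeeds
    steps:
      - run: echo "build image and deploy to ECS here"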
Create a .github/workflows directory at the root of the project and add the code below to a new file in that directory named test-deploy.yml.
name: Test and Deploy

on:
  push:
    branches:
      - main

env:
  AWS_REGION: us-east-1                         # set this to your preferred AWS region, e.g. us-west-1
  ECR_REPOSITORY: ecs-devops-sandbox-repository # set this to your Amazon ECR repository name
  ECS_SERVICE: ecs-devops-sandbox-service       # set this to your Amazon ECS service name
  ECS_CLUSTER: ecs-devops-sandbox-cluster       # set this to your Amazon ECS cluster name
  ECS_TASK_DEFINITION: task-definition.json     # set this to the path to your Amazon ECS task definition
                                                # file, e.g. .aws/task-definition.json
  CONTAINER_NAME: ecs-devops-sandbox            # set this to the name of the container in the
                                                # containerDefinitions section of your task definition

permissions:
  contents: read

jobs:

  test:
    name: Test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python 3.10
        uses: actions/setup-python@v3
        with:
          python-version: "3.10"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install flake8 pytest
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      - name: Lint with flake8
        run: |
          # stop the build if there are Python syntax errors or undefined names
          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
          # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
      - name: Test with pytest
        run: |
          pytest

  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    needs: [test]
    environment: production

    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Build, tag, and push image to Amazon ECR
        id: build-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          # Build a docker container and
          # push it to ECR so that it can
          # be deployed to ECS.
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> $GITHUB_OUTPUT

      - name: Fill in the new image ID in the Amazon ECS task definition
        id: task-def
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: ${{ env.ECS_TASK_DEFINITION }}
          container-name: ${{ env.CONTAINER_NAME }}
          image: ${{ steps.build-image.outputs.image }}

      - name: Deploy Amazon ECS task definition
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.task-def.outputs.task-definition }}
          service: ${{ env.ECS_SERVICE }}
          cluster: ${{ env.ECS_CLUSTER }}
          wait-for-service-stability: true
Let's walk through what this workflow does:
- Lines 3-6: Tells GitHub to trigger this workflow when there is a push to the main branch
- Lines 8-16: Sets up environment variables to be used throughout the workflow
- Lines 18-19: Adds read permission to the contents of the repo for all jobs
- Lines 23-45: Configures the test job
- Line 27: Checks out the code
- Lines 28-31: Sets up Python with a specific version
- Lines 32-36: Installs dependencies
- Lines 37-42: Lints the code to check for syntax errors and undefined names, stopping the build if any are found
- Lines 43-45: Runs the unit tests
- Lines 47-95: Configures the deploy job
- Line 50: Indicates that this job depends on a successful run of the test job
- Lines 54-55: Checks out the code
- Lines 57-62: Uses an external action, aws-actions/configure-aws-credentials@v1, to configure our AWS credentials with the region we set earlier plus the access key ID and secret access key we'll set up in the next step
- Lines 64-66: Uses an external action, aws-actions/amazon-ecr-login@v1, to log in to ECR with the AWS credentials we just configured
- Lines 68-79: Builds, tags, and pushes our container image to ECR
- Line 71: Uses an output from the previous step as the registry to push to
- Lines 77-79: Runs the docker commands to build, tag, and push the image, then records the image URI as a step output
- Lines 81-87: Uses aws-actions/amazon-ecs-render-task-definition@v1 to fill the new image into the ECS task definition
- Lines 89-95: Uses aws-actions/amazon-ecs-deploy-task-definition@v1 to deploy the task definition to the ECS cluster
Next, we need AWS credentials for the workflow. In the AWS console, create a new IAM user named github-actions-user, making sure to give it programmatic access. Then attach the policy below, replacing the placeholder values (<YOUR_AWS_ACCOUNT_ID> and <YOUR_AWS_REGION>) with your AWS account ID and the region you are using:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecs:RegisterTaskDefinition",
        "sts:GetCallerIdentity"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:CompleteLayerUpload",
        "ecr:InitiateLayerUpload",
        "ecr:PutImage",
        "ecr:UploadLayerPart"
      ],
      "Resource": "arn:aws:ecr:<YOUR_AWS_REGION>:<YOUR_AWS_ACCOUNT_ID>:repository/ecs-devops-sandbox-repository"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecs:DescribeServices",
        "ecs:UpdateService"
      ],
      "Resource": [
        "arn:aws:ecs:<YOUR_AWS_REGION>:<YOUR_AWS_ACCOUNT_ID>:service/default/ecs-devops-sandbox-service",
        "arn:aws:ecs:<YOUR_AWS_REGION>:<YOUR_AWS_ACCOUNT_ID>:service/ecs-devops-sandbox-cluster/ecs-devops-sandbox-service"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "iam:PassRole"
      ],
      "Resource": "arn:aws:iam::<YOUR_AWS_ACCOUNT_ID>:role/ecs-devops-sandbox-execution-role"
    }
  ]
}
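If you prefer the CLI to the console, here's a sketch of the same setup. It assumes you've saved the policy above to a file named github-actions-policy.json (a name chosen just for this example):

$ aws iam create-user --user-name github-actions-user
$ aws iam put-user-policy --user-name github-actions-user \
    --policy-name github-actions-policy \
    --policy-document file://github-actions-policy.json
$ aws iam create-access-key --user-name github-actions-user

The last command prints the access key ID and secret access key you'll need below.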
Once the user is created, you'll get an AWS access key ID and an AWS secret access key to use in the next step. Treat these like a username and password. If you lose the AWS secret access key, you'll need to generate a new one.

Next, go to your hello-flask repo on GitHub and open Settings -> Secrets -> Actions in the menu. Select New Repository Secret to create a new secret. Add both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with their corresponding values from the previous step.
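If you use the GitHub CLI, you can add the same secrets from your terminal instead; each command prompts you to paste the value:

$ gh secret set AWS_ACCESS_KEY_ID
$ gh secret set AWS_SECRET_ACCESS_KEY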
Now, commit and push the hello-flask repo to GitHub. The workflow will kick off momentarily.
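For example, assuming you've created an empty hello-flask repository under your own GitHub account and want your work on a main branch (adjust the URL and branch names to your setup):

$ git remote set-url origin https://github.com/YOUR_GITHUB_USERNAME/hello-flask.git
$ git checkout -b main
$ git add .
$ git commit -m "Add test and deploy workflow"
$ git push -u origin main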
Let's go check it out! Go to the Actions tab of your hello-flask repo to see that the test job has kicked off, as in the image below.

When you're done with this exercise, remember to clean up so you don't keep paying for the infrastructure. Go to the ecs-devops-sandbox-cdk project at the command line and run:
$ cdk destroy
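Note: the CDK retains some stateful resources by default; in particular, the ECR repository (and the images in it) may be left behind so you don't lose images accidentally. If you want it gone too, you can delete it explicitly:

$ aws ecr delete-repository --repository-name ecs-devops-sandbox-repository --force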