Deploying GitLab CE on AWS ECS with Fargate: A Step-by-Step Guide
This article focuses on hosting a private GitLab environment on AWS using Amazon ECS with the Fargate deployment type. Fargate enables a serverless container hosting model, presenting unique challenges in integrating various components seamlessly to ensure a fully functional setup.
Published Nov 30, 2024
Last Modified Dec 1, 2024
A privately hosted Source Code Management (SCM) system is crucial for many organizations, and GitLab stands out as an excellent choice. While Amazon ECS with Fargate may not be the ideal solution for hosting GitLab, it offers a valuable learning opportunity to work with serverless containers, especially when paired with EFS volumes for persistent storage. This project pushed the boundaries of containerized deployments, though it's worth noting that running GitLab on the EC2 deployment type would be more straightforward and potentially more performant.
The solution I’ve implemented here, despite the complexity with Fargate, can be adapted for EC2-based deployments as well. Key components like VPC setup, security groups, PostgreSQL on RDS, and Redis clusters on ElastiCache are relevant regardless of whether you deploy on Fargate or EC2.
I like to implement Infrastructure as Code (IaC) and efficient CI/CD pipelines in everything I do, so I used Terraform and GitHub Actions for this project. If you’d like to replicate this, feel free to check out my GitHub repository for more details.
Prerequisites:
- Basic knowledge of AWS services (ECS, ECR, RDS, ElastiCache)
- Familiarity with Docker and Terraform
- An AWS account set up for deployment

The Terraform setup provisions the following components:
- ECS Cluster, Service, and Task Definition
- ECR Container Registry
- VPC, Public Subnets, Private Subnets, Subnet Groups, Route Tables, Internet Gateway
- Necessary Security Groups and IAM roles & policies
- Elastic File System (EFS) Volumes
- ElastiCache Cluster for Redis
- Amazon RDS for PostgreSQL database
- Elastic Load Balancer, listeners, and target groups
- S3 Bucket for backups and required bucket policies
When setting up GitLab, PostgreSQL and Redis are critical components. While it's possible to use the embedded database and caching for smaller-scale setups, I opted for external PostgreSQL and Redis for scalability, high availability, and integration with managed AWS services.
In my setup, all you need to do is go to the terraform directory and run Terraform's init, plan, and apply. Make sure your AWS setup is correct; you can set up the AWS CLI by following this documentation.

Then it's the three main commands:

```shell
terraform init
terraform plan
terraform apply
```
Once your infrastructure is up, you need to make some changes inside the RDS database. If you need external access, you can temporarily attach an internet gateway to your database's private subnet. You can use psql as the client for this next step.
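As a minimal sketch of those database changes, you could connect with psql and prepare the database and extensions GitLab expects for an external PostgreSQL (the endpoint, user, and database name below are placeholders, not values from this setup; your RDS master user needs the rds_superuser role to create extensions):

```sql
-- Connect first from your machine, e.g.:
--   psql -h <your-rds-endpoint> -p 5432 -U gitlab -d postgres

-- Create the production database GitLab will use
CREATE DATABASE gitlabhq_production;

-- Switch into it (psql meta-command)
\c gitlabhq_production

-- Extensions GitLab requires when using an external PostgreSQL
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE EXTENSION IF NOT EXISTS btree_gist;
```

The database name must match whatever you later set in gitlab.rb.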
Once done, you can remove the internet gateway from the subnet's route table. The next step is to configure all the GitLab settings we want, and all we need to do is create one configuration file, i.e. the gitlab.rb file. This file lives inside the docker/config folder in my setup.
For our deployment on AWS ECS with Fargate, we only need to configure a few key settings in the gitlab.rb file, primarily the Redis cluster and RDS database connections. We can leave most other settings at their default values.
Here's a sample gitlab.rb configuration to set up GitLab with an external Redis cluster and RDS database:
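A minimal sketch of such a configuration might look like the following. The setting names are standard Omnibus GitLab options; the hostnames, password, and external URL are placeholders you would replace with your own endpoints (e.g. Terraform outputs):

```ruby
# Placeholder URL -- use your load balancer's DNS name or your own domain
external_url 'http://gitlab.example.com'

# Disable the bundled PostgreSQL and point GitLab at RDS
postgresql['enable'] = false
gitlab_rails['db_adapter'] = 'postgresql'
gitlab_rails['db_encoding'] = 'unicode'
gitlab_rails['db_host'] = 'your-rds-endpoint.rds.amazonaws.com'  # placeholder
gitlab_rails['db_port'] = 5432
gitlab_rails['db_username'] = 'gitlab'
gitlab_rails['db_password'] = 'REPLACE_ME'
gitlab_rails['db_database'] = 'gitlabhq_production'

# Disable the bundled Redis and point GitLab at ElastiCache
redis['enable'] = false
gitlab_rails['redis_host'] = 'your-redis-endpoint.cache.amazonaws.com'  # placeholder
gitlab_rails['redis_port'] = 6379

# Optional: set the initial root password on first boot
gitlab_rails['initial_root_password'] = 'REPLACE_ME'
```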
Now, we leave everything else to the magic that is automation. In my GitHub Actions workflow, I have set up the pipeline to achieve the following:
- Build and Push Docker Image: The workflow builds a Docker image for GitLab CE and pushes it to Amazon Elastic Container Registry (ECR).
- Update ECS Task Definition: It retrieves the current ECS task definition, updates it with the new image tag, and registers the updated task definition.
- Force New Deployment: Finally, it updates the ECS service to deploy the new version of GitLab CE.
Here's the sample, named "Build and Push to ECR".
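A sketch of such a workflow, using the official aws-actions steps, might look like this (the role ARN, region, container, service, and cluster names are placeholders, not values from this repository):

```yaml
name: Build and Push to ECR

on:
  push:
    branches: [main]

permissions:
  id-token: write   # required for OIDC authentication to AWS
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials via OIDC
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-actions-role  # placeholder
          aws-region: us-east-1

      - name: Login to Amazon ECR
        id: ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Build and push the GitLab CE image
        run: |
          IMAGE="${{ steps.ecr.outputs.registry }}/gitlab-ce:${{ github.sha }}"
          docker build -t "$IMAGE" ./docker
          docker push "$IMAGE"

      - name: Update the task definition with the new image tag
        id: taskdef
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: task-definition.json      # placeholder path
          container-name: gitlab                     # placeholder
          image: ${{ steps.ecr.outputs.registry }}/gitlab-ce:${{ github.sha }}

      - name: Register the task definition and force a new deployment
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.taskdef.outputs.task-definition }}
          service: gitlab-service                    # placeholder
          cluster: gitlab-cluster                    # placeholder
          wait-for-service-stability: true
```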
My way of authenticating to AWS from GitHub Actions is an AWS IAM identity provider with OpenID Connect. I suggest this to everyone, as it is very secure and robust: I can configure it so that only my organization, repository, and branch have access to AWS.
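The restriction lives in the IAM role's trust policy. A sketch, with a placeholder account ID and repository, might look like this; the `sub` condition is what pins access to one org, repo, and branch:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
          "token.actions.githubusercontent.com:sub": "repo:my-org/my-repo:ref:refs/heads/main"
        }
      }
    }
  ]
}
```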
The final step is to wait for the ECS task to start running in your ECS cluster.
Our setup includes a CloudWatch log group for the tasks; make sure you look through the logs if you encounter problems.
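For example, you could follow the task logs from your terminal with the AWS CLI (the log group name here is a placeholder; use the one your Terraform setup creates):

```shell
# Stream recent logs from the ECS task's log group
aws logs tail /ecs/gitlab --follow --since 15m
```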
Then you can use your load balancer's DNS name to access your GitLab! Allow some time for it to be fully up and running.
The default user is root, and you can set the initial password in the gitlab.rb file.
Thank you for making it to the end!
This is a costly project, so remember to destroy everything with terraform destroy if you are just trying it out.