Extend Amazon ECS across two Outposts racks
This guide walks through integrating Amazon ECS with AWS Outposts. Follow the script step by step for a complete setup. Before you begin, make sure the following prerequisites are in place:
- Two logical Outposts: Ensure that two logical AWS Outposts are properly installed and configured in your on-premises environment.
- Adequate access rights: Necessary permissions should be in place for setting up and managing all configurations and security settings in this guide.
- Ability to run script-based commands: The configurations in this guide are meant to be run as scripts rather than pasted into the CLI one command at a time. It's recommended to run the entire walkthrough as a single script in an environment capable of running .sh files, provided it has the necessary permissions for successful execution. Note: This script has been tested on macOS and can be adjusted to suit different environments or requirements.
Disclaimer: Before executing the end-to-end script, please ensure that you carefully review and validate that the script aligns with your organization's security and corporate policies.
- Parameters: Begin by setting the necessary variables for your AWS Region, VPC, subnets, and Outposts configuration. These variables establish the framework for all subsequent commands.
echo Set global variables
AWS_REGION=eu-central-1
VPC_CIDR="10.0.0.0/16"
PUBLIC_SUBNET_CIDR="10.0.1.0/24"
PRIVATE_SUBNET_CIDR="10.0.2.0/24"
PUBLIC_SUBNET_CIDR2="10.0.3.0/24"
PRIVATE_SUBNET_CIDR2="10.0.4.0/24"
OUTPOST_ARN="arn:aws:outposts:eu-central-1:XXXXXX:outpost/op-019XXXXXX00f"
OUTPOST_AZ1="eu-central-1a"
OUTPOST_ARN2="arn:aws:outposts:eu-central-1:XXXXXX:outpost/op-019XXXXXX00a"
OUTPOST_AZ2="eu-central-1b"
OUTPOST_EC2_TYPE="m5.xlarge"
LAB_NAME=my-ecs-op
EC2_TAG_ESPECIFICATIONS='{Key=Environment,Value="Lab"},{Key=Owner,Value="xxx@emailxxx.com"}'
ECS_TAGS="key=Environment,value=Lab key=Owner,value=xxx@emailxxx.com"
AS_TAGS="Key=Environment,Value=Lab Key=Owner,Value=xxx@emailxxx.com"
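Optionally, you can confirm that the Outpost ARNs and Availability Zones you set above match what is installed in your account before continuing:
echo Optional check - list the Outposts visible to this account
aws outposts list-outposts --query 'Outposts[].{Name:Name,OutpostArn:OutpostArn,AvailabilityZone:AvailabilityZone}' --output table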
- VPC and subnet configuration: Create a VPC and multiple subnets for public and private access, ensuring the subnets are linked to your Outposts for local processing capabilities.
echo Create VPC
VPC_ID=$(aws ec2 create-vpc --cidr-block $VPC_CIDR --region $AWS_REGION --tag-specifications ResourceType=vpc,Tags=[$EC2_TAG_ESPECIFICATIONS,\{Key=Name,Value=\"$LAB_NAME-vpc\"\}] --query 'Vpc.VpcId' --output text)
echo Create public subnet on AZ1
PUBLIC_SUBNET1_ID=$(aws ec2 create-subnet --vpc-id $VPC_ID --cidr-block $PUBLIC_SUBNET_CIDR --availability-zone $OUTPOST_AZ1 --outpost-arn $OUTPOST_ARN --tag-specifications ResourceType=subnet,Tags=[$EC2_TAG_ESPECIFICATIONS,\{Key=Name,Value=\"$LAB_NAME-public-subnet1\"\}] --query 'Subnet.SubnetId' --output text)
echo Create private subnet AZ1
PRIVATE_SUBNET1_ID=$(aws ec2 create-subnet --vpc-id $VPC_ID --cidr-block $PRIVATE_SUBNET_CIDR --availability-zone $OUTPOST_AZ1 --outpost-arn $OUTPOST_ARN --tag-specifications ResourceType=subnet,Tags=[$EC2_TAG_ESPECIFICATIONS,\{Key=Name,Value=\"$LAB_NAME-private-subnet1\"\}] --query 'Subnet.SubnetId' --output text)
echo Create public subnet on AZ2
PUBLIC_SUBNET2_ID=$(aws ec2 create-subnet --vpc-id $VPC_ID --cidr-block $PUBLIC_SUBNET_CIDR2 --availability-zone $OUTPOST_AZ2 --outpost-arn $OUTPOST_ARN2 --tag-specifications ResourceType=subnet,Tags=[$EC2_TAG_ESPECIFICATIONS,\{Key=Name,Value=\"$LAB_NAME-public-subnet2\"\}] --query 'Subnet.SubnetId' --output text)
echo Create private subnet AZ2
PRIVATE_SUBNET2_ID=$(aws ec2 create-subnet --vpc-id $VPC_ID --cidr-block $PRIVATE_SUBNET_CIDR2 --availability-zone $OUTPOST_AZ2 --outpost-arn $OUTPOST_ARN2 --tag-specifications ResourceType=subnet,Tags=[$EC2_TAG_ESPECIFICATIONS,\{Key=Name,Value=\"$LAB_NAME-private-subnet2\"\}] --query 'Subnet.SubnetId' --output text)
echo Create Internet Gateway
IGW_ID=$(aws ec2 create-internet-gateway --tag-specifications ResourceType=internet-gateway,Tags=[$EC2_TAG_ESPECIFICATIONS,\{Key=Name,Value=\"$LAB_NAME-igw\"\}] --query 'InternetGateway.InternetGatewayId' --output text)
echo Attach Internet Gateway to VPC
aws ec2 attach-internet-gateway --vpc-id $VPC_ID --internet-gateway-id $IGW_ID
echo Create route table for public subnet AZ1
PUBLIC_RT_ID=$(aws ec2 create-route-table --vpc-id $VPC_ID --tag-specifications ResourceType=route-table,Tags=[$EC2_TAG_ESPECIFICATIONS,\{Key=Name,Value=\"$LAB_NAME-public-rt\"\}] --query 'RouteTable.RouteTableId' --output text)
echo Create route to Internet Gateway for public subnet AZ1
aws ec2 create-route --route-table-id $PUBLIC_RT_ID --destination-cidr-block 0.0.0.0/0 --gateway-id $IGW_ID --query 'Return' --output text
echo Associate public subnet AZ1 with public route table
aws ec2 associate-route-table --subnet-id $PUBLIC_SUBNET1_ID --route-table-id $PUBLIC_RT_ID --query 'AssociationId' --output text
aws ec2 associate-route-table --subnet-id $PUBLIC_SUBNET2_ID --route-table-id $PUBLIC_RT_ID --query 'AssociationId' --output text
echo Create NAT Gateway for private subnet AZ1
ALLOCATION_ID=$(aws ec2 allocate-address --domain vpc --tag-specifications ResourceType=elastic-ip,Tags=[$EC2_TAG_ESPECIFICATIONS,\{Key=Name,Value=\"$LAB_NAME-natgw-eip\"\}] --query 'AllocationId' --output text)
NAT_GW_ID=$(aws ec2 create-nat-gateway --subnet-id $PUBLIC_SUBNET1_ID --allocation-id $ALLOCATION_ID --tag-specifications ResourceType=natgateway,Tags=[$EC2_TAG_ESPECIFICATIONS,\{Key=Name,Value=\"$LAB_NAME-natgw\"\}] --query 'NatGateway.NatGatewayId' --output text)
echo "Waiting for NAT Gateway to become available..."
while true; do
STATUS=$(aws ec2 describe-nat-gateways --nat-gateway-ids $NAT_GW_ID --query 'NatGateways[0].State' --output text)
if [[ "$STATUS" == "available" ]]; then
echo "NAT Gateway is available."
break
elif [[ "$STATUS" == "failed" ]]; then
echo "Failed to create NAT Gateway."
exit 1
else
echo "Current status: $STATUS. Waiting..."
sleep 30
fi
done
echo Create route table for private subnet AZ1
PRIVATE_RT_ID=$(aws ec2 create-route-table --vpc-id $VPC_ID --tag-specifications ResourceType=route-table,Tags=[$EC2_TAG_ESPECIFICATIONS,\{Key=Name,Value=\"$LAB_NAME-private-rt\"\}] --query 'RouteTable.RouteTableId' --output text)
echo Create route to NAT Gateway for private subnet AZ1
aws ec2 create-route --route-table-id $PRIVATE_RT_ID --destination-cidr-block 0.0.0.0/0 --nat-gateway-id $NAT_GW_ID --query 'Return' --output text
echo Associate private subnet AZ1 with private route table
aws ec2 associate-route-table --subnet-id $PRIVATE_SUBNET1_ID --route-table-id $PRIVATE_RT_ID --query 'AssociationId' --output text
aws ec2 associate-route-table --subnet-id $PRIVATE_SUBNET2_ID --route-table-id $PRIVATE_RT_ID --query 'AssociationId' --output text
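As a quick sanity check, you can list the subnets in the new VPC and confirm that each one carries the expected Outpost ARN and Availability Zone (an optional verification step):
echo Optional check - confirm the subnets are anchored to the Outposts
aws ec2 describe-subnets --filters Name=vpc-id,Values=$VPC_ID --query 'Subnets[].{SubnetId:SubnetId,AZ:AvailabilityZone,CIDR:CidrBlock,OutpostArn:OutpostArn}' --output table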
- Security and role configuration: Establish security groups for your ECS instances and define IAM roles necessary for ECS operations.
echo Create Security Group for the EC2 Container Instances
EC2_SG_ID=$(aws ec2 create-security-group --group-name "$LAB_NAME-ec2-sg" --description "Security group for ECS Container Instances (EC2)" --vpc-id $VPC_ID --query 'GroupId' --output text)
echo Add inbound rules for SSH 22 from VPC CIDR
aws ec2 authorize-security-group-ingress --group-id $EC2_SG_ID --protocol tcp --port 22 --cidr $VPC_CIDR --query 'Return' --output text
echo Add inbound rules for all traffic coming from the same SG
aws ec2 authorize-security-group-ingress --group-id $EC2_SG_ID --protocol -1 --source-group $EC2_SG_ID --query 'Return' --output text
echo Create the IAM Role to be used with the ECS Container Instance EC2
EC2_ROLE_NAME=$LAB_NAME-ec2-role
aws iam create-role --role-name $EC2_ROLE_NAME --query 'Role.Arn' --output text --assume-role-policy-document '{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}'
echo Attach necessary managed policies required for ECS
aws iam attach-role-policy --role-name $EC2_ROLE_NAME --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role
aws iam attach-role-policy --role-name $EC2_ROLE_NAME --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
aws iam attach-role-policy --role-name $EC2_ROLE_NAME --policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy
echo Create the instance profile for the ECS Container Instances EC2
INSTANCE_PROFILE_NAME=$LAB_NAME-ec2-instance-profile
aws iam create-instance-profile --instance-profile-name $INSTANCE_PROFILE_NAME
echo Add the role to the instance profile
aws iam add-role-to-instance-profile --instance-profile-name $INSTANCE_PROFILE_NAME --role-name $EC2_ROLE_NAME
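IAM changes can take a few seconds to propagate. Before launching instances, you can optionally confirm that the role is attached to the instance profile:
echo Optional check - confirm the role is attached to the instance profile
sleep 10
aws iam get-instance-profile --instance-profile-name $INSTANCE_PROFILE_NAME --query 'InstanceProfile.Roles[].RoleName' --output text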
- Create the ECS cluster: Creating a resilient and scalable Amazon ECS cluster across multiple logical AWS Outposts involves several steps.
echo Create the ECS cluster with CloudWatch Container Insights enabled
ECS_CLUSTER_NAME=$LAB_NAME-cluster
ECS_CLUSTER_ARN=$(aws ecs create-cluster --cluster-name $ECS_CLUSTER_NAME --settings "name=containerInsights,value=enabled" --tags $ECS_TAGS --query 'cluster.clusterArn' --output text)
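A quick optional check that the cluster is active before moving on:
echo Optional check - confirm the ECS cluster is active
aws ecs describe-clusters --clusters $ECS_CLUSTER_NAME --query 'clusters[0].status' --output text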
- Create an EC2 Launch Template: The launch template defines the configuration of EC2 instances that will run your ECS tasks. It includes specifications for the instance type, IAM roles, security groups, and user data that configures the instance to join your ECS cluster.
Please note that if your Outposts are configured differently or have different instance slotting, you will need a separate launch template for each. Tailor each launch template to its Outpost and assign it to the corresponding Auto Scaling group and capacity provider in the subsequent steps, so that each Outpost is provisioned with the correct settings and resources.
echo Retrieve the AMI ID for the ECS Optimized image on Amazon Linux 2023
OPTIMIZED_ECS_AMI_ID=$(aws ssm get-parameters --names /aws/service/ecs/optimized-ami/amazon-linux-2023/recommended/image_id --query 'Parameters[0].Value' --output text)
echo Create EC2 Launch Template to be used with the ECS Container Instance EC2 Capacity Provider
cat <<EOF > user-data.txt
#!/bin/bash
echo ECS_CLUSTER=$ECS_CLUSTER_NAME >> /etc/ecs/ecs.config
echo ECS_CONTAINER_INSTANCE_PROPAGATE_TAGS_FROM=ec2_instance >> /etc/ecs/ecs.config
EOF
LAUNCH_TEMPLATE_ID=$(aws ec2 create-launch-template \
--launch-template-name "$LAB_NAME-ec2-launch-template" \
--version-description "ECS EC2 Capacity Provider with AL2023" \
--launch-template-data '{
"ImageId": "'$OPTIMIZED_ECS_AMI_ID'",
"InstanceType": "m5.xlarge",
"IamInstanceProfile": {
"Name": "'$INSTANCE_PROFILE_NAME'"
},
"NetworkInterfaces": [
{
"DeviceIndex": 0,
"Groups": ["'$EC2_SG_ID'"],
"DeleteOnTermination": true
}
],
"BlockDeviceMappings": [
{
"DeviceName": "/dev/xvda",
"Ebs": {
"VolumeSize": 30,
"VolumeType": "gp2",
"DeleteOnTermination": true
}
}
],
"UserData": "'$(cat user-data.txt | base64)'"
}' \
--query 'LaunchTemplate.LaunchTemplateId' \
--output text)
sleep 5
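If you want to double-check the template before wiring it into the Auto Scaling groups, an optional lookup:
echo Optional check - confirm the launch template was created
aws ec2 describe-launch-templates --launch-template-ids $LAUNCH_TEMPLATE_ID --query 'LaunchTemplates[0].LaunchTemplateName' --output text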
- Create Auto Scaling Groups: Configure Auto Scaling Groups (ASG) to automatically manage the scaling of your EC2 instances across different subnets on Outposts. This setup enhances high availability and fault tolerance.
echo Create the EC2 ASG using the Launch Template in the Private Subnet 1
aws autoscaling create-auto-scaling-group \
--auto-scaling-group-name $LAB_NAME-asg-az1 \
--launch-template LaunchTemplateId=$LAUNCH_TEMPLATE_ID,Version='$Latest' \
--min-size 1 \
--max-size 2 \
--desired-capacity 1 \
--vpc-zone-identifier "$PRIVATE_SUBNET1_ID" \
--new-instances-protected-from-scale-in \
--tags $AS_TAGS Key=OutpostAZ,Value=1 Key=Name,Value=$LAB_NAME-container-instance-az1
ASG_AZ1_ARN=$(aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names $LAB_NAME-asg-az1 --query 'AutoScalingGroups[0].AutoScalingGroupARN' --output text)
echo Create the EC2 ASG using the Launch Template in the Private Subnet 2
aws autoscaling create-auto-scaling-group \
--auto-scaling-group-name $LAB_NAME-asg-az2 \
--launch-template LaunchTemplateId=$LAUNCH_TEMPLATE_ID,Version='$Latest' \
--min-size 1 \
--max-size 2 \
--desired-capacity 1 \
--vpc-zone-identifier "$PRIVATE_SUBNET2_ID" \
--new-instances-protected-from-scale-in \
--tags $AS_TAGS Key=OutpostAZ,Value=2 Key=Name,Value=$LAB_NAME-container-instance-az2
ASG_AZ2_ARN=$(aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names $LAB_NAME-asg-az2 --query 'AutoScalingGroups[0].AutoScalingGroupARN' --output text)
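The instances launched by the two Auto Scaling groups take a few minutes to boot and join the cluster. The following optional polling loop (a minimal sketch, assuming one instance per ASG) waits until both container instances have registered:
echo Wait for the container instances to register with the ECS cluster
while [ "$(aws ecs list-container-instances --cluster $ECS_CLUSTER_NAME --query 'length(containerInstanceArns)' --output text)" -lt 2 ]; do
echo "Waiting for container instances to register..."
sleep 30
done
echo "Both container instances are registered."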
- Create capacity providers (CPs): CPs link your Auto Scaling groups to the ECS cluster, enabling ECS to scale the underlying capacity based on task demand. They also give you the ability to choose where your tasks are deployed.
This step is crucial as it involves defining two capacity providers, one for each logical Outpost. Here, you will assign a weight to each capacity provider to determine their contribution to the overall task placement strategy. This configuration ensures that tasks are distributed according to the defined weights, optimizing resource utilization across both Outposts.
echo Create the ECS Capacity Provider using the EC2 ASG AZ1
ECS_CP_AZ1_NAME=$(aws ecs create-capacity-provider \
--name $LAB_NAME-ec2-cp-az1 \
--auto-scaling-group-provider autoScalingGroupArn="$ASG_AZ1_ARN",managedScaling='{status="ENABLED",targetCapacity=100,minimumScalingStepSize=1,maximumScalingStepSize=100}',managedTerminationProtection="ENABLED" \
--region $AWS_REGION \
--query 'capacityProvider.name' --output text)
echo Create the ECS Capacity Provider using the EC2 ASG AZ2
ECS_CP_AZ2_NAME=$(aws ecs create-capacity-provider \
--name $LAB_NAME-ec2-cp-az2 \
--auto-scaling-group-provider autoScalingGroupArn="$ASG_AZ2_ARN",managedScaling='{status="ENABLED",targetCapacity=100,minimumScalingStepSize=1,maximumScalingStepSize=100}',managedTerminationProtection="ENABLED" \
--region $AWS_REGION \
--query 'capacityProvider.name' --output text)
echo Associate the ECS Capacity Provider with the cluster
aws ecs put-cluster-capacity-providers \
--cluster $ECS_CLUSTER_NAME \
--capacity-providers $ECS_CP_AZ1_NAME $ECS_CP_AZ2_NAME \
--default-capacity-provider-strategy capacityProvider=$ECS_CP_AZ1_NAME,weight=1 capacityProvider=$ECS_CP_AZ2_NAME,weight=1 \
--region $AWS_REGION \
--query 'cluster.clusterArn' --output text
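You can optionally confirm that both capacity providers are now associated with the cluster:
echo Optional check - list the capacity providers attached to the cluster
aws ecs describe-clusters --clusters $ECS_CLUSTER_NAME --query 'clusters[0].capacityProviders' --output text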
- Sample task for testing: This task definition is structured to ensure optimal performance and reliability for NGINX running within our ECS cluster on AWS Outposts.
For testing purposes, we deployed the same tasks to both Outposts racks. However, you can tailor the deployment by assigning tasks separately to each Outpost, leveraging the appropriate subnet. This flexibility allows you to optimize resource usage and task distribution based on the specific needs and capabilities of each Outpost.
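The task definition and service below reference an execution role, a task role, and a task security group ($NGINX_TASK_EXEC_ROLE_ARN, $NGINX_TASK_ROLE_ARN, and $NGINX_TASK_SG_ID) that are not created elsewhere in this walkthrough. The following is a minimal sketch that creates them; the role and security group names are illustrative assumptions, so adjust them to your own naming and policy standards.
echo Create the IAM roles referenced by the NGINX task definition
cat <<EOF > ecs-tasks-trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ecs-tasks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
# Execution role: lets the ECS agent pull the container image and write to CloudWatch Logs
NGINX_TASK_EXEC_ROLE_ARN=$(aws iam create-role --role-name $LAB_NAME-nginx-task-exec-role --assume-role-policy-document file://ecs-tasks-trust-policy.json --query 'Role.Arn' --output text)
aws iam attach-role-policy --role-name $LAB_NAME-nginx-task-exec-role --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
# Task role: assumed by the application containers themselves (no policies needed for this sample)
NGINX_TASK_ROLE_ARN=$(aws iam create-role --role-name $LAB_NAME-nginx-task-role --assume-role-policy-document file://ecs-tasks-trust-policy.json --query 'Role.Arn' --output text)
echo Create the Security Group for the NGINX tasks
NGINX_TASK_SG_ID=$(aws ec2 create-security-group --group-name "$LAB_NAME-nginx-task-sg" --description "Security group for the NGINX sample tasks" --vpc-id $VPC_ID --query 'GroupId' --output text)
echo Allow HTTP 80 from within the VPC
aws ec2 authorize-security-group-ingress --group-id $NGINX_TASK_SG_ID --protocol tcp --port 80 --cidr $VPC_CIDR --query 'Return' --output text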
echo Create the ECS Task Definition for the NGINX
NGINX_TASK_DEF_NAME=$LAB_NAME-nginx-sample
cat <<EOF > nginx-task-definition.json
{
"family": "$NGINX_TASK_DEF_NAME",
"networkMode": "awsvpc",
"containerDefinitions": [
{
"name": "nginx",
"image": "public.ecr.aws/nginx/nginx:latest",
"memory": 512,
"cpu": 256,
"essential": true,
"portMappings": [
{
"containerPort": 80,
"hostPort": 80,
"protocol": "tcp"
}
],
"healthCheck": {
"command": [
"CMD-SHELL",
"curl -f http://localhost/ || exit 1"
],
"interval": 30,
"timeout": 5,
"retries": 3,
"startPeriod": 60
},
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/$NGINX_TASK_DEF_NAME",
"awslogs-region": "$AWS_REGION",
"awslogs-stream-prefix": "nginx"
}
}
}
],
"requiresCompatibilities": [
"EC2"
],
"cpu": "256",
"memory": "512",
"executionRoleArn": "$NGINX_TASK_EXEC_ROLE_ARN",
"taskRoleArn": "$NGINX_TASK_ROLE_ARN"
}
EOF
NGINX_TASK_DEF_ARN=$(aws ecs register-task-definition --query 'taskDefinition.taskDefinitionArn' --output text --cli-input-json file://nginx-task-definition.json)
echo Create the CloudWatch Log Group for the Task Definition
aws logs create-log-group --log-group-name /ecs/$NGINX_TASK_DEF_NAME
aws logs put-retention-policy --log-group-name /ecs/$NGINX_TASK_DEF_NAME --retention-in-days 7
echo Create the ECS Service with the NGINX Task Definition
aws ecs create-service \
--cluster $ECS_CLUSTER_NAME \
--service-name nginx-sample \
--task-definition $NGINX_TASK_DEF_NAME \
--desired-count 2 \
--capacity-provider-strategy capacityProvider=$ECS_CP_AZ1_NAME,weight=1 capacityProvider=$ECS_CP_AZ2_NAME,weight=1 \
--network-configuration "awsvpcConfiguration={subnets=[$PRIVATE_SUBNET1_ID,$PRIVATE_SUBNET2_ID],securityGroups=[$NGINX_TASK_SG_ID]}" \
--query 'service.serviceArn' --output text
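Once the service has had a minute or two to place tasks, an optional check shows where each task landed; the Availability Zone column should show both Outpost AZs:
echo Optional check - confirm the tasks are spread across both Outposts
TASK_ARNS=$(aws ecs list-tasks --cluster $ECS_CLUSTER_NAME --service-name nginx-sample --query 'taskArns[]' --output text)
aws ecs describe-tasks --cluster $ECS_CLUSTER_NAME --tasks $TASK_ARNS --query 'tasks[].{AZ:availabilityZone,Status:lastStatus,Task:taskArn}' --output table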
Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.