Part 3: A step-by-step guide for AWS EC2 provisioning using Terraform: Cloud Cost Optimization with AWS EC2 Spot Instances, CloudWatch, SNS, and Lambda

Part 3: Optimizing cloud costs is crucial for businesses leveraging cloud services. AWS EC2 Spot Instances provide a cost-effective way to run applications by taking advantage of unused EC2 capacity at significantly lower prices. Here is a guide on how to use Terraform for cloud cost optimization with AWS EC2 Spot Instances.

Published Jun 20, 2024
Last Modified Jun 26, 2024

How to request an EC2 Spot Instance in the AWS Management Console?

Here’s how:
  • Log in to the AWS Management Console and navigate to the EC2 service.
  • Click on “Spot Requests” in the left-hand menu.
  • Click on “Request Spot Instances” to launch the wizard.
  • Choose the Amazon Machine Image (AMI) and instance type for your Spot Instance.
  • Enter the number of Spot Instances you want to request (AWS will try to launch this many instances based on your bid price).
  • Set a maximum price you’re willing to pay per hour for the instance. This is crucial as Spot prices fluctuate. Check the historical pricing information to set a competitive bid.
  • Choose whether you want a one-time request or a persistent request. Select “persistent” for this scenario.
  • Configure options like security groups, storage, and IAM roles.
  • Review and launch the request. (A Terraform equivalent of this persistent request is sketched right after this list.)
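For readers who prefer code, the same persistent request can also be filed from Terraform using the aws_spot_instance_request resource. This is a minimal sketch, not taken from this tutorial's repository; the AMI ID, maximum price, and key name are placeholders you should replace:
# Persistent Spot request, roughly equivalent to the console wizard above
resource "aws_spot_instance_request" "console_equivalent" {
  ami           = "ami-0eaf7c3456e7b5b68" # replace with your AMI
  instance_type = "t2.micro"
  key_name      = "your_key_name"         # replace with your key pair

  spot_price           = "0.0045"      # maximum price you are willing to pay per hour
  spot_type            = "persistent"  # keep requesting capacity after interruptions
  wait_for_fulfillment = true          # wait until the request is fulfilled before continuing

  tags = {
    Name = "SpotRequestFromTerraform"
  }
}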
Screenshots: requesting a Spot Instance via the AWS Management Console (Step 1: Create Spot Fleet request; Steps 2 and 3: Specify instance network; Step 4: Select instance type; Step 5: Launch). The Spot Instance request is filed and fulfilled, and prodxcloud uses Spot Instances to save cost.

How to Create an EC2 Spot Instance with Terraform?

Here’s a concise summary of the Terraform configuration for creating an AWS EC2 Spot Instance:
# Create an EC2 Spot Instance
resource "aws_instance" "spot_instance" {
  ami           = "ami-0eaf7c3456e7b5b68" # Example AMI ID, replace with your AMI
  instance_type = "t2.micro"
  key_name      = var.instance_keyName
  subnet_id     = var.instance_subnet_id

  vpc_security_group_ids = [aws_security_group.prodxcloud-SG.id]

  # Request Spot capacity instead of On-Demand
  # (without this block the instance launches as a regular On-Demand instance)
  instance_market_options {
    market_type = "spot"
  }

  tags = {
    Name = "SpotInstanceTF"
  }
}
This block defines the EC2 instance resource and marks it as a Spot Instance via the instance_market_options block.
  • ami: Specifies the Amazon Machine Image (AMI) ID to use for the instance. AMIs are pre-configured templates for your instances. Replace this with your desired AMI ID.
  • instance_type: Specifies the instance type, in this case t2.micro, a small, general-purpose instance.
  • key_name: The name of the key pair to use for SSH access to the instance. Point var.instance_keyName at the name of your key pair.
  • vpc_security_group_ids: A list of security group IDs to associate with the instance. Here it references the security group created earlier via aws_security_group.prodxcloud-SG.id.
  • instance_market_options: Setting market_type = "spot" tells AWS to fulfill the instance from Spot capacity instead of On-Demand.
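The snippet above references two input variables, var.instance_keyName and var.instance_subnet_id, which are assumed to be declared elsewhere in the project, for example in a variables.tf. A minimal sketch with placeholder defaults:
variable "instance_keyName" {
  description = "Name of the EC2 key pair used for SSH access"
  type        = string
  default     = "your_key_name" # replace with your key pair
}

variable "instance_subnet_id" {
  description = "ID of the subnet the Spot Instance is launched into"
  type        = string
  default     = "subnet-0123456789abcdef0" # replace with your subnet ID
}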
Steps to Deploy
  1. Initialize Terraform:
terraform init
  2. Plan the deployment:
terraform plan
  3. Apply the configuration:
terraform apply
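Optionally, you can add an output to confirm the launch after terraform apply; this small block is illustrative and not part of the original configuration:
output "spot_instance_public_ip" {
  description = "Public IP of the Spot Instance created above"
  value       = aws_instance.spot_instance.public_ip
}
Terraform prints the value at the end of terraform apply, or on demand with terraform output spot_instance_public_ip.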
Screenshot: Spot Instance created by Terraform

Sentinel cost compliance — tagging

Tagging enables you to group, analyze, and create more granular policies around infrastructure instances (see how tagging helps teams at HashiCorp cull orphaned cloud instances).
Sentinel can enforce tagging at the provisioning phase and during updates so that optimization can be targeted and governed. Tagging is managed in a simple key-value format and can be enforced across all cloud service providers (CSPs). Here is a sample policy fragment for enforcement on AWS.
### List of mandatory tags ###
mandatory_tags = [
  "Name",
  "ttl",
  "owner",
  "cost center",
  "appid",
]
The policy is then enforced through its entry in the Sentinel configuration:
policy "enforce-mandatory-tags" {
  enforcement_level = "hard-mandatory"
}
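Sentinel checks the tags at plan time; on the Terraform side you can also make the same tags the default on every resource an AWS provider creates by using the provider's default_tags block. A minimal sketch, assuming your provider block from Part 1 does not already set default_tags; the tag values are placeholders:
provider "aws" {
  region = var.aws_region

  # Applied automatically to every taggable resource this provider creates;
  # tags set directly on a resource (like Name on the instance) take precedence.
  default_tags {
    tags = {
      Name          = "prodxcloud-lab"
      ttl           = "72h"
      owner         = "platform-team"
      "cost center" = "cc-0000"
      appid         = "prodxcloud"
    }
  }
}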

CloudWatch monitoring and cost monitoring for the EC2 instance in your Terraform configuration

  • Create CloudWatch Alarms: Set up CloudWatch alarms to monitor the instance’s metrics such as CPU utilization.
  • Enable Detailed Monitoring: Enable detailed monitoring on your EC2 instance.
  • Set Up AWS Cost and Usage Reports: Create an S3 bucket to store the cost and usage reports and set up the necessary configurations.
# CloudWatch Alarm for CPU Utilization
resource "aws_cloudwatch_metric_alarm" "cpu_alarm" {
  alarm_name          = "cpu-utilization-alarm"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods  = 2
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
  period              = 120
  statistic           = "Average"
  threshold           = 80
  alarm_description   = "This alarm triggers when CPU utilization reaches 80% or more"
  actions_enabled     = true
  alarm_actions       = [aws_sns_topic.alerts.arn]
  ok_actions          = [aws_sns_topic.alerts.arn]

  dimensions = {
    InstanceId = aws_instance.prodxcloud-lab-1.id # the instance created in Part 1
  }
}

# SNS Topic for CloudWatch Alarms
resource "aws_sns_topic" "alerts" {
  name = "cloudwatch-alerts"
}

# Subscribe an email to the SNS Topic (the subscription must be confirmed by email)
resource "aws_sns_topic_subscription" "email_subscription" {
  topic_arn = aws_sns_topic.alerts.arn
  protocol  = "email"
  endpoint  = "joelwembo@outlook.com" # Replace with your email
}

# Create an S3 bucket for cost and usage reports (bucket names must be globally unique)
resource "aws_s3_bucket" "cost_usage_reports" {
  bucket = "my-cost-usage-reports-bucket" # Replace with your bucket name
}

# Cost and Usage Report Definition
# Note: the Cost and Usage Reports API is only available in us-east-1, so this
# resource must be created through a provider configured for that region.
resource "aws_cur_report_definition" "example" {
  report_name                = "example-report"
  time_unit                  = "HOURLY"
  format                     = "textORcsv"
  compression                = "GZIP"
  additional_schema_elements = ["RESOURCES"]
  s3_bucket                  = aws_s3_bucket.cost_usage_reports.bucket
  s3_region                  = var.aws_region
  s3_prefix                  = "cost-reports/"
  report_versioning          = "OVERWRITE_REPORT"
  refresh_closed_reports     = true
}
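One detail worth calling out: the Cost and Usage Reports service must be granted permission to write into the bucket, or the report definition above will fail to create. Below is a sketch of the bucket policy that is typically required; it references the bucket resource defined above:
resource "aws_s3_bucket_policy" "cost_usage_reports" {
  bucket = aws_s3_bucket.cost_usage_reports.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "AllowCURCheckBucket"
        Effect    = "Allow"
        Principal = { Service = "billingreports.amazonaws.com" }
        Action    = ["s3:GetBucketAcl", "s3:GetBucketPolicy"]
        Resource  = aws_s3_bucket.cost_usage_reports.arn
      },
      {
        Sid       = "AllowCURPutObject"
        Effect    = "Allow"
        Principal = { Service = "billingreports.amazonaws.com" }
        Action    = "s3:PutObject"
        Resource  = "${aws_s3_bucket.cost_usage_reports.arn}/*"
      }
    ]
  })
}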
Apply the current solution; the SNS topic is created to receive pricing and CPU consumption alerts.

Add Lambda functions that start and stop EC2 instances using CloudWatch

Here is the solution for setting up an EC2 instance, enabling CloudWatch monitoring, creating Lambda functions to start and stop the instance, and setting up cost monitoring with AWS Cost and Usage Reports.
  • start-instance.py
  • stop-instance.py
  • cost_monitoring.py
Edit the main.tf that we created in Part 1 of the tutorial by enabling the monitoring feature.
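A minimal sketch of that change, assuming the instance resource from Part 1 is named prodxcloud-lab-1 as referenced by the CloudWatch alarm earlier; keep your existing arguments, the only new line is the monitoring flag:
resource "aws_instance" "prodxcloud-lab-1" {
  ami           = "ami-0eaf7c3456e7b5b68" # keep the AMI you already use in Part 1
  instance_type = "t2.micro"              # keep the type you already use in Part 1

  monitoring = true # enable detailed (1-minute) CloudWatch monitoring
}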
Next, create a separate file, LambdaCloudWatchEC2.tf, for the CloudWatch price and CPU monitoring automation.
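Below is a sketch of what LambdaCloudWatchEC2.tf can contain for the stop side of the automation. It assumes the article's stop-instance.py is saved as stop_instance.py and zipped into stop_instance.zip next to the Terraform files; the role names, schedule, and paths are illustrative, not the article's exact values. The start function follows the same pattern with ec2:StartInstances and its own schedule.
# Execution role the Lambda function assumes
resource "aws_iam_role" "lambda_ec2_scheduler" {
  name = "lambda-ec2-scheduler-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "lambda.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

# Allow the function to start/stop EC2 instances and write CloudWatch Logs
resource "aws_iam_role_policy" "lambda_ec2_scheduler" {
  name = "lambda-ec2-scheduler-policy"
  role = aws_iam_role.lambda_ec2_scheduler.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["ec2:StartInstances", "ec2:StopInstances", "ec2:DescribeInstances"]
        Resource = "*"
      },
      {
        Effect   = "Allow"
        Action   = ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"]
        Resource = "*"
      }
    ]
  })
}

# Lambda function packaged from stop_instance.py (zipped as stop_instance.zip)
resource "aws_lambda_function" "stop_instance" {
  function_name = "stop-ec2-instance"
  role          = aws_iam_role.lambda_ec2_scheduler.arn
  runtime       = "python3.12"
  handler       = "stop_instance.lambda_handler"
  filename      = "stop_instance.zip"

  environment {
    variables = {
      INSTANCE_ID = aws_instance.prodxcloud-lab-1.id # the instance from Part 1
    }
  }
}

# Run every day at 20:00 UTC (adjust the cron expression to your needs)
resource "aws_cloudwatch_event_rule" "stop_schedule" {
  name                = "stop-ec2-nightly"
  schedule_expression = "cron(0 20 * * ? *)"
}

resource "aws_cloudwatch_event_target" "stop_schedule" {
  rule = aws_cloudwatch_event_rule.stop_schedule.name
  arn  = aws_lambda_function.stop_instance.arn
}

# Let CloudWatch Events / EventBridge invoke the function
resource "aws_lambda_permission" "allow_events" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.stop_instance.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.stop_schedule.arn
}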
Run terraform apply again, then check your AWS Management Console for the new Lambda functions. Edit the SNS topic ARN where needed, and check your email for the validation and confirmation of the SNS subscription.
Update: Once you are done with this tutorial, you might want to check the follow-up tutorial in the next part, A step-by-step guide for AWS EC2 provisioning using Terraform: Azure VM and Networking (multi-cloud preparations) — Part 4

Conclusion

In conclusion, while AWS doesn’t directly offer manual launching of individual Spot Instances, you can achieve similar outcomes using Persistent Spot Requests or Auto Scaling with Spot Instances. Persistent Spot Requests allow you to continuously try launching instances at your bid price, providing flexibility to manage launched instances individually. Auto Scaling with Spot Instances offers more automation, allowing you to configure an Auto Scaling group to launch and terminate Spot Instances based on your defined policies.
Terraform, a popular infrastructure as code (IaC) tool, can be leveraged to automate the provisioning and management of Spot Instances within AWS. By incorporating Terraform configurations, you can streamline the creation of Spot Instance requests, launch configurations within Auto Scaling groups, and manage scaling policies. This not only ensures infrastructure consistency but also simplifies managing Spot Instances at scale.
To enhance readability, this handbook is divided into chapters and split into parts. The first part, "A step-by-step guide for AWS EC2 provisioning using Terraform: HA, ALB, VPC, and Route53 — Part 1", the second part, "A step-by-step guide for AWS EC2 provisioning using Terraform: HA, CloudFront, WAF, and SSL Certificate — Part 2", and this part, "A step-by-step guide for AWS EC2 provisioning using Terraform: Cloud Cost Optimization, AWS EC2 Spot Instances — Part 3", were each covered in separate articles to keep the reading time manageable and ensure focused content. The next part or chapter will be published in the next post, upcoming in a few days: "A step-by-step guide for AWS EC2 provisioning using Terraform: VPC peering, VPN, Site-to-site Connection, tunnels (multi-Cloud) — Part 9", and so much more!
Thank you for reading! 🙌🏻 Don't forget to subscribe and give it a CLAP 👏, and if you found this article useful, contact me or feel free to sponsor me to produce more public content. See you in the next article. 🤘

About me

I am Joel Wembo, an AWS certified cloud solutions architect, back-end developer, and AWS Community Builder. I'm based in the Philippines 🇵🇭 and currently working at prodxcloud as a DevOps & Cloud Architect. I bring a powerful combination of expertise in cloud architecture, DevOps practices, and a deep understanding of high availability (HA) principles. I leverage my knowledge to create robust, scalable cloud applications using open-source tools for efficient enterprise deployments.
I'm looking to collaborate on AWS CDK, AWS SAM, DevOps CI/CD, Serverless Framework, CloudFormation, Terraform, Kubernetes, TypeScript, GitHub Actions, PostgreSQL, and Django.
For more information about the author ( Joel O. Wembo ) visit:
Links:
🚀 A step-by-step guide for AWS EC2 provisioning using Terraform: How to set up SSM ( AWS Systems Manager ) for EC2? — Part 14
and Much More …
 
