How We Cut AWS Costs by 70% with ECS Over EKS


Discover how choosing AWS ECS over EKS cut our EC2 costs by 70% in this real-world case study on secure, cost-effective container deployments in AWS.

Published May 21, 2025

Introduction: Understanding the Real Cost of Cloud Orchestration

Cloud-native architectures promise agility, scalability, and efficiency. But as many teams quickly discover, choosing the wrong orchestration tool can silently inflate your monthly cloud bill. With AWS offering multiple options like ECS, EKS, and EC2, how do you decide what’s best for your application?
In this blog post, we'll walk you through our journey of migrating from a nine-instance EC2 deployment to Amazon ECS, and why we intentionally chose ECS over EKS despite having Kubernetes expertise. The result? A massive cost reduction, from roughly $1,300 to $300 per month, without sacrificing performance or security.

Our Setup: 9 Applications, 5 Developers, and a Mission to Optimize

Docker Compose, Nginx, and EC2: The Original Architecture

Our team of five full-stack developers was managing nine containerized applications. Each app had:
  • A Docker container for the application
  • An Nginx container to handle incoming requests
  • Separate EC2 Ubuntu instances for each app
  • Individual load balancers pointing to port 80 of each instance
This setup, while functional, lacked resource efficiency and was cost-heavy due to the overhead of maintaining nine EC2 instances.

Why Security Was at the Core of Our Deployment Design

Security drove many of our architectural choices. We exposed only port 80 publicly and used Nginx to proxy traffic to the internal app port (8080). Each app was siloed in its own EC2 instance, reducing lateral movement risks and ensuring clear network boundaries.
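
As a rough sketch, the per-instance Nginx reverse proxy looked something like this (server names and header choices here are illustrative, not our exact config):

```nginx
server {
    # Only port 80 is reachable from outside, enforced by the security group
    listen 80;

    location / {
        # Proxy to the app container listening on the internal-only port 8080
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```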

Phase 1: Identifying the Bottlenecks in Our EC2-Only Setup

Over-provisioning & Under-utilization

Nine EC2 instances meant lots of idle resources. While one app may have been spiking in CPU, another would sit nearly idle. Yet, we paid for all of it, all the time.

Nine Load Balancers – Too Much of a Good Thing

Each app had its own dedicated load balancer. While this worked, it introduced redundant infrastructure and unnecessary cost, especially since most apps had modest, predictable traffic.

Phase 2: Transition to Amazon ECS for Maximum Cost Efficiency

Why We Chose ECS over EKS – A Practical Decision

While we’re comfortable with Kubernetes and EKS, we recognized that it would:
  • Add operational complexity
  • Introduce a fixed monthly control plane cost ($72+)
  • Require Helm charts, ingress controllers, HPA setups, and more
ECS, in contrast, was native, simple, and effective.
We didn’t need multi-cloud orchestration or complex service meshes. ECS gave us the flexibility we needed without the Kubernetes overhead.

ECS Architecture: Services, Tasks, and Load Balancer Mapping

Here’s how we restructured:
  • Two ECS clusters (one for backend, one for support services)
  • Seven services, each running a single task by default (scaling up to a maximum of three)
  • Load balancers reassigned to ECS target groups on port 8080
  • Tasks routed internally from the load balancer’s port 80
We retained our original Docker images, making the migration smooth.
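
Each service was backed by a task definition following the same pattern. A trimmed, illustrative example (the family name, image URI, and CPU/memory sizes are placeholders, not our exact values):

```json
{
  "family": "backend-app",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["EC2"],
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/backend-app:latest",
      "essential": true,
      "portMappings": [
        { "containerPort": 8080, "protocol": "tcp" }
      ]
    }
  ]
}
```

Because the container images were unchanged, only the `portMappings` and networking details needed attention during the move.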

Security Re-Architecture: Preserving Our Zero-Trust Principle

Port Mapping Strategy: 8080 (ECS Task) → 80 (ALB Listener) via Target Groups

In ECS, each task ran on port 8080, but the load balancer mapped it to port 80. This preserved the external exposure rules we had in EC2 while taking full advantage of ECS internal networking.
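
The target-group side of that mapping can be expressed as input to `aws elbv2 create-target-group --cli-input-json`; a sketch with placeholder names and IDs:

```json
{
  "Name": "backend-app-tg",
  "Protocol": "HTTP",
  "Port": 8080,
  "VpcId": "vpc-0123456789abcdef0",
  "TargetType": "ip",
  "HealthCheckPath": "/"
}
```

The ALB listener itself stays on port 80, so nothing about the public surface changes; only the target group knows about 8080.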

Security Groups and ECS Task-Level Access Control

We retained strict security group rules: only port 80 was open to the outside world. ECS tasks were given task roles for fine-grained access to AWS resources, further strengthening our security posture.
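
A task role policy scoped this tightly might look like the following (the bucket name and action set are hypothetical, shown only to illustrate per-task least privilege):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAppBucketRead",
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example-app-bucket/*"
    }
  ]
}
```

Each service gets its own role, so a compromised task can reach only the resources that one app legitimately needs.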

Cost Optimization: The Results Speak Volumes

EC2 Costs Before vs After ECS Migration

| Metric             | Before (EC2 Only) | After (ECS + 2 EC2)         |
| ------------------ | ----------------- | --------------------------- |
| Monthly EC2 Cost   | ~$1,300           | ~$300                       |
| Load Balancers     | 9                 | 2 (reused 7)                |
| Containers Managed | 9                 | 7 (2 frontend still on EC2) |
| Savings            | –                 | ~$1,000/month               |
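
The arithmetic behind the headline number is straightforward; a quick sanity check:

```python
# Approximate monthly EC2 spend before and after the migration (USD)
before = 1300
after = 300

savings = before - after
savings_pct = savings / before * 100

print(f"Monthly savings: ${savings} (~{savings_pct:.0f}%)")
# The raw reduction is closer to 77%, which is why "70%" is the conservative claim
```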

Load Balancer Reuse & Task Scaling Strategy

We kept our existing ALBs and reassigned them to ECS target groups. This eliminated setup time and maintained routing consistency. For apps that needed more throughput (e.g., during event registrations), we allowed up to 3 tasks per service, providing just enough elasticity.
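
One way to get that one-to-three-task elasticity is ECS service auto scaling with a target-tracking policy: register the service as a scalable target with a min of 1 and max of 3, then attach a configuration of roughly this shape via `aws application-autoscaling put-scaling-policy` (the target value and cooldowns below are illustrative, not our exact tuning):

```json
{
  "TargetValue": 70.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
  },
  "ScaleOutCooldown": 60,
  "ScaleInCooldown": 300
}
```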

ECS vs EKS: Head-to-Head Comparison Based on Our Use Case

| Criteria             | ECS                                              | EKS                                    |
| -------------------- | ------------------------------------------------ | -------------------------------------- |
| Operational Overhead | Minimal                                          | Considerable                           |
| Monthly Base Cost    | $0                                               | $72+                                   |
| Security Simplicity  | Straightforward                                  | Complex (RBAC, policies, PSP)          |
| Scaling              | Manual/Auto                                      | Auto (HPA, cluster autoscaler)         |
| Use Case Fit         | ✅ Perfect for small-to-medium predictable workloads | ❌ Overkill for stable traffic patterns |
| Cost Transparency    | Clear & granular                                 | Abstracted across cluster/pods         |

When Should You Choose ECS Over EKS?

Perfect Use Cases for ECS

  • Small teams managing multiple containerized apps
  • Predictable or moderately spiky workloads
  • Need for tight AWS integration
  • Cost-sensitive environment

When EKS Might Make Sense Instead

  • Multi-cloud/hybrid cloud architecture
  • Advanced CI/CD, custom controllers, or operators
  • Stateful apps with CSI drivers or custom networking
  • You already run Kubernetes on-prem or with another cloud provider

Conclusion: How We Future-Proofed Our Infrastructure Without Kubernetes

Our story proves that you don’t need Kubernetes to build secure, scalable, production-grade infrastructure on AWS. With thoughtful architecture, ECS enabled us to:
  • Cut costs by 70%
  • Maintain strong security practices
  • Scale on demand
  • Simplify deployment workflows for our development team
ECS gave us the cloud-native benefits without the Kubernetes complexity — and that made all the difference.

🔑 Key Takeaways for Cost-Conscious Teams Choosing Between ECS and EKS

  • ECS is a practical fit for small-to-mid teams that need container orchestration without the complexity of Kubernetes. It lets developers focus on shipping code instead of managing clusters.
  • Security and isolation are not compromised in ECS. With task-level IAM roles, VPC networking, and strict security groups, ECS supports secure production workloads just as well as EKS.
  • Microservices thrive on ECS. You don’t need CRDs, Ingress controllers, or service meshes to run scalable services. ECS task definitions, ALB routing, and service discovery do the job well.
  • Auto scaling works effectively with ECS, especially for predictable traffic patterns and event-driven spikes.
  • Cost savings are real. We reduced our EC2 spend by over 70% by consolidating infrastructure and switching to ECS, with zero compromise on stability or security.
  • EKS has its place — particularly for organizations with Kubernetes-first stacks, hybrid/multi-cloud strategies, or complex orchestration needs.
For many startups, mid-sized teams, or projects that prioritize agility and budget control, ECS delivers unmatched simplicity, performance, and cost-efficiency.
Scaling connections like AWS—let’s connect on LinkedIn! 🔗☁️
 
