ECS vs EKS: Which Container Platform for Your Workload?
Compare AWS ECS and EKS for container orchestration. Learn when to choose AWS-native simplicity vs Kubernetes flexibility, with real infrastructure examples.
AWS offers two container orchestration platforms: ECS (Elastic Container Service) and EKS (Elastic Kubernetes Service). ECS is AWS-native and simpler. EKS runs standard Kubernetes. The right choice depends on your team, workload, and multi-cloud ambitions.
Quick Comparison
| Aspect | ECS | EKS |
|---|---|---|
| Learning Curve | Low (AWS-native concepts) | Steep (Kubernetes ecosystem) |
| Portability | AWS-only | Multi-cloud, on-prem |
| Control Plane Cost | Free | $0.10/hour (~$73/month) |
| Ecosystem | AWS services | Massive K8s ecosystem |
| Operational Overhead | Lower | Higher |
| Service Mesh | App Mesh (limited) | Istio, Linkerd, Cilium |
| Auto-scaling | Application Auto Scaling | HPA, VPA, KEDA, Karpenter |
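The two platforms expose similar verbs through very different interfaces. As a taste of the day-to-day difference, here is the same manual scale-out on each, assuming the cluster, service, and deployment names (`production`, `app`) used in the examples below:

```bash
# ECS: scaling is an AWS API call against the service
aws ecs update-service --cluster production --service app --desired-count 5

# EKS: scaling is a Kubernetes API call against the Deployment
kubectl -n production scale deployment/app --replicas=5
```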
ECS: AWS-Native Simplicity
```hcl
# ECS Cluster with Fargate
resource "aws_ecs_cluster" "main" {
  name = "production"

  setting {
    name  = "containerInsights"
    value = "enabled"
  }

  configuration {
    execute_command_configuration {
      logging = "OVERRIDE"
      log_configuration {
        cloud_watch_log_group_name = aws_cloudwatch_log_group.ecs_exec.name
      }
    }
  }
}

resource "aws_ecs_cluster_capacity_providers" "main" {
  cluster_name       = aws_ecs_cluster.main.name
  capacity_providers = ["FARGATE", "FARGATE_SPOT"]

  # Keep at least one task on on-demand Fargate; send the rest to Spot 4:1
  default_capacity_provider_strategy {
    base              = 1
    weight            = 1
    capacity_provider = "FARGATE"
  }

  default_capacity_provider_strategy {
    weight            = 4
    capacity_provider = "FARGATE_SPOT"
  }
}
```
ECS Task Definition
```hcl
resource "aws_ecs_task_definition" "app" {
  family                   = "app"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = 256
  memory                   = 512
  execution_role_arn       = aws_iam_role.ecs_execution.arn
  task_role_arn            = aws_iam_role.ecs_task.arn

  container_definitions = jsonencode([
    {
      name = "app"
      # Prefer an immutable tag over :latest for reproducible deploys
      image = "${aws_ecr_repository.app.repository_url}:latest"
      portMappings = [
        {
          containerPort = 8080
          protocol      = "tcp"
        }
      ]
      environment = [
        { name = "ENV", value = var.environment }
      ]
      secrets = [
        {
          name      = "DATABASE_URL"
          valueFrom = aws_secretsmanager_secret.db.arn
        }
      ]
      logConfiguration = {
        logDriver = "awslogs"
        options = {
          awslogs-group         = aws_cloudwatch_log_group.app.name
          awslogs-region        = var.region
          awslogs-stream-prefix = "app"
        }
      }
      healthCheck = {
        command     = ["CMD-SHELL", "curl -f http://localhost:8080/health || exit 1"]
        interval    = 30
        timeout     = 5
        retries     = 3
        startPeriod = 60
      }
    }
  ])
}

resource "aws_ecs_service" "app" {
  name            = "app"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 3

  capacity_provider_strategy {
    capacity_provider = "FARGATE"
    weight            = 1
    base              = 1
  }

  capacity_provider_strategy {
    capacity_provider = "FARGATE_SPOT"
    weight            = 4
  }

  network_configuration {
    subnets         = var.private_subnet_ids
    security_groups = [aws_security_group.app.id]
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.app.arn
    container_name   = "app"
    container_port   = 8080
  }

  # Roll back automatically if a deployment fails to stabilize
  deployment_circuit_breaker {
    enable   = true
    rollback = true
  }

  enable_execute_command = true
}
```
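With the circuit breaker in place, a release is a single CLI call. A sketch of a typical deploy against this service, assuming new images are pushed to the `:latest` tag as in the task definition above:

```bash
# Force a fresh deployment of the current task definition; the circuit
# breaker rolls back automatically if the new tasks fail to stabilize
aws ecs update-service \
  --cluster production \
  --service app \
  --force-new-deployment

# Watch deployment progress and any rollback
aws ecs describe-services --cluster production --services app \
  --query 'services[0].deployments'
```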
EKS: Kubernetes Power
```hcl
# EKS cluster with control plane logging and secrets encryption
resource "aws_eks_cluster" "main" {
  name     = "production"
  role_arn = aws_iam_role.eks_cluster.arn
  version  = "1.29"

  vpc_config {
    subnet_ids              = var.private_subnet_ids
    endpoint_private_access = true
    endpoint_public_access  = true
    security_group_ids      = [aws_security_group.eks_cluster.id]
  }

  enabled_cluster_log_types = [
    "api", "audit", "authenticator", "controllerManager", "scheduler"
  ]

  encryption_config {
    provider {
      key_arn = aws_kms_key.eks.arn
    }
    resources = ["secrets"]
  }
}

# Managed node group on Spot instances
resource "aws_eks_node_group" "main" {
  cluster_name    = aws_eks_cluster.main.name
  node_group_name = "main"
  node_role_arn   = aws_iam_role.eks_node.arn
  subnet_ids      = var.private_subnet_ids

  scaling_config {
    desired_size = 3
    min_size     = 2
    max_size     = 10
  }

  instance_types = ["t3.medium", "t3a.medium"]
  capacity_type  = "SPOT"

  update_config {
    max_unavailable_percentage = 33
  }
}

# EKS add-ons (versions default to the latest compatible release)
resource "aws_eks_addon" "vpc_cni" {
  cluster_name = aws_eks_cluster.main.name
  addon_name   = "vpc-cni"
}

resource "aws_eks_addon" "coredns" {
  cluster_name = aws_eks_cluster.main.name
  addon_name   = "coredns"
}

resource "aws_eks_addon" "kube_proxy" {
  cluster_name = aws_eks_cluster.main.name
  addon_name   = "kube-proxy"
}
```
Kubernetes Deployment
```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  namespace: production
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      serviceAccountName: app
      containers:
        - name: app
          image: 123456789.dkr.ecr.us-east-1.amazonaws.com/app:latest
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 512Mi
          env:
            - name: ENV
              value: production
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: database-url
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
      # Spread replicas evenly across availability zones
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: app
---
apiVersion: v1
kind: Service
metadata:
  name: app
  namespace: production
spec:
  selector:
    app: app
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  namespace: production
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  # ingressClassName replaces the deprecated kubernetes.io/ingress.class annotation
  ingressClassName: alb
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80
```
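Rolling these manifests out is the same two commands on any Kubernetes cluster, which is the portability argument in miniature:

```bash
# Apply the Deployment, Service, and Ingress, then wait for the rollout
kubectl apply -f deployment.yaml
kubectl -n production rollout status deployment/app --timeout=120s
```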
Auto-Scaling Comparison
ECS Auto Scaling
```hcl
resource "aws_appautoscaling_target" "ecs" {
  max_capacity       = 10
  min_capacity       = 2
  resource_id        = "service/${aws_ecs_cluster.main.name}/${aws_ecs_service.app.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  service_namespace  = "ecs"
}

resource "aws_appautoscaling_policy" "ecs_cpu" {
  name               = "cpu-scaling"
  policy_type        = "TargetTrackingScaling"
  resource_id        = aws_appautoscaling_target.ecs.resource_id
  scalable_dimension = aws_appautoscaling_target.ecs.scalable_dimension
  service_namespace  = aws_appautoscaling_target.ecs.service_namespace

  target_tracking_scaling_policy_configuration {
    target_value       = 70
    scale_in_cooldown  = 300
    scale_out_cooldown = 60

    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
  }
}
```
EKS with HPA + Karpenter
```yaml
# Horizontal Pod Autoscaler
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 10
          periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
        - type: Percent
          value: 100
          periodSeconds: 15
```
```yaml
# Karpenter NodePool for node auto-scaling
# (in v1beta1, consolidateAfter cannot be combined with WhenUnderutilized)
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
        - key: node.kubernetes.io/instance-type
          operator: In
          values: ["t3.medium", "t3.large", "t3a.medium", "t3a.large"]
      nodeClassRef:
        name: default
  limits:
    cpu: 1000
  disruption:
    consolidationPolicy: WhenUnderutilized
```
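With both layers installed, a quick check that pod-level and node-level scaling are active (assuming Karpenter is installed, so the `nodepools` CRD exists):

```bash
# Pod-level scaling: current vs target utilization and replica count
kubectl -n production get hpa app

# Node-level scaling: Karpenter's NodePool status
kubectl get nodepools
```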
When to Choose ECS
Choose ECS when:
- Team is new to containers
- You’re AWS-only (no multi-cloud plans)
- You want minimal operational overhead
- Your workloads are straightforward web apps
- You want to avoid Kubernetes complexity
```bash
# ECS is simpler to debug: ECS Exec opens a shell in a running container
aws ecs execute-command \
  --cluster production \
  --task abc123 \
  --container app \
  --command "/bin/sh" \
  --interactive
```
When to Choose EKS
Choose EKS when:
- Team has Kubernetes experience
- You need multi-cloud or hybrid deployments
- You need the Kubernetes ecosystem (Istio, ArgoCD, Prometheus)
- Complex microservices with service mesh requirements
- You want portability for potential migration
```bash
# The Kubernetes ecosystem gives you more tools
kubectl -n production get pods
kubectl -n production logs -f deployment/app
kubectl -n production exec -it deployment/app -- /bin/sh
kubectl -n production rollout undo deployment/app
```
Cost Comparison
For a typical 3-service application:
| Item | ECS Fargate | EKS with EC2 |
|---|---|---|
| Control Plane | $0 | $73/month |
| Compute (3x1vCPU/2GB) | ~$90/month | ~$60/month (Spot) |
| Load Balancer | ~$20/month | ~$20/month |
| Monitoring | CloudWatch | Prometheus (self-hosted) |
| Total | ~$110/month | ~$153/month |
At scale, EKS often becomes cheaper due to better bin-packing and Spot flexibility.
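The break-even point can be sketched from the table's own numbers. This is a deliberately crude model (illustrative per-service costs, shared load balancer cost ignored): each service costs roughly $30/month of Fargate compute on ECS versus $20/month of Spot EC2 on EKS, plus EKS's fixed $73 control plane.

```bash
# Back-of-the-envelope: find where EKS's fixed fee is amortized away
ecs_per_service=30   # ~$90/month for the 3 services above, on Fargate
eks_per_service=20   # ~$60/month for the 3 services above, on Spot EC2
eks_control_plane=73 # fixed control plane fee

for n in $(seq 1 20); do
  ecs=$(( n * ecs_per_service ))
  eks=$(( eks_control_plane + n * eks_per_service ))
  if [ "$eks" -lt "$ecs" ]; then
    echo "EKS undercuts ECS at $n services: \$$eks vs \$$ecs"
    break
  fi
done
# → EKS undercuts ECS at 8 services: $233 vs $240
```

Under these assumptions the crossover comes at eight services; a cheaper per-service EKS figure (better bin-packing, larger Spot pool) pulls it earlier.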
Key Takeaways
- ECS for simplicity — less to learn, fewer moving parts, integrated with AWS
- EKS for ecosystem — service meshes, GitOps, advanced scheduling, portability
- Fargate eliminates node management for both platforms
- Start with ECS if you’re unsure — migration to EKS is possible later
- EKS control plane cost is fixed — less relevant at scale
- Karpenter makes EKS node scaling as easy as Fargate
“ECS is the right answer 80% of the time. EKS is the right answer when you know you need it.”