AKS vs GKE vs EKS: Managed Kubernetes Compared
Compare Azure AKS, Google GKE, and AWS EKS for managed Kubernetes. Learn the differences in pricing, features, and operational experience across clouds.
Every major cloud offers managed Kubernetes, but they’re not created equal. GKE pioneered it, EKS has the AWS ecosystem, and AKS offers strong Azure integration. This guide compares all three to help you choose — or understand what you’re working with.
Quick Comparison
| Feature | EKS (AWS) | GKE (Google) | AKS (Azure) |
|---|---|---|---|
| Control plane cost | $0.10/hr (~$73/mo) | $0.10/hr (one zonal or Autopilot cluster free per billing account) | Free tier: $0; Standard tier: $0.10/hr |
| K8s version lag | ~2-3 months | Latest first | ~1-2 months |
| Autopilot/Serverless | Fargate | Autopilot | Virtual Nodes |
| Node auto-scaling | Karpenter/CA | Autopilot/CA | CA + KEDA |
| GPU support | ✅ | ✅ | ✅ |
| Windows nodes | ✅ | ✅ | ✅ (best) |
| Service mesh | App Mesh (deprecated) | Anthos Service Mesh/Istio | Istio add-on (OSM retired) |
| GitOps | Flux (self-managed) | Config Sync | Flux extension |
| Multi-cluster | Limited | Anthos | Azure Arc |
EKS: The AWS Way
Cluster Creation
# EKS cluster with managed node groups
resource "aws_eks_cluster" "main" {
  name     = "production"
  role_arn = aws_iam_role.cluster.arn
  version  = "1.29"

  vpc_config {
    subnet_ids              = var.private_subnet_ids
    endpoint_private_access = true
    endpoint_public_access  = true
    security_group_ids      = [aws_security_group.cluster.id]
  }

  enabled_cluster_log_types = ["api", "audit", "authenticator"]

  encryption_config {
    provider {
      key_arn = aws_kms_key.eks.arn
    }
    resources = ["secrets"]
  }
}

resource "aws_eks_node_group" "main" {
  cluster_name    = aws_eks_cluster.main.name
  node_group_name = "main"
  node_role_arn   = aws_iam_role.node.arn
  subnet_ids      = var.private_subnet_ids

  scaling_config {
    desired_size = 3
    min_size     = 2
    max_size     = 10
  }

  instance_types = ["t3.large"]
  capacity_type  = "SPOT"

  update_config {
    max_unavailable_percentage = 33
  }
}

# Essential add-ons
resource "aws_eks_addon" "vpc_cni" {
  cluster_name = aws_eks_cluster.main.name
  addon_name   = "vpc-cni"
}

resource "aws_eks_addon" "coredns" {
  cluster_name = aws_eks_cluster.main.name
  addon_name   = "coredns"
}

resource "aws_eks_addon" "kube_proxy" {
  cluster_name = aws_eks_cluster.main.name
  addon_name   = "kube-proxy"
}
EKS Pros & Cons
Pros:
- Deep AWS service integration (IAM, ALB, EBS, EFS)
- Karpenter for intelligent node provisioning
- Fargate for serverless pods
- Mature ecosystem
Cons:
- Control plane costs ($73/month)
- More setup required (VPC-CNI, add-ons)
- Slower K8s version adoption
- IRSA (IAM Roles for Service Accounts) is complex
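Much of IRSA's complexity comes from the per-role trust policy: every IAM role a pod assumes must federate with the cluster's OIDC provider and name the exact namespace and ServiceAccount in a `StringEquals` condition. A sketch of that trust document in Python (the account ID, provider URL, and names are placeholder values):

```python
import json

def irsa_trust_policy(account_id: str, oidc_provider: str,
                      namespace: str, service_account: str) -> dict:
    """Build the IAM trust policy an IRSA role needs.

    oidc_provider is the cluster's OIDC issuer without the https://
    prefix, e.g. "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE".
    """
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {
                "Federated": f"arn:aws:iam::{account_id}:oidc-provider/{oidc_provider}"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    # Only this exact ServiceAccount may assume the role.
                    f"{oidc_provider}:sub": f"system:serviceaccount:{namespace}:{service_account}"
                }
            },
        }],
    }

policy = irsa_trust_policy("123456789012",
                           "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE",
                           "default", "api")
print(json.dumps(policy, indent=2))
```

One such role and trust policy per ServiceAccount is what makes IRSA feel heavyweight next to GKE's Workload Identity.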
GKE: The Kubernetes-Native Way
Cluster Creation
# GKE Autopilot (fully managed)
resource "google_container_cluster" "autopilot" {
  name             = "production"
  location         = "us-central1"
  enable_autopilot = true

  network    = google_compute_network.main.id
  subnetwork = google_compute_subnetwork.gke.id

  ip_allocation_policy {
    cluster_secondary_range_name  = "pods"
    services_secondary_range_name = "services"
  }

  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = false
    master_ipv4_cidr_block  = "172.16.0.0/28"
  }

  release_channel {
    channel = "REGULAR"
  }

  workload_identity_config {
    workload_pool = "${var.project_id}.svc.id.goog"
  }
}

# GKE Standard (more control)
resource "google_container_cluster" "standard" {
  name     = "production-standard"
  location = "us-central1"

  # We manage node pools separately
  remove_default_node_pool = true
  initial_node_count       = 1

  network    = google_compute_network.main.id
  subnetwork = google_compute_subnetwork.gke.id

  workload_identity_config {
    workload_pool = "${var.project_id}.svc.id.goog"
  }

  cluster_autoscaling {
    enabled = true

    resource_limits {
      resource_type = "cpu"
      minimum       = 4
      maximum       = 100
    }
    resource_limits {
      resource_type = "memory"
      minimum       = 16
      maximum       = 400
    }

    auto_provisioning_defaults {
      oauth_scopes = ["https://www.googleapis.com/auth/cloud-platform"]
    }
  }
}

resource "google_container_node_pool" "main" {
  name     = "main"
  cluster  = google_container_cluster.standard.id
  location = "us-central1"

  autoscaling {
    min_node_count = 2
    max_node_count = 20
  }

  node_config {
    machine_type = "e2-standard-4"
    spot         = true
    oauth_scopes = ["https://www.googleapis.com/auth/cloud-platform"]

    workload_metadata_config {
      mode = "GKE_METADATA"
    }
  }
}
GKE Pros & Cons
Pros:
- Low, flat control plane cost ($0.10/hr, with one zonal or Autopilot cluster free per billing account)
- Autopilot is truly serverless Kubernetes
- Latest Kubernetes versions first
- Best-in-class cluster management UI
- Workload Identity is simpler than IRSA
Cons:
- Autopilot restricts some workloads (no privileged pods, limited host access)
- Smaller third-party ecosystem than AWS
- GCP learning curve if you're coming from AWS
AKS: The Azure-Integrated Way
Cluster Creation
resource "azurerm_kubernetes_cluster" "main" {
name = "production"
location = azurerm_resource_group.main.location
resource_group_name = azurerm_resource_group.main.name
dns_prefix = "production"
kubernetes_version = "1.29"
default_node_pool {
name = "default"
vm_size = "Standard_D4s_v3"
min_count = 2
max_count = 10
enable_auto_scaling = true
vnet_subnet_id = azurerm_subnet.aks.id
# Spot instances
priority = "Spot"
eviction_policy = "Delete"
spot_max_price = -1 # Pay market price
}
identity {
type = "SystemAssigned"
}
network_profile {
network_plugin = "azure"
network_policy = "calico"
load_balancer_sku = "standard"
}
oms_agent {
log_analytics_workspace_id = azurerm_log_analytics_workspace.main.id
}
key_vault_secrets_provider {
secret_rotation_enabled = true
}
workload_identity_enabled = true
oidc_issuer_enabled = true
auto_scaler_profile {
scale_down_delay_after_add = "10m"
scale_down_unneeded = "5m"
}
}
# Additional node pool for GPU workloads
resource "azurerm_kubernetes_cluster_node_pool" "gpu" {
name = "gpu"
kubernetes_cluster_id = azurerm_kubernetes_cluster.main.id
vm_size = "Standard_NC6s_v3"
min_count = 0
max_count = 4
enable_auto_scaling = true
node_labels = {
"nvidia.com/gpu" = "true"
}
node_taints = [
"nvidia.com/gpu=present:NoSchedule"
]
}
AKS Pros & Cons
Pros:
- Free control plane (Free tier; the Standard tier with an uptime SLA is $0.10/hr)
- Best Windows container support
- Azure AD integration out of the box
- KEDA (event-driven autoscaling) native
- Azure Arc for hybrid/multi-cloud
Cons:
- Slower to adopt new K8s features
- Azure networking can be confusing
- Less mature than GKE/EKS in some areas
Feature Deep Dive
Node Autoscaling
# EKS with Karpenter - most flexible
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64", "arm64"]
      nodeClassRef:
        name: default
  limits:
    cpu: 1000
  disruption:
    consolidationPolicy: WhenUnderutilized
---
# GKE Autopilot - automatic, no config needed.
# Just deploy pods, and nodes appear.
---
# AKS - cluster autoscaler + KEDA
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: queue-processor
spec:
  scaleTargetRef:
    name: processor
  minReplicaCount: 0
  maxReplicaCount: 100
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: orders
        messageCount: "5"
Load Balancer Integration
# EKS - AWS Load Balancer Controller
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 80
---
# GKE - GCE Ingress (built-in)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
  annotations:
    kubernetes.io/ingress.class: gce
    kubernetes.io/ingress.global-static-ip-name: api-ip
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: api
                port:
                  number: 80
---
# AKS - Azure Application Gateway (AGIC)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 80
Workload Identity
# EKS - IRSA (IAM Roles for Service Accounts)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: api
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/api-role
---
# GKE - Workload Identity
apiVersion: v1
kind: ServiceAccount
metadata:
  name: api
  annotations:
    iam.gke.io/gcp-service-account: [email protected]
---
# AKS - Workload Identity
apiVersion: v1
kind: ServiceAccount
metadata:
  name: api
  annotations:
    azure.workload.identity/client-id: "00000000-0000-0000-0000-000000000000"
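All three mechanisms work the same way underneath: the cluster's OIDC issuer vouches for the ServiceAccount, and the cloud IAM system matches a string that encodes the namespace and ServiceAccount name. A sketch of the three identifier formats (helper names are mine, not official API names):

```python
def eks_irsa_subject(namespace: str, sa: str) -> str:
    """OIDC 'sub' claim that the IAM role's trust policy matches."""
    return f"system:serviceaccount:{namespace}:{sa}"

def gke_wi_member(project_id: str, namespace: str, sa: str) -> str:
    """Principal granted roles/iam.workloadIdentityUser on the GCP SA."""
    return f"serviceAccount:{project_id}.svc.id.goog[{namespace}/{sa}]"

def aks_federated_subject(namespace: str, sa: str) -> str:
    """Subject of the federated credential on the managed identity."""
    return f"system:serviceaccount:{namespace}:{sa}"

print(gke_wi_member("my-project", "default", "api"))
# serviceAccount:my-project.svc.id.goog[default/api]
```

GKE folds the binding into a single IAM policy on the Google service account, which is why it feels simpler than maintaining a trust policy per IRSA role.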
Cost Comparison
For a typical production cluster (3 nodes, t3.large equivalent):
| Item | EKS | GKE Standard | GKE Autopilot | AKS (Free tier) |
|---|---|---|---|---|
| Control plane | ~$73/mo | ~$73/mo (first zonal cluster free) | ~$73/mo | $0 |
| Nodes (on-demand) | ~$180/mo | ~$180/mo | Pay per pod | ~$180/mo |
| Nodes (spot) | ~$60/mo | ~$60/mo | ~$60/mo (Spot pods) | ~$60/mo |
| Load balancer | ~$20/mo | ~$20/mo | ~$20/mo | ~$20/mo |
| Total (spot) | ~$153/mo | ~$153/mo (~$80 with the free cluster) | Varies | ~$80/mo |
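The spot totals are simple sums of control plane, nodes, and load balancer; a quick sketch (the figures are the rough table values, not live list prices, and GKE's $0.10/hr management fee is waived for one zonal or Autopilot cluster per billing account):

```python
# Rough monthly figures; real prices vary by region and change over time.
SPOT_NODES = 60     # ~3 spot nodes, t3.large class
LOAD_BALANCER = 20

def monthly_total(control_plane: float) -> float:
    """Sum a cluster's fixed monthly costs."""
    return control_plane + SPOT_NODES + LOAD_BALANCER

print(monthly_total(73))  # EKS (or a GKE cluster paying the management fee)
print(monthly_total(0))   # AKS Free tier, or GKE's one fee-exempt cluster
```

At a handful of clusters the control plane fee is the deciding line item; once node spend reaches thousands per month it disappears into the noise.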
Decision Framework
Choose EKS when:
- You’re already deep in AWS
- You need Karpenter’s advanced scheduling
- You want Fargate for specific serverless workloads
- Regulatory requirements favor AWS
Choose GKE when:
- You want the best Kubernetes experience
- Autopilot fits your workload
- Multi-cluster with Anthos is valuable
- You want latest K8s features first
Choose AKS when:
- You’re in Azure or using Azure services
- Windows containers are primary
- Azure AD integration is required
- You need hybrid management with Azure Arc
Key Takeaways
- GKE is the most polished: the best DX and the latest Kubernetes features first
- EKS has the ecosystem — deep AWS integration, Karpenter is excellent
- AKS is underrated — free, great Azure integration, best Windows support
- Control plane cost matters only at small scale — $73/mo is noise at scale
- All three work — pick based on your cloud, team skills, and ecosystem needs
- Autopilot/Fargate/Virtual Nodes reduce ops burden significantly
“The best managed Kubernetes is the one your team knows. Cloud-specific integrations matter more than K8s version numbers.”