RDS vs Cloud SQL vs Azure SQL: Managed Databases
Compare managed relational database services across AWS, GCP, and Azure. Learn about engines, pricing, high availability, and migration strategies.
Managed databases free you from patching, backups, and replication headaches. But each cloud has different strengths — AWS RDS offers the most engine choices, Cloud SQL provides simplicity, and Azure SQL delivers the best SQL Server experience. Let’s compare them.
Feature Comparison
| Feature | AWS RDS | Google Cloud SQL | Azure SQL Database |
|---|---|---|---|
| Engines | MySQL, PostgreSQL, MariaDB, Oracle, SQL Server, Db2 | MySQL, PostgreSQL, SQL Server | SQL Server only* |
| Max Storage | 128 TB (Aurora) | 64 TB | 100 TB (Hyperscale) |
| Read Replicas | 15 (Aurora), 5 (others) | 10 | 4 (geo-replicas) |
| Serverless | Aurora Serverless | No | Yes (vCore) |
| Multi-AZ | Yes | Regional HA | Zone-redundant |
| Cross-Region | Yes | Yes | Geo-replication |
*Azure also offers Azure Database for MySQL/PostgreSQL as separate services
AWS RDS
Creating a PostgreSQL Instance
resource "aws_db_instance" "main" {
identifier = "my-app-db"
engine = "postgres"
engine_version = "16.1"
instance_class = "db.r6g.large"
allocated_storage = 100
storage_type = "gp3"
storage_encrypted = true
kms_key_id = aws_kms_key.rds.arn
db_name = "myapp"
username = "admin"
password = var.db_password
multi_az = true
db_subnet_group_name = aws_db_subnet_group.main.name
vpc_security_group_ids = [aws_security_group.rds.id]
backup_retention_period = 30
backup_window = "03:00-04:00"
maintenance_window = "Mon:04:00-Mon:05:00"
auto_minor_version_upgrade = true
deletion_protection = true
skip_final_snapshot = false
final_snapshot_identifier = "my-app-db-final"
performance_insights_enabled = true
performance_insights_retention_period = 7
enabled_cloudwatch_logs_exports = ["postgresql", "upgrade"]
parameter_group_name = aws_db_parameter_group.main.name
tags = var.tags
}
resource "aws_db_parameter_group" "main" {
family = "postgres16"
name = "my-app-postgres16"
parameter {
name = "log_statement"
value = "all"
}
parameter {
name = "log_min_duration_statement"
value = "1000" # Log queries > 1 second
}
parameter {
name = "shared_preload_libraries"
value = "pg_stat_statements"
}
}
resource "aws_db_subnet_group" "main" {
name = "my-app-db"
subnet_ids = var.private_subnet_ids
tags = var.tags
}
Aurora Serverless v2
resource "aws_rds_cluster" "aurora" {
cluster_identifier = "my-app-aurora"
engine = "aurora-postgresql"
engine_mode = "provisioned"
engine_version = "16.1"
database_name = "myapp"
master_username = "admin"
master_password = var.db_password
serverlessv2_scaling_configuration {
min_capacity = 0.5
max_capacity = 16
}
db_subnet_group_name = aws_db_subnet_group.main.name
vpc_security_group_ids = [aws_security_group.rds.id]
storage_encrypted = true
kms_key_id = aws_kms_key.rds.arn
backup_retention_period = 30
deletion_protection = true
}
resource "aws_rds_cluster_instance" "aurora" {
count = 2
identifier = "my-app-aurora-${count.index}"
cluster_identifier = aws_rds_cluster.aurora.id
instance_class = "db.serverless"
engine = aws_rds_cluster.aurora.engine
engine_version = aws_rds_cluster.aurora.engine_version
}
Read Replicas for Scaling Reads
resource "aws_db_instance" "replica" {
count = 2
identifier = "my-app-db-replica-${count.index}"
replicate_source_db = aws_db_instance.main.identifier
instance_class = "db.r6g.large"
# Replicas can be in different AZs
availability_zone = element(var.azs, count.index)
# Performance Insights for replicas too
performance_insights_enabled = true
tags = merge(var.tags, { Role = "replica" })
}
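Replicas only help if the application actually sends reads to them. A minimal application-side sketch of read/write splitting (the DSNs are illustrative placeholders, and remember that replication lag means a read issued right after a write may not see it on a replica):

```python
import itertools

class ReadWriteRouter:
    """Route writes to the primary and reads round-robin across replicas."""

    def __init__(self, primary_dsn, replica_dsns):
        self.primary_dsn = primary_dsn
        self._replicas = itertools.cycle(replica_dsns)

    def dsn_for(self, query: str) -> str:
        # Crude heuristic: anything that is not a plain SELECT goes to the primary
        if query.lstrip().lower().startswith("select"):
            return next(self._replicas)
        return self.primary_dsn

router = ReadWriteRouter(
    "postgres://my-app-db:5432/myapp",
    ["postgres://my-app-db-replica-0:5432/myapp",
     "postgres://my-app-db-replica-1:5432/myapp"],
)
print(router.dsn_for("SELECT * FROM users"))   # goes to a replica
print(router.dsn_for("INSERT INTO users ..."))  # goes to the primary
```

Most ORMs and drivers offer this natively (e.g. multiple-database support in Django or JDBC read/write splitting), which is usually preferable to hand-rolling it.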
Google Cloud SQL
Creating a PostgreSQL Instance
resource "google_sql_database_instance" "main" {
name = "my-app-db"
database_version = "POSTGRES_16"
region = "us-central1"
settings {
tier = "db-custom-4-16384" # 4 vCPU, 16GB RAM
availability_type = "REGIONAL" # High availability
disk_type = "PD_SSD"
disk_size = 100
disk_autoresize = true
backup_configuration {
enabled = true
point_in_time_recovery_enabled = true
start_time = "03:00"
transaction_log_retention_days = 7
backup_retention_settings {
retained_backups = 30
}
}
ip_configuration {
ipv4_enabled = false
private_network = google_compute_network.main.id
authorized_networks {
name = "office"
value = var.office_ip
}
}
database_flags {
name = "log_statement"
value = "all"
}
database_flags {
name = "log_min_duration_statement"
value = "1000"
}
insights_config {
query_insights_enabled = true
query_string_length = 1024
record_application_tags = true
record_client_address = true
}
maintenance_window {
day = 1 # Monday
hour = 4
update_track = "stable"
}
}
deletion_protection = true
}
resource "google_sql_database" "main" {
name = "myapp"
instance = google_sql_database_instance.main.name
}
resource "google_sql_user" "admin" {
name = "admin"
instance = google_sql_database_instance.main.name
password = var.db_password
}
Read Replicas
resource "google_sql_database_instance" "replica" {
name = "my-app-db-replica"
database_version = "POSTGRES_16"
region = "us-east1" # Cross-region replica
master_instance_name = google_sql_database_instance.main.name
replica_configuration {
failover_target = false
}
settings {
tier = "db-custom-4-16384"
availability_type = "ZONAL"
disk_type = "PD_SSD"
disk_size = 100
ip_configuration {
ipv4_enabled = false
private_network = google_compute_network.main.id
}
}
}
Azure SQL Database
Creating a SQL Database
resource "azurerm_mssql_server" "main" {
name = "my-app-sql"
resource_group_name = azurerm_resource_group.main.name
location = azurerm_resource_group.main.location
version = "12.0"
administrator_login = "sqladmin"
administrator_login_password = var.db_password
minimum_tls_version = "1.2"
azuread_administrator {
login_username = "AzureAD Admin"
object_id = var.azure_ad_admin_object_id
}
identity {
type = "SystemAssigned"
}
tags = var.tags
}
resource "azurerm_mssql_database" "main" {
name = "myapp"
server_id = azurerm_mssql_server.main.id
collation = "SQL_Latin1_General_CP1_CI_AS"
license_type = "LicenseIncluded"
# vCore model
sku_name = "GP_Gen5_4" # General Purpose, 4 vCores
max_size_gb = 100
# Or use DTU model
# sku_name = "S3" # Standard, 100 DTUs
zone_redundant = true
short_term_retention_policy {
retention_days = 30
backup_interval_in_hours = 12
}
long_term_retention_policy {
weekly_retention = "P4W"
monthly_retention = "P12M"
yearly_retention = "P5Y"
week_of_year = 1
}
threat_detection_policy {
state = "Enabled"
email_addresses = [var.security_email]
retention_days = 30
storage_endpoint = azurerm_storage_account.logs.primary_blob_endpoint
storage_account_access_key = azurerm_storage_account.logs.primary_access_key
}
tags = var.tags
}
```hcl
# Firewall rule: the 0.0.0.0-0.0.0.0 range is Azure's convention for
# "allow Azure services", not an allow-all rule for the internet
resource "azurerm_mssql_firewall_rule" "azure_services" {
  name             = "AllowAzureServices"
  server_id        = azurerm_mssql_server.main.id
  start_ip_address = "0.0.0.0"
  end_ip_address   = "0.0.0.0"
}

# Private endpoint
resource "azurerm_private_endpoint" "sql" {
  name                = "sql-private-endpoint"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  subnet_id           = azurerm_subnet.private.id

  private_service_connection {
    name                           = "sql-connection"
    private_connection_resource_id = azurerm_mssql_server.main.id
    is_manual_connection           = false
    subresource_names              = ["sqlServer"]
  }
}
```
Serverless Tier
resource "azurerm_mssql_database" "serverless" {
name = "myapp-serverless"
server_id = azurerm_mssql_server.main.id
# Serverless configuration
sku_name = "GP_S_Gen5_2"
min_capacity = 0.5
auto_pause_delay_in_minutes = 60 # Pause after 60 min idle
max_size_gb = 100
tags = var.tags
}
Pricing Comparison
Equivalent Instances (4 vCPU, 16GB RAM, 100GB SSD)
| Service | Instance Type | Price/Month |
|---|---|---|
| RDS PostgreSQL | db.r6g.large | ~$280 |
| Cloud SQL | db-custom-4-16384 | ~$340 |
| Azure SQL | GP_Gen5_4 | ~$740 |
*Approximate on-demand prices; actual pricing varies by region and with reserved or committed-use discounts
Reserved Capacity Savings
AWS RDS:
- 1-year reserved: ~30% savings
- 3-year reserved: ~50% savings
Google Cloud SQL:
- Committed use: ~25% savings (1-year)
- Committed use: ~50% savings (3-year)
Azure SQL:
- Reserved capacity: ~33% savings (1-year)
- Reserved capacity: ~65% savings (3-year)
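As a sanity check, the discounts above translate into effective monthly prices like this (a quick sketch using the approximate on-demand figures from the pricing table):

```python
def effective_monthly(on_demand_monthly: float, discount: float) -> float:
    """Effective monthly cost after a reserved/committed-use discount."""
    return round(on_demand_monthly * (1 - discount), 2)

print(effective_monthly(280, 0.50))  # RDS, 3-year reserved      → 140.0
print(effective_monthly(340, 0.25))  # Cloud SQL, 1-year commit  → 255.0
print(effective_monthly(740, 0.65))  # Azure SQL, 3-year reserve → 259.0
```

Interestingly, at the 3-year commitment the effective prices converge far more than the on-demand prices suggest.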
High Availability Comparison
RDS Multi-AZ
Primary → Synchronous replication → Standby (different AZ)
Automatic failover (60-120 seconds)
Cloud SQL Regional HA
Primary → Synchronous replication → Standby (different zone)
Automatic failover (~120 seconds)
Azure Zone-Redundant
Primary → Synchronous commit → Replicas (3 zones)
Automatic failover (~30 seconds)
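Whichever cloud you pick, failover means a 30-120 second window where connections drop. Application code should retry transient connection errors with backoff rather than surfacing them to users; a minimal sketch (the `ConnectionError` class stands in for whatever your database driver raises):

```python
import time

def with_retry(fn, attempts=5, base_delay=1.0, retry_on=(ConnectionError,)):
    """Retry fn with exponential backoff, e.g. to ride out a failover window."""
    for attempt in range(attempts):
        try:
            return fn()
        except retry_on:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

# Demo: fail twice (as a dropped connection would during failover), then succeed
calls = {"n": 0}
def flaky_query():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("server closed the connection unexpectedly")
    return "ok"

print(with_retry(flaky_query, base_delay=0.01))  # → ok
```

With `base_delay=1.0` and five attempts, the total wait spans about 15 seconds; tune the attempts and delay to cover your provider's stated failover time.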
Migration Strategies
AWS Database Migration Service
resource "aws_dms_replication_instance" "main" {
replication_instance_id = "my-migration"
replication_instance_class = "dms.r5.large"
allocated_storage = 100
publicly_accessible = false
vpc_security_group_ids = [aws_security_group.dms.id]
replication_subnet_group_id = aws_dms_replication_subnet_group.main.id
}
resource "aws_dms_endpoint" "source" {
endpoint_id = "source-onprem"
endpoint_type = "source"
engine_name = "postgres"
server_name = var.source_db_host
port = 5432
database_name = "myapp"
username = var.source_db_user
password = var.source_db_password
ssl_mode = "require"
}
resource "aws_dms_endpoint" "target" {
endpoint_id = "target-rds"
endpoint_type = "target"
engine_name = "postgres"
server_name = aws_db_instance.main.address
port = 5432
database_name = "myapp"
username = "admin"
password = var.db_password
}
resource "aws_dms_replication_task" "migrate" {
replication_task_id = "migrate-to-rds"
replication_instance_arn = aws_dms_replication_instance.main.replication_instance_arn
source_endpoint_arn = aws_dms_endpoint.source.endpoint_arn
target_endpoint_arn = aws_dms_endpoint.target.endpoint_arn
migration_type = "full-load-and-cdc"
table_mappings = jsonencode({
rules = [{
rule-type = "selection"
rule-id = "1"
rule-name = "all-tables"
object-locator = {
schema-name = "%"
table-name = "%"
}
rule-action = "include"
}]
})
}
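Before cutting over, verify that the target has caught up. DMS has its own data-validation feature, but even a crude row-count comparison catches gross mismatches; a hedged sketch (the counts here would come from running `SELECT count(*)` per table on each side):

```python
def compare_row_counts(source: dict, target: dict) -> list:
    """Return the tables whose row counts diverge between source and target."""
    mismatches = []
    for table, count in source.items():
        if target.get(table) != count:
            mismatches.append(table)
    return mismatches

# Illustrative counts from each side of the migration
source = {"users": 10_000, "orders": 52_341}
target = {"users": 10_000, "orders": 52_340}  # CDC still catching up
print(compare_row_counts(source, target))  # → ['orders']
```

Row counts alone can miss in-place updates, so treat this as a smoke test, not a substitute for proper validation.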
Connection Pooling
PgBouncer for PostgreSQL
```yaml
# docker-compose.yml
services:
  pgbouncer:
    image: bitnami/pgbouncer:latest
    environment:
      - POSTGRESQL_HOST=${DB_HOST}
      - POSTGRESQL_PORT=5432
      - POSTGRESQL_DATABASE=myapp
      - POSTGRESQL_USERNAME=admin
      - POSTGRESQL_PASSWORD=${DB_PASSWORD}
      - PGBOUNCER_POOL_MODE=transaction
      - PGBOUNCER_MAX_CLIENT_CONN=1000
      - PGBOUNCER_DEFAULT_POOL_SIZE=20
    ports:
      - "6432:6432"
```
RDS Proxy
resource "aws_db_proxy" "main" {
name = "my-app-proxy"
debug_logging = false
engine_family = "POSTGRESQL"
idle_client_timeout = 1800
require_tls = true
role_arn = aws_iam_role.proxy.arn
vpc_security_group_ids = [aws_security_group.proxy.id]
vpc_subnet_ids = var.private_subnet_ids
auth {
auth_scheme = "SECRETS"
iam_auth = "DISABLED"
secret_arn = aws_secretsmanager_secret.db.arn
}
}
resource "aws_db_proxy_default_target_group" "main" {
db_proxy_name = aws_db_proxy.main.name
connection_pool_config {
max_connections_percent = 100
max_idle_connections_percent = 50
connection_borrow_timeout = 120
}
}
Key Takeaways
- RDS offers the most flexibility — 6 engines, Aurora for high performance
- Cloud SQL is simplest — great for PostgreSQL/MySQL workloads
- Azure SQL is best for SQL Server — native features, AD integration
- Use connection pooling — essential for serverless and high-connection scenarios
- Plan for HA from day one — Multi-AZ/Regional HA roughly doubles the cost but is worth it
“A database is the heart of your application. Spend the money on HA — explaining downtime to users costs more than the extra instance.”