Docker networking is one of those topics that seems simple until it isn’t. You start with docker run, everything works, then you need containers to talk to each other and suddenly you’re debugging iptables rules. Let’s fix that.

Network Drivers Overview

Docker supports multiple network drivers:

| Driver  | Use Case                            | Container-to-Container | External Access  |
|---------|-------------------------------------|------------------------|------------------|
| bridge  | Default, single host                | Yes (same network)     | Via port mapping |
| host    | Performance, no isolation           | N/A (host network)     | Direct           |
| none    | Complete isolation                  | No                     | No               |
| overlay | Multi-host (Swarm/K8s)              | Yes (across hosts)     | Via ingress      |
| macvlan | Legacy/physical network integration | Yes                    | Direct L2        |

Bridge Networks

Default Bridge

Every Docker installation has a default bridge network:

# List networks
docker network ls

# Inspect default bridge
docker network inspect bridge
# Containers on default bridge can communicate via IP
docker run -d --name web nginx
docker run -d --name api my-api

# Get web's IP
docker inspect web --format '{{.NetworkSettings.IPAddress}}'
# Output: 172.17.0.2

# From api, can reach web via IP (but not name!)
docker exec api curl http://172.17.0.2

Problem: the default bridge provides no DNS-based service discovery. Containers can reach each other only by IP (or via the deprecated --link flag), and those IPs aren't stable across restarts.

User-Defined Bridge Networks

# Create custom bridge network
docker network create my-network

# Run containers on custom network
docker run -d --name web --network my-network nginx
docker run -d --name api --network my-network my-api

# Now DNS works!
docker exec api curl http://web
# Or
docker exec api ping web

Custom Network Configuration

# Create with specific subnet
docker network create \
  --driver bridge \
  --subnet 10.10.0.0/16 \
  --gateway 10.10.0.1 \
  --ip-range 10.10.1.0/24 \
  my-network

# Run container with specific IP
docker run -d \
  --name web \
  --network my-network \
  --ip 10.10.1.100 \
  nginx
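As a sanity check on the subnet/ip-range split above, CIDR block sizes are just powers of two:

```shell
# Address count in a CIDR block is 2^(32 - prefix_length)
subnet_hosts=$(( 1 << (32 - 16) ))   # the /16 subnet
range_hosts=$(( 1 << (32 - 24) ))    # the /24 ip-range
echo "subnet: $subnet_hosts addresses; auto-assignment pool: $range_hosts"
# → subnet: 65536 addresses; auto-assignment pool: 256
```

The --ip-range confines automatically assigned addresses to 10.10.1.0/24, while --ip can still pin a container to any free address in the /16.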

Connecting to Multiple Networks

# Container can be on multiple networks
docker network create frontend
docker network create backend

docker run -d --name api --network backend my-api

# Connect to additional network
docker network connect frontend api

# Now api can reach containers on both networks

Port Mapping

# Map host port to container port
docker run -d -p 8080:80 nginx
# Access: http://localhost:8080

# Map to specific interface
docker run -d -p 127.0.0.1:8080:80 nginx
# Only accessible from localhost

# Random host port
docker run -d -p 80 nginx
docker port <container_id>
# Output: 80/tcp -> 0.0.0.0:32768

# UDP port mapping
docker run -d -p 5000:5000/udp my-udp-app

Host Network Mode

The container shares the host’s network namespace. (Note: this works as described on Linux hosts; Docker Desktop runs containers inside a VM, so host mode behaves differently there.)

# No network isolation - container uses host's network directly
docker run -d --network host nginx

# No port mapping needed - nginx listens on host's port 80
curl http://localhost

When to Use Host Network

  • Performance: No NAT overhead
  • Applications that need host IP: Some clustering software
  • Debugging: Easier to trace network issues

When NOT to Use Host Network

  • Production web services: No isolation
  • Multi-tenant environments: Security risk
  • Port conflicts: Can’t run multiple instances

None Network

Complete network isolation:

# No network interfaces except loopback
docker run -d --network none my-app

# Verify
docker exec <container_id> ip addr
# Only shows lo (loopback)

Use case: Security-sensitive batch processing that shouldn’t have network access.

Overlay Networks (Multi-Host)

Overlay networks span multiple Docker hosts and require Swarm mode (Kubernetes doesn’t use Docker overlay networks; it has its own networking model).

Initialize Swarm

# On manager node
docker swarm init --advertise-addr 192.168.1.100

# On worker nodes
docker swarm join --token SWMTKN-xxx 192.168.1.100:2377

Create Overlay Network

# Create overlay network
docker network create \
  --driver overlay \
  --attachable \
  my-overlay

# Deploy service
docker service create \
  --name web \
  --network my-overlay \
  --replicas 3 \
  nginx

Encrypted Overlay

# Enable encryption for overlay traffic
docker network create \
  --driver overlay \
  --opt encrypted \
  secure-overlay

Macvlan Networks

Macvlan assigns each container its own MAC address, so it appears as a physical device on your network:

# Create macvlan network
docker network create \
  --driver macvlan \
  --subnet 192.168.1.0/24 \
  --gateway 192.168.1.1 \
  -o parent=eth0 \
  my-macvlan

# Container gets IP on physical network
docker run -d \
  --name web \
  --network my-macvlan \
  --ip 192.168.1.200 \
  nginx

# Accessible directly on LAN at 192.168.1.200

Use case: Legacy applications that expect to be on the physical network segment.

Limitation: by design, the host can’t communicate with its own macvlan containers directly; the usual workaround is to add a macvlan sub-interface on the host and route through it.

Docker Compose Networking

Default Network

# docker-compose.yml
version: "3.9"

services:
  web:
    image: nginx
    ports:
      - "80:80"
  
  api:
    image: my-api
    # Can reach web by service name
    environment:
      - NGINX_URL=http://web:80
  
  db:
    image: postgres
    # Only api should reach db (see custom networks below)

# Compose creates a default network named <project>_default
# All services on it can reach each other by service name

Custom Networks

version: "3.9"

services:
  web:
    image: nginx
    networks:
      - frontend
    ports:
      - "80:80"
  
  api:
    image: my-api
    networks:
      - frontend
      - backend
    environment:
      - DB_HOST=db
  
  db:
    image: postgres
    networks:
      - backend
    # db is NOT on frontend - web can't reach it

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true  # No external access

External Networks

version: "3.9"

services:
  api:
    image: my-api
    networks:
      - existing-network

networks:
  existing-network:
    external: true  # Must already exist
    name: my-network

Network Troubleshooting

Inspect Network

# List containers on a network
docker network inspect my-network

# Check container's networks
docker inspect <container> --format '{{json .NetworkSettings.Networks}}' | jq

DNS Debugging

# Check DNS resolution
docker run --rm --network my-network busybox nslookup web

# Check the DNS server (containers on user-defined networks use
# Docker's embedded DNS resolver at 127.0.0.11)
docker exec <container> cat /etc/resolv.conf

Network Connectivity

# Test connectivity from container
docker run --rm --network my-network nicolaka/netshoot \
  curl -v http://web:80

# Full network debugging toolkit
docker run -it --rm --network my-network nicolaka/netshoot

# Inside netshoot:
# ping web
# curl http://web
# traceroute web
# tcpdump -i eth0
# iptables -L -n

Host Network Debugging

# List Docker-related iptables NAT rules
sudo iptables -t nat -L -n | grep -i docker

# Check port forwarding
sudo iptables -t nat -L DOCKER -n

# Verify port is listening
sudo ss -tlnp | grep docker

Network Performance

Benchmark

# Server container
docker run -d --name iperf-server --network my-network networkstatic/iperf3 -s

# Client container
docker run --rm --network my-network networkstatic/iperf3 -c iperf-server

# Compare with host network
docker run -d --name iperf-host --network host networkstatic/iperf3 -s
docker run --rm --network host networkstatic/iperf3 -c localhost

MTU Configuration

# Set MTU on network
docker network create --driver bridge --opt com.docker.network.driver.mtu=9000 high-mtu-net

# Or in daemon.json
# /etc/docker/daemon.json
{
  "mtu": 1400
}
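Why lower the MTU? Tunneling drivers wrap each frame in extra headers, so the container-side MTU must leave room for them. A quick calculation for VXLAN (used by overlay networks) over a standard 1500-byte link; encrypted overlays need additional headroom for IPsec:

```shell
# VXLAN encapsulation adds 50 bytes:
# outer Ethernet (14) + IP (20) + UDP (8) + VXLAN (8)
host_mtu=1500
vxlan_overhead=50
echo "max safe container MTU: $(( host_mtu - vxlan_overhead ))"
# → max safe container MTU: 1450
```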

IPv6

# Enable IPv6 in Docker daemon
# /etc/docker/daemon.json
{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}

# Create IPv6-enabled network
docker network create \
  --ipv6 \
  --subnet "2001:db8:2::/64" \
  my-ipv6-net

DNS Configuration

Custom DNS Server

# Per-container
docker run --dns 8.8.8.8 --dns 8.8.4.4 nginx

# Docker daemon default
# /etc/docker/daemon.json
{
  "dns": ["8.8.8.8", "8.8.4.4"]
}

DNS Search Domains

docker run --dns-search example.com nginx
# Container can resolve "web" as "web.example.com"

Custom /etc/hosts

docker run --add-host db:192.168.1.50 my-app
# Adds "192.168.1.50 db" to container's /etc/hosts

Security Considerations

Network Isolation

# Compose with isolated backend
version: "3.9"

services:
  web:
    networks:
      - frontend
  
  api:
    networks:
      - frontend
      - backend
  
  db:
    networks:
      - backend

networks:
  frontend:
  backend:
    internal: true  # Cannot reach external networks

Limit Exposed Ports

# Bind to localhost only
docker run -p 127.0.0.1:3000:3000 my-app

# Don't expose unnecessary ports in production

Disable Inter-Container Communication

# Prevent all inter-container traffic on bridge
# /etc/docker/daemon.json
{
  "icc": false
}

# Containers on the default bridge can then talk only via legacy links;
# user-defined bridges have an equivalent per-network option:
# -o com.docker.network.bridge.enable_icc=false

Common Patterns

Reverse Proxy Pattern

version: "3.9"

services:
  nginx:
    image: nginx
    ports:
      - "80:80"
      - "443:443"
    networks:
      - frontend
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
  
  app1:
    image: app1
    networks:
      - frontend
    expose:
      - "3000"  # Only internal, no host port
  
  app2:
    image: app2
    networks:
      - frontend
    expose:
      - "3000"

networks:
  frontend:
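The compose file above mounts ./nginx.conf but doesn’t show it. A minimal sketch, assuming the upstream names match the compose service names and the ports from the expose entries:

```nginx
events {}

http {
  upstream apps {
    server app1:3000;   # resolved by Docker's network DNS
    server app2:3000;
  }

  server {
    listen 80;
    location / {
      proxy_pass http://apps;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
    }
  }
}
```

Because nginx and the apps share the frontend network, the proxy reaches them by service name; no host ports are published for the apps themselves.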

Database Isolation

version: "3.9"

services:
  api:
    networks:
      - app
      - db
  
  postgres:
    networks:
      - db
  
  redis:
    networks:
      - db

networks:
  app:
  db:
    internal: true  # No external access

When to Use Each Driver

| Scenario                           | Driver                |
|------------------------------------|-----------------------|
| Local development                  | bridge (user-defined) |
| Single container, need performance | host                  |
| Multi-container app                | bridge (user-defined) |
| Cross-host communication           | overlay               |
| Legacy app needs physical network  | macvlan               |
| Maximum isolation                  | none                  |

Key Takeaways

  1. Always use user-defined bridge networks — default bridge has no DNS
  2. Bind to localhost for development — don’t expose to 0.0.0.0 unnecessarily
  3. Use internal networks for databases — no external access needed
  4. Host network trades isolation for performance — use sparingly
  5. Overlay networks require Swarm — or use Kubernetes networking instead
  6. Use netshoot for debugging — it has all the tools you need

Docker networking is the foundation of container communication. Get it right, and your multi-container applications just work. Get it wrong, and you’ll spend hours debugging iptables rules.