
Network Drivers

Every Docker container connects to a network driver that determines how it communicates. The driver you choose affects isolation, performance, and how services discover each other.

The Four Drivers

```mermaid
flowchart TD
    subgraph bridge["Bridge (Default)"]
        direction LR
        B1["Container A"] <--> B2["Container B"]
    end
    subgraph host["Host"]
        H1["Container shares<br/>host network stack"]
    end
    subgraph none["None"]
        N1["Container has<br/>no network"]
    end
    subgraph macvlan["Macvlan"]
        M1["Container gets own<br/>MAC + IP on LAN"]
    end

    style bridge fill:#e3f2fd,stroke:#1565c0
    style host fill:#fff3e0,stroke:#ef6c00
    style none fill:#f5f5f5,stroke:#9e9e9e
    style macvlan fill:#e8f5e9,stroke:#2e7d32
```
| Driver | How It Works | Best For |
| --- | --- | --- |
| `bridge` | Isolated virtual network on a single host. Containers communicate via Docker DNS | Most workloads. Default and recommended |
| `host` | Container shares the host's network namespace directly. No port mapping needed | Performance-critical apps needing minimal network overhead |
| `none` | Container has no network interfaces (only loopback) | Isolated batch jobs, security-sensitive processing |
| `macvlan` | Container gets its own MAC address and appears as a physical device on the LAN | Legacy integrations requiring direct L2 network access |

Bridge Networks (Use This by Default)

Bridge is the standard choice. Containers on the same bridge network can communicate with each other, while remaining isolated from containers on other networks.

Default Bridge vs User-Defined Bridge

| Feature | Default bridge | User-defined bridge |
| --- | --- | --- |
| DNS resolution by name | No (must use `--link`, deprecated) | Yes (automatic) |
| Isolation between groups | All containers on one network | Create separate networks per group |
| Connect/disconnect live | No | Yes |

Always create a user-defined bridge instead of using the default:

```bash
# Create a network
docker network create app-net

# Run containers on it
docker run -d --name api --network app-net my-api:1.0.0
docker run -d --name db --network app-net postgres:16

# api can reach db by name:
docker exec api ping db
```
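To see the DNS difference for yourself, the contrast can be sketched as follows. The network name `demo-net` and container name `target` are illustrative; the sketch assumes a local Docker daemon and skips the demo otherwise.

```bash
# Hypothetical demo: name resolution works on a user-defined bridge,
# but not on the default bridge.
if command -v docker >/dev/null 2>&1; then
  docker network create demo-net
  docker run -d --rm --name target --network demo-net alpine sleep 60
  # Resolves via Docker's embedded DNS on the user-defined bridge:
  docker run --rm --network demo-net alpine ping -c 1 target
  # The default bridge has no DNS, so the same lookup fails:
  docker run --rm alpine ping -c 1 target || echo "no DNS on default bridge"
  # Clean up (the network can only be removed once the container is gone)
  docker rm -f target >/dev/null 2>&1
  docker network rm demo-net >/dev/null 2>&1 || true
  echo "demo complete"
else
  echo "docker not available; skipping"
fi
```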

Visualizing a Bridge Network

```mermaid
flowchart TD
    subgraph Host["Docker Host (192.168.1.100)"]
        subgraph Bridge["app-net (172.18.0.0/16)"]
            C1["api (172.18.0.2)"]
            C2["db (172.18.0.3)"]
        end
        ETH0["Host Interface"]
    end
    Internet <--> ETH0
    ETH0 <-->|"Published ports only"| Bridge
    C1 <--> C2
```

Host Network

Host mode removes Docker's network isolation layer. The container uses the host's IP and ports directly:

```bash
docker run -d --network host nginx
# nginx is now listening on the host's port 80 directly
```

| Advantage | Disadvantage |
| --- | --- |
| No port mapping overhead | No port isolation -- conflicts with host services |
| Slightly lower latency | Cannot run two containers on the same port |
| Simpler for debugging | Reduced security (no network namespace) |

Use host mode only when you have a specific performance or architectural need, not as a convenience shortcut.
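A quick way to see what host mode changes is to compare the interfaces each mode exposes to the container. This is a sketch assuming a local Docker daemon; it skips the demo otherwise.

```bash
if command -v docker >/dev/null 2>&1; then
  # Host mode: the container sees the host's real interfaces...
  docker run --rm --network host alpine ip -o link show
  # ...while the default bridge shows only lo plus a single veth-backed eth0:
  docker run --rm alpine ip -o link show
  echo "demo complete"
else
  echo "docker not available; skipping"
fi
```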

None Network

None mode disables all networking. The container has only a loopback interface:

```bash
docker run --rm --network none alpine ip addr
# Only shows lo (127.0.0.1)
```

Use for jobs that should never make network connections.
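The isolation can be confirmed with a sketch like the following (assumes a local Docker daemon; the demo is skipped otherwise):

```bash
if command -v docker >/dev/null 2>&1; then
  # Only the loopback interface should be listed:
  docker run --rm --network none alpine ip -o link show
  # Any outbound request fails -- there is no route and no DNS:
  docker run --rm --network none alpine wget -q -T 2 http://example.com \
    || echo "no network, as expected"
else
  echo "docker not available; skipping"
fi
```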

Network Segmentation Pattern

Use multiple bridge networks to control which services can communicate:

```yaml
services:
  web:
    image: nginx:alpine
    networks: [front, back]
    ports:
      - "80:80"
  api:
    image: my-api:1.0.0
    networks: [back]
  db:
    image: postgres:16
    networks: [back]

networks:
  front: {}
  back: {}
```

In this setup, `web` can reach `api` and `db`, but external traffic can only reach `web`. The database is never exposed.
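The segmentation can be spot-checked with a sketch like this, assuming the Compose stack above is running and the commands are run from the project directory so Compose resolves the right project (service names `web` and `db` come from the compose file):

```bash
if command -v docker >/dev/null 2>&1; then
  # web and db share the "back" network, so db resolves by name:
  docker compose exec web ping -c 1 db
  # db publishes no ports, so nothing is reachable from the host side:
  docker compose port db 5432 || echo "db is not exposed, as expected"
else
  echo "docker not available; skipping"
fi
```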

Managing Networks

```bash
# List all networks
docker network ls

# Create a network
docker network create my-net

# Inspect a network (see connected containers)
docker network inspect my-net

# Connect a running container to a network
docker network connect my-net my-container

# Disconnect a container
docker network disconnect my-net my-container

# Remove a network (must have no connected containers)
docker network rm my-net
```
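`docker network inspect` emits verbose JSON; a Go template can narrow it to just the attached containers. A sketch, reusing the `my-net` name from above and assuming a local Docker daemon:

```bash
if command -v docker >/dev/null 2>&1; then
  # Print only the names of containers attached to my-net:
  docker network inspect my-net \
    --format '{{range .Containers}}{{.Name}} {{end}}' \
    || echo "network my-net not found"
else
  echo "docker not available; skipping"
fi
```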

Key Takeaways

  • User-defined bridge is the right default for almost all workloads -- it provides DNS, isolation, and easy management.
  • The default bridge network lacks DNS discovery. Always create your own.
  • Use host mode only for specific performance needs, not convenience.
  • Use multiple networks to segment services (frontend/backend) and limit blast radius.
  • Use `docker network inspect` to verify which containers are connected.

What's Next