Docker Networking Deep Dive: Bridge vs Host vs Macvlan vs Overlay (Advanced Guide)
Docker networking looks simple at first—containers get an IP, ports get published, and traffic flows. In advanced setups, though, the choice of network driver (bridge, host, macvlan, overlay) affects performance, isolation, observability, routing, service discovery, and even how your LAN sees your containers.
This guide goes deep into how each driver works, when to use it, and how to validate behavior with real commands.
Table of Contents
- Prerequisites and Lab Setup
- Mental Model: Namespaces, veth pairs, and Linux bridges
- Inspecting Docker Networking Like a Pro
- Bridge Networks (default and user-defined)
- Host Networking
- Macvlan Networking
- Overlay Networking (Swarm)
- Performance, Isolation, and Use-Case Matrix
- Hands-on Mini Labs
- Cleanup
Prerequisites and Lab Setup
This tutorial assumes:
- Linux host (Ubuntu/Debian/CentOS/etc.) with Docker Engine installed.
- You have root or sudo access.
- Basic familiarity with containers and docker run.
Check versions:
docker version
docker info
uname -a
Install a few tools (optional but strongly recommended):
sudo apt-get update
sudo apt-get install -y iproute2 iptables tcpdump dnsutils net-tools jq
Notes:
- On newer distros, Docker may use nftables under the hood; iptables -L still often works via compatibility layers.
- If you're on macOS/Windows Docker Desktop, networking behaves differently because containers run inside a VM. Bridge/host/macvlan semantics won't match Linux exactly. Overlay still works in Swarm, but with extra layers.
Mental Model: Namespaces, veth pairs, and Linux bridges
Understanding Docker networking starts with Linux networking primitives:
- Network namespace: each container typically gets its own isolated network stack (interfaces, routes, iptables).
- veth pair: a virtual ethernet cable with two ends. One end is placed in the container namespace (e.g., eth0), the other stays on the host (often named like vethXXXX).
- Linux bridge: a virtual switch on the host (e.g., docker0 for the default bridge). It forwards frames between veth endpoints.
- NAT (masquerading): allows container IPs (private subnets) to reach external networks through the host's IP.
- Port publishing: -p 8080:80 adds DNAT rules so traffic to host port 8080 is forwarded to container port 80.
A key idea: bridge networking is L2 inside the host, then typically L3/NAT to the outside world. Host networking removes the namespace boundary. Macvlan puts containers directly on your LAN at L2. Overlay creates a virtual L2/L3 network across multiple hosts, typically via VXLAN encapsulation.
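These primitives can be reproduced by hand, which is a good way to internalize what Docker automates on a bridge network. A minimal sketch to try on a disposable Linux host (requires root; the names demo0, vethh/vethc, br-demo and the 10.200.0.0/24 subnet are arbitrary choices for this demo, not anything Docker uses):

```
# Create a namespace, a veth pair, and a bridge -- roughly what Docker
# sets up for each container on a bridge network. Requires root.
sudo ip netns add demo0
sudo ip link add vethh type veth peer name vethc
sudo ip link set vethc netns demo0               # "container" end of the cable
sudo ip link add br-demo type bridge
sudo ip link set vethh master br-demo            # host end plugs into the "switch"
sudo ip addr add 10.200.0.1/24 dev br-demo
sudo ip link set br-demo up
sudo ip link set vethh up
sudo ip netns exec demo0 ip addr add 10.200.0.2/24 dev vethc
sudo ip netns exec demo0 ip link set vethc up
sudo ip netns exec demo0 ping -c 1 10.200.0.1    # namespace -> host over the bridge
# Cleanup:
sudo ip netns del demo0 && sudo ip link del br-demo
```

Add a masquerade rule on the host and the namespace can reach the outside world too, which is exactly the bridge-plus-NAT picture described above.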
Inspecting Docker Networking Like a Pro
List Docker networks:
docker network ls
Inspect a network:
docker network inspect bridge
Inspect a container’s network settings:
docker run -d --name ntest nginx:alpine
docker inspect ntest --format '{{json .NetworkSettings}}' | jq .
Inside a container, check interfaces and routes:
docker exec -it ntest sh -c "ip a; echo '---'; ip route; echo '---'; cat /etc/resolv.conf"
On the host, see Docker-created links and bridges:
ip link show
ip addr show
ip route show
See iptables rules Docker installs (classic view):
sudo iptables -t nat -S
sudo iptables -S
Capture traffic (example: see DNS queries from a container):
sudo tcpdump -i any -n port 53
Bridge Networks (default and user-defined)
Bridge networking is the most common mode for single-host Docker.
Default bridge vs user-defined bridge
Docker creates a default bridge network named bridge backed by a Linux bridge interface docker0.
- Default bridge:
- Older behavior and limitations.
- Containers can reach each other by IP, but name-based discovery is limited (unless you use the legacy --link flag).
- User-defined bridge:
- Built-in DNS-based service discovery (containers can resolve each other by name).
- Better isolation and easier management.
- You can define subnets, gateways, and options.
Create a user-defined bridge:
docker network create mybr
docker network inspect mybr
Run two containers on it:
docker run -d --name web --network mybr nginx:alpine
docker run -it --rm --name client --network mybr alpine:3.20 sh
Inside client, test DNS and connectivity:
apk add --no-cache curl bind-tools
nslookup web
curl -I http://web
You should see web resolve to an IP on the mybr subnet.
Port publishing and NAT
Bridge networks usually require port publishing to accept inbound traffic from outside the host.
Run nginx and publish port 8080 on the host:
docker run -d --name pubweb --network mybr -p 8080:80 nginx:alpine
curl -I http://127.0.0.1:8080
What happens under the hood:
- Docker adds NAT rules so traffic to HOST_IP:8080 is DNAT'ed to CONTAINER_IP:80.
- Outbound traffic from containers is SNAT'ed/masqueraded so it appears to come from the host.
Look at NAT rules:
sudo iptables -t nat -S | sed -n '1,200p'
You’ll typically find chains like DOCKER and rules referencing the container IP.
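The exact output varies by Docker version and distro, but for the pubweb example the relevant rules typically have this shape (the bridge interface name and container IP here are illustrative, not what you will see verbatim):

```
-A POSTROUTING -s 172.18.0.0/16 ! -o br-1234567890ab -j MASQUERADE
-A DOCKER ! -i br-1234567890ab -p tcp -m tcp --dport 8080 -j DNAT --to-destination 172.18.0.2:80
```

The first rule masquerades outbound container traffic; the second DNATs inbound host-port traffic to the container.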
DNS and service discovery on user-defined bridge
Docker runs an embedded DNS server for user-defined networks (commonly at 127.0.0.11 inside containers). This enables:
- Name resolution by container name
- Network aliases
- Round-robin responses for multiple containers with the same alias (useful for simple load distribution)
Example with aliases:
docker run -d --name api1 --network mybr --network-alias api hashicorp/http-echo -text="api1"
docker run -d --name api2 --network mybr --network-alias api hashicorp/http-echo -text="api2"
docker run -it --rm --network mybr alpine:3.20 sh -c 'apk add --no-cache curl bind-tools; for i in $(seq 1 6); do curl -s http://api:5678; echo; done'
You may see responses alternate depending on DNS caching behavior.
Tuning bridge networks
Create a bridge with a custom subnet and gateway:
docker network create \
--driver bridge \
--subnet 172.28.0.0/16 \
--gateway 172.28.0.1 \
mybr2
Attach a container and check its IP:
docker run -it --rm --network mybr2 alpine:3.20 sh -c "ip -4 addr show eth0; ip route"
You can also control whether containers can reach the outside world (via --internal):
docker network create --internal isolatedbr
docker run -it --rm --network isolatedbr alpine:3.20 sh -c "ip route; ping -c 1 1.1.1.1 || true"
An internal network prevents external routing/NAT by design.
Common pitfalls
- "I can't reach the container IP from another machine": container IPs on bridge networks are usually private to the host. Use -p to publish ports, or use macvlan/overlay depending on your needs.
- Port conflicts: -p 80:80 fails if the host already uses port 80. Use a different host port, or host networking carefully.
- Hairpin NAT (accessing published ports from the same host): usually works, but can be affected by firewall rules and distro defaults.
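To avoid the port-conflict pitfall, it helps to check whether a host port is already taken before publishing to it. A small sketch using ss from iproute2 (port 80 is just an example value):

```shell
# Report whether anything on the host is already listening on TCP $port.
port=80
if ss -lnt "( sport = :$port )" 2>/dev/null | grep -q LISTEN; then
  status="port $port in use"
else
  status="port $port free"
fi
echo "$status"
```

If the port is in use, pick another host port for -p, or find the owner with `sudo ss -lntp`.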
Host Networking
What it really means
Host networking means the container does not get its own network namespace. It shares the host’s network stack:
- No separate container IP (it uses the host’s interfaces).
- No port publishing (
-p) needed or allowed in the same way—services bind directly to host ports. - Very low overhead: no veth, no bridge, less NAT.
Run a container with host networking:
docker run -d --name hostweb --network host nginx:alpine
Now nginx is listening on the host network. Test:
curl -I http://127.0.0.1:80
If port 80 is already in use on the host, the container will fail (or nginx will fail to start).
Pros/cons and security implications
Pros
- Best performance and lowest latency (no NAT/bridge overhead).
- Simplifies some network-heavy apps (e.g., packet capture, routing daemons, certain monitoring agents).
- Useful when you want the container to behave like a native host process.
Cons
- Reduced isolation: container can see host interfaces and may interact with host network services.
- Port collisions are common; you must manage ports carefully.
- Harder to run multiple instances of the same service on standard ports.
Security note: host networking increases blast radius. Combine with least-privilege settings (drop capabilities, read-only FS, etc.) when possible.
Validation commands
Compare network namespaces:
docker run -d --name bridged --network bridge nginx:alpine
docker run -d --name hosted --network host nginx:alpine
# On the host, list listening ports:
sudo ss -lntp | sed -n '1,120p'
If you had published a port on the bridged container with -p, you'd see docker-proxy listening and/or NAT rules handling it. In host mode, nginx binds directly to the host port.
Macvlan Networking
Why macvlan exists
Macvlan lets containers appear as first-class devices on your physical LAN:
- Each container gets its own IP on the LAN subnet.
- Each container gets its own MAC address.
- Other machines on the LAN can reach containers directly—no port publishing required.
This is useful for:
- Legacy apps that require being on the same L2 network.
- Network appliances (DHCP/TFTP in lab contexts, monitoring sensors).
- Avoiding NAT and simplifying inbound routing.
Modes and the “host can’t talk to container” issue
Macvlan has multiple modes; the most common is bridge mode. A classic gotcha:
- With macvlan, the host typically cannot directly communicate with its macvlan containers via the parent interface. This is a Linux macvlan behavior (host and macvlan endpoints are isolated at L2 on the same parent).
You can work around it by creating a macvlan sub-interface on the host and routing through it (shown below).
Creating a macvlan network (bridge mode)
You need:
- A parent interface connected to your LAN (e.g., eth0).
- A subnet on your LAN you can allocate container IPs from.
- Ideally, a small reserved IP range to avoid conflicts with DHCP.
Example (adjust to your environment):
- LAN subnet: 192.168.10.0/24
- Gateway/router: 192.168.10.1
- Parent interface: eth0
- Reserved IP range for containers: 192.168.10.200-192.168.10.207 (the /29 used below)
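It's worth sanity-checking what an --ip-range CIDR actually covers before reserving it. A quick pure-shell expansion (assumes the range doesn't cross a /24 boundary, as here):

```shell
# Expand an --ip-range CIDR such as 192.168.10.200/29 into first/last address.
cidr=192.168.10.200/29
base=${cidr%/*}            # 192.168.10.200
prefix=${cidr#*/}          # 29
net=${base%.*}             # 192.168.10
start=${base##*.}          # 200
size=$(( 1 << (32 - prefix) ))
first="$net.$start"
last="$net.$(( start + size - 1 ))"
echo "$cidr covers $first - $last ($size addresses)"
```

So a /29 hands Docker 8 addresses (.200-.207); exclude that block from your DHCP pool.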
Create the macvlan network:
docker network create -d macvlan \
--subnet=192.168.10.0/24 \
--gateway=192.168.10.1 \
--ip-range=192.168.10.200/29 \
-o parent=eth0 \
maclan0
Run a container on it:
docker run -d --name lanweb --network maclan0 nginx:alpine
docker inspect lanweb --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
From another machine on the same LAN, you should be able to:
curl -I http://192.168.10.200
No -p required because the container is directly on the LAN.
Fixing host-to-macvlan-container connectivity
If the host cannot reach lanweb by its macvlan IP, add a macvlan interface on the host:
- Create a macvlan interface linked to the same parent:
sudo ip link add macvlan-host link eth0 type macvlan mode bridge
- Assign an IP in the macvlan subnet (choose an unused IP in your reserved range or outside DHCP):
sudo ip addr add 192.168.10.250/24 dev macvlan-host
sudo ip link set macvlan-host up
- Add a route (often not strictly necessary if the IP is on-link, but explicit routing can help clarity):
sudo ip route add 192.168.10.200/29 dev macvlan-host
Now test from the host:
curl -I http://192.168.10.200
ping -c 2 192.168.10.200
If it still fails, check:
- Your switch port security / MAC limits (macvlan creates multiple MACs on one physical port).
- VLAN configuration.
- Firewall rules.
Operational considerations
- IP management: Docker won't coordinate with your DHCP server. Use --ip-range, static IPs, or DHCP reservations carefully.
- Switch limitations: Some managed switches limit the number of MAC addresses per port (port security). Macvlan can trip this.
- Promiscuous mode: Some environments require it for multiple MACs on one NIC.
- Not ideal for laptops/Wi-Fi: Many Wi-Fi drivers/APs don’t like multiple MACs per client; macvlan may not work reliably on wireless.
Overlay Networking (Swarm)
Overlay networking is designed for multi-host container networking. It creates a virtual network spanning multiple Docker hosts, allowing containers/services on different nodes to communicate as if on the same subnet.
Overlay networks are most commonly used with Docker Swarm (built into Docker Engine). Kubernetes uses different networking (CNI plugins), but the underlying concepts (encapsulation, service discovery) are similar.
How overlay works (VXLAN, encryption, control plane)
Key concepts:
- VXLAN encapsulation: Container packets are wrapped inside UDP (often port 4789) and forwarded between hosts. This forms a virtual L2 segment across L3 infrastructure.
- Control plane: Nodes exchange network state (who has which endpoints) using Swarm’s management plane.
- Service discovery: Swarm provides internal DNS and VIP-based load balancing.
- Encryption (optional): --opt encrypted enables IPsec encryption for overlay traffic (adds overhead but protects traffic on untrusted networks).
Ports typically involved (ensure firewalls allow them between nodes):
- TCP 2377 (Swarm management)
- TCP/UDP 7946 (node communication)
- UDP 4789 (VXLAN data plane)
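On Ubuntu hosts running ufw, for example, opening these between nodes might look like the following (10.0.0.0/24 is a placeholder for your nodes' subnet; restricting the source is optional but recommended):

```
sudo ufw allow proto tcp from 10.0.0.0/24 to any port 2377   # Swarm management
sudo ufw allow from 10.0.0.0/24 to any port 7946             # node gossip (TCP and UDP)
sudo ufw allow proto udp from 10.0.0.0/24 to any port 4789   # VXLAN data plane
```

On firewalld-based distros the equivalent is a set of `firewall-cmd --add-port` rules; the port list is the same.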
Creating an overlay network and services
Initialize Swarm (on the first node):
docker swarm init
Create an overlay network:
docker network create -d overlay --attachable ovl0
docker network ls
docker network inspect ovl0
Deploy a service attached to the overlay:
docker service create --name web --network ovl0 --replicas 3 nginx:alpine
docker service ls
docker service ps web
Create a client container and test DNS/service access (attachable allows standalone containers to join):
docker run -it --rm --network ovl0 alpine:3.20 sh -c "apk add --no-cache curl bind-tools; nslookup web; curl -I http://web"
Even if tasks are on different nodes, web resolves and routes correctly.
Routing mesh vs VIP vs DNSRR
Swarm has multiple traffic patterns:
- Routing mesh (ingress network)
When you publish ports on a service, Swarm can expose it on every node and route to active tasks.
Example:
docker service create --name pubweb --replicas 2 -p 8080:80 nginx:alpine
Now http://ANY_NODE_IP:8080 should reach the service, even if the task isn’t on that node.
- VIP-based internal load balancing (default)
Services get a virtual IP (VIP). DNS resolves service name to VIP; IPVS load balances to tasks.
Inspect endpoint mode:
docker service inspect web --format '{{json .Endpoint.Spec}}' | jq .
- DNS round-robin (DNSRR)
Instead of a VIP, DNS returns task IPs directly. Useful when you want client-side load balancing or need source IP preservation in some cases.
Create DNSRR service:
docker service create --name api --network ovl0 \
--endpoint-mode dnsrr \
--replicas 3 \
hashicorp/http-echo -text="hello"
Query DNS:
docker run -it --rm --network ovl0 alpine:3.20 sh -c "apk add --no-cache bind-tools; for i in $(seq 1 5); do dig +short api; done"
Troubleshooting overlay
Check Swarm status:
docker node ls
docker info | sed -n '1,120p'
Inspect networks and tasks:
docker network inspect ovl0 | jq '.[0].Peers, .[0].Containers'
docker service ps web
Look for blocked ports between nodes:
# On each node, verify listening ports:
sudo ss -lntup | egrep '2377|7946|4789' || true
Capture VXLAN traffic (between nodes):
sudo tcpdump -i any -n udp port 4789
If encryption is enabled:
docker network create -d overlay --opt encrypted --attachable ovlenc
Expect lower throughput and higher CPU usage—measure if performance matters.
Performance, Isolation, and Use-Case Matrix
| Driver | Scope | Inbound from LAN/Internet | Name discovery | Isolation | Performance | Typical use |
|---|---|---|---|---|---|---|
| bridge | single-host | via -p (NAT) | strong on user-defined | good | good | most single-host apps |
| host | single-host | direct on host ports | N/A (host net) | low | best | high-performance, monitoring, special cases |
| macvlan | single-host (LAN) | direct (no NAT) | limited to Docker DNS within network | good between containers; host isolation caveat | very good | containers as “real” LAN hosts |
| overlay | multi-host | via routing mesh or LB | strong (Swarm) | good | good (overhead from encapsulation) | multi-node services |
Rules of thumb:
- Choose user-defined bridge by default for single-host microservices.
- Choose host only when you need it and accept reduced isolation.
- Choose macvlan when containers must be reachable on the physical LAN with their own IPs.
- Choose overlay for multi-host networking in Swarm (or when you need cross-host service discovery and connectivity).
Hands-on Mini Labs
These labs reinforce the differences with observable behavior.
Lab 1: Observe veth + bridge behavior (bridge network)
- Create network and container:
docker network create labbr
docker run -d --name labng --network labbr nginx:alpine
- On host, find the container interface mapping:
docker exec -it labng sh -c "ip link show eth0; ip -4 addr show eth0"
- Inspect host-side veth:
# List veth devices (names vary):
ip link show | grep -E 'veth|docker'
- Inspect the Linux bridge:
# docker0 is for default bridge; user-defined networks create br-<id>
ip link show type bridge
bridge link
bridge fdb show | head
You should see the veth attached to a br-... interface.
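A handy trick for matching a specific container to its host-side veth: the container's eth0 reports its peer's interface index via iflink, which you can look up in the host's link list (labng is the container from this lab):

```
# eth0 in the container and the host-side veth are two ends of one pair;
# eth0's iflink equals the host peer's ifindex.
idx=$(docker exec labng cat /sys/class/net/eth0/iflink)
ip -o link | awk -v idx="$idx" -F': ' '$1 == idx {print $2}' | sed 's/@.*//'
```

This prints the host veth name (e.g., vethXXXX), which you can then feed to `tcpdump -i` to watch that one container's traffic.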
Lab 2: Prove host networking shares ports
- Run a service in host mode:
docker run -d --name hosthttp --network host nginx:alpine
- Confirm nginx listens on the host:
sudo ss -lntp | grep ':80' || true
curl -I http://127.0.0.1
- Try running a second nginx host-mode container:
docker run -d --name hosthttp2 --network host nginx:alpine
docker logs hosthttp2 | tail -n 50
The second one typically fails because port 80 is already in use.
Lab 3: Macvlan direct LAN reachability
- Create macvlan network (adjust values):
docker network create -d macvlan \
--subnet=192.168.10.0/24 \
--gateway=192.168.10.1 \
--ip-range=192.168.10.200/29 \
-o parent=eth0 \
labmac
- Run a container:
docker run -d --name macng --network labmac nginx:alpine
MACIP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' macng)
echo "$MACIP"
- From another LAN machine:
curl -I "http://$MACIP"
- If host can’t reach it, add host macvlan interface (as shown earlier) and retry.
Lab 4: Overlay across nodes (Swarm)
On manager:
docker swarm init
docker network create -d overlay --attachable labovl
docker service create --name who --network labovl --replicas 3 hashicorp/http-echo -text="whoami"
On any node (manager or worker), run a client:
docker run -it --rm --network labovl alpine:3.20 sh -c "apk add --no-cache curl; for i in $(seq 1 5); do curl -s who:5678; echo; done"
Add a published port via routing mesh:
docker service update --publish-add 9090:5678 who
curl -s http://127.0.0.1:9090
Try from another node’s IP as well; routing mesh should forward.
Cleanup
Remove containers/services/networks created in this tutorial.
docker rm -f ntest pubweb web api1 api2 bridged hosted hostweb hosthttp hosthttp2 labng lanweb macng 2>/dev/null || true
docker service rm web pubweb api who 2>/dev/null || true
docker network rm mybr mybr2 isolatedbr labbr maclan0 labmac ovl0 ovlenc labovl 2>/dev/null || true
If you initialized Swarm and want to leave it:
docker swarm leave --force
If you created a host macvlan interface:
sudo ip link del macvlan-host 2>/dev/null || true
Final Guidance: Choosing the Right Driver
- Use user-defined bridge for most single-host stacks: good isolation, built-in DNS, easy port publishing.
- Use host when you need raw performance or must bind to host networking directly, and you can manage the security/port tradeoffs.
- Use macvlan when containers must be directly addressable on your LAN with their own IP/MAC, and your network equipment supports it.
- Use overlay when you need multi-host container connectivity with service discovery and (optionally) encrypted transport.
If you share your environment (single host vs multi-host, LAN constraints, firewall rules, whether you need inbound access without port publishing), I can propose an exact network design and command set tailored to it.