
Docker Networking Deep Dive: Bridge vs Host vs Macvlan vs Overlay (Advanced Guide)


Docker networking looks simple at first—containers get an IP, ports get published, and traffic flows. In advanced setups, though, the choice of network driver (bridge, host, macvlan, overlay) affects performance, isolation, observability, routing, service discovery, and even how your LAN sees your containers.

This guide goes deep into how each driver works, when to use it, and how to validate behavior with real commands.


Table of Contents

  • Prerequisites and Lab Setup
  • Mental Model: Namespaces, veth pairs, and Linux bridges
  • Inspecting Docker Networking Like a Pro
  • Bridge Networks (default and user-defined)
  • Host Networking
  • Macvlan Networking
  • Overlay Networking (Swarm)
  • Performance, Isolation, and Use-Case Matrix
  • Hands-on Mini Labs
  • Cleanup
  • Final Guidance: Choosing the Right Driver


Prerequisites and Lab Setup

This tutorial assumes:

  • A Linux host (bare metal or VM) running a reasonably recent Docker Engine
  • Root/sudo access for inspecting iptables, interfaces, and capturing traffic
  • Basic comfort with containers and TCP/IP fundamentals

Check versions:

docker version
docker info
uname -a

Install a few tools (optional but strongly recommended):

sudo apt-get update
sudo apt-get install -y iproute2 iptables tcpdump dnsutils net-tools

Notes:

  • On newer distros, Docker may use nftables under the hood; iptables -L still often works via compatibility layers.
  • If you’re on macOS/Windows Docker Desktop, networking behaves differently because containers run inside a VM. Bridge/host/macvlan semantics won’t match Linux exactly. Overlay still works in Swarm, but with extra layers.

Mental Model: Namespaces, veth pairs, and Linux bridges

Understanding Docker networking starts with Linux networking primitives:

  • Network namespaces: each container gets an isolated network stack (interfaces, routes, firewall rules)
  • veth pairs: virtual cables; one end lives in the container's namespace, the other attaches to the host
  • Linux bridges: software L2 switches (docker0, br-<id>) that connect the host-side veth ends
  • iptables/nftables: NAT and filtering rules for published ports and outbound masquerading
  • VXLAN: the encapsulation overlay networks use to stretch a network across hosts

A key idea: bridge networking is L2 inside the host, then typically L3/NAT to the outside world. Host networking removes the namespace boundary. Macvlan puts containers directly on your LAN at L2. Overlay creates a virtual L2/L3 network across multiple hosts, typically via VXLAN encapsulation.


Inspecting Docker Networking Like a Pro

List Docker networks:

docker network ls

Inspect a network:

docker network inspect bridge
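When scripting against inspect output, it helps to extract fields rather than eyeball JSON. A minimal sketch - the JSON below is a trimmed, illustrative fragment of the IPAM section (the values shown are Docker's usual defaults, not read from a live daemon):

```shell
# Trimmed, illustrative IPAM fragment from `docker network inspect bridge`.
sample='{"IPAM":{"Config":[{"Subnet":"172.17.0.0/16","Gateway":"172.17.0.1"}]}}'

# Quick field extraction without jq:
subnet=$(printf '%s' "$sample" | grep -o '"Subnet":"[^"]*"' | cut -d'"' -f4)
gateway=$(printf '%s' "$sample" | grep -o '"Gateway":"[^"]*"' | cut -d'"' -f4)
echo "subnet=$subnet gateway=$gateway"

# Against a live daemon, prefer a Go template instead of text parsing:
#   docker network inspect bridge --format '{{(index .IPAM.Config 0).Subnet}}'
```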

Inspect a container’s network settings:

docker run -d --name ntest nginx:alpine
docker inspect ntest --format '{{json .NetworkSettings}}' | jq .

Inside a container, check interfaces and routes:

docker exec -it ntest sh -c "ip a; echo '---'; ip route; echo '---'; cat /etc/resolv.conf"

On the host, see Docker-created links and bridges:

ip link show
ip addr show
ip route show

See iptables rules Docker installs (classic view):

sudo iptables -t nat -S
sudo iptables -S

Capture traffic (example: see DNS queries from a container):

sudo tcpdump -i any -n port 53

Bridge Networks (default and user-defined)

Bridge networking is the most common mode for single-host Docker.

Default bridge vs user-defined bridge

Docker creates a default bridge network named bridge backed by a Linux bridge interface docker0.

Create a user-defined bridge:

docker network create mybr
docker network inspect mybr

Run two containers on it:

docker run -d --name web --network mybr nginx:alpine
docker run -it --rm --name client --network mybr alpine:3.20 sh

Inside client, test DNS and connectivity:

apk add --no-cache curl bind-tools
nslookup web
curl -I http://web

You should see web resolve to an IP on the mybr subnet.

Port publishing and NAT

Bridge networks usually require port publishing to accept inbound traffic from outside the host.

Run nginx and publish port 8080 on the host:

docker run -d --name pubweb --network mybr -p 8080:80 nginx:alpine
curl -I http://127.0.0.1:8080

What happens under the hood:

  • Docker adds a DNAT rule so TCP traffic hitting host port 8080 is rewritten to the container's IP on port 80
  • Outbound traffic from the bridge subnet is masqueraded (SNAT) behind the host's address
  • A docker-proxy process may handle edge cases such as localhost and hairpin access

Look at NAT rules:

sudo iptables -t nat -S | sed -n '1,200p'

You’ll typically find chains like DOCKER and rules referencing the container IP.
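For pubweb, the rules are roughly of this shape (a hedged sketch, not output copied from a live host; the bridge name br-<id>, the subnet, and the container IP will differ on your machine):

```
# Outbound: source-NAT traffic leaving the bridge subnet
-A POSTROUTING -s <bridge-subnet> ! -o br-<id> -j MASQUERADE

# Inbound: rewrite host port 8080 to the container's port 80
-A DOCKER ! -i br-<id> -p tcp -m tcp --dport 8080 -j DNAT --to-destination <container-ip>:80
```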

DNS and service discovery on user-defined bridge

Docker runs an embedded DNS server for user-defined networks (commonly at 127.0.0.11 inside containers). This enables:

  • Resolving other containers on the same network by name (web, api1)
  • Extra names per container via --network-alias
  • DNS-based round-robin when several containers share an alias

Example with aliases:

docker run -d --name api1 --network mybr --network-alias api hashicorp/http-echo -text="api1"
docker run -d --name api2 --network mybr --network-alias api hashicorp/http-echo -text="api2"
docker run -it --rm --network mybr alpine:3.20 sh -c 'apk add --no-cache curl bind-tools; for i in $(seq 1 6); do curl -s http://api:5678; echo; done'

You may see responses alternate depending on DNS caching behavior.

Tuning bridge networks

Create a bridge with a custom subnet and gateway:

docker network create \
  --driver bridge \
  --subnet 172.28.0.0/16 \
  --gateway 172.28.0.1 \
  mybr2

Attach a container and check its IP:

docker run -it --rm --network mybr2 alpine:3.20 sh -c "ip -4 addr show eth0; ip route"

You can also control whether containers can reach the outside world (via --internal):

docker network create --internal isolatedbr
docker run -it --rm --network isolatedbr alpine:3.20 sh -c "ip route; ping -c 1 1.1.1.1 || true"

An internal network prevents external routing/NAT by design.

Common pitfalls

  1. “I can’t reach container IP from another machine”
    Container IPs on bridge networks are usually private to the host. Use -p to publish ports, or use macvlan/overlay depending on your needs.

  2. Port conflicts
    -p 80:80 fails if the host already uses port 80. Pick a different host port, or fall back to host networking only with care.

  3. Hairpin NAT / accessing published ports from the same host
    Usually works, but can be affected by firewall rules and distro defaults.
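Pitfall 2 can be scripted around by probing for a free host port before publishing. A sketch (find_free_port is a helper invented here, not a Docker feature; it assumes ss from iproute2):

```shell
# Scan upward from a starting port until one is not in a listening state.
find_free_port() {
  p=$1
  while ss -lnt 2>/dev/null | awk '{print $4}' | grep -q ":$p$"; do
    p=$((p + 1))
  done
  echo "$p"
}

PORT=$(find_free_port 8080)
echo "publishing on host port $PORT"
# docker run -d --name pubweb2 --network mybr -p "$PORT":80 nginx:alpine
```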


Host Networking

What it really means

Host networking means the container does not get its own network namespace. It shares the host’s network stack:

Run a container with host networking:

docker run -d --name hostweb --network host nginx:alpine

Now nginx is listening on the host network. Test:

curl -I http://127.0.0.1:80

If port 80 is already in use on the host, the container will fail (or nginx will fail to start).

Pros/cons and security implications

Pros

  • Near-native network performance: no veth hop, bridge, or NAT
  • No port publishing needed; processes bind host ports directly
  • Full visibility of host interfaces, useful for monitoring agents and some UDP/broadcast workloads

Cons

  • No network isolation between the container and the host
  • Port conflicts with the host and with other host-mode containers
  • -p/--publish is ignored in host mode, so port mappings do nothing
  • Behavior is Linux-specific; Docker Desktop differs

Security note: host networking increases blast radius. Combine with least-privilege settings (drop capabilities, read-only FS, etc.) when possible.
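As a sketch of that pairing (the capability set and tmpfs mounts below are illustrative for nginx:alpine and may need tuning for other images):

```shell
docker run -d --name hostweb-hardened \
  --network host \
  --cap-drop ALL \
  --cap-add NET_BIND_SERVICE --cap-add CHOWN --cap-add SETUID --cap-add SETGID \
  --read-only --tmpfs /run --tmpfs /var/cache/nginx \
  nginx:alpine
```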

Validation commands

Compare network namespaces:

docker run -d --name bridged --network bridge nginx:alpine
docker run -d --name hosted --network host nginx:alpine

# On the host, list listening ports:
sudo ss -lntp | sed -n '1,120p'

For the bridged container with -p, you’d see Docker proxying/NAT behavior. For host mode, nginx binds directly.


Macvlan Networking

Why macvlan exists

Macvlan lets containers appear as first-class devices on your physical LAN:

  • Each container gets its own MAC address on the parent interface
  • Containers take IPs from the LAN subnet and are reachable without port publishing
  • Traffic bypasses the Docker bridge and NAT entirely

This is useful for:

  • Appliances that expect a real LAN presence (DNS/DHCP servers, media servers, home-automation hubs)
  • Legacy software that assumes a routable, dedicated IP
  • Cases where NAT breaks a protocol or complicates firewalling

Modes and the “host can’t talk to container” issue

Macvlan has multiple modes; the most common is bridge mode. A classic gotcha: by default, the host cannot talk to its own macvlan containers over the parent interface, because the kernel does not hairpin traffic between the parent and its macvlan sub-interfaces.

You can work around it by creating a macvlan sub-interface on the host and routing through it (shown below).

Creating a macvlan network (bridge mode)

You need:

  • The name of the parent interface (eth0, enp3s0, …)
  • Your LAN's subnet and gateway
  • An IP range reserved for containers that your DHCP server will never hand out

Example (adjust to your environment):

Create the macvlan network:

docker network create -d macvlan \
  --subnet=192.168.10.0/24 \
  --gateway=192.168.10.1 \
  --ip-range=192.168.10.200/29 \
  -o parent=eth0 \
  maclan0

Run a container on it:

docker run -d --name lanweb --network maclan0 nginx:alpine
docker inspect lanweb --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'

From another machine on the same LAN, you should be able to:

curl -I http://192.168.10.200

No -p required because the container is directly on the LAN.
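The --ip-range above hands Docker's IPAM a /29 block inside the LAN subnet. To sanity-check what a /29 actually covers, here is a small pure-shell helper (cidr_range is a name invented for this sketch):

```shell
# Print the first and last address of a CIDR block, e.g. 192.168.10.200/29.
cidr_range() {
  ip=${1%/*}; bits=${1#*/}
  IFS=. read -r a b c d <<EOF
$ip
EOF
  n=$(( (a << 24) + (b << 16) + (c << 8) + d ))
  size=$(( 1 << (32 - bits) ))
  first=$(( n / size * size ))   # align down to the block boundary
  last=$(( first + size - 1 ))
  for addr in $first $last; do
    printf '%d.%d.%d.%d' $(( addr >> 24 & 255 )) $(( addr >> 16 & 255 )) \
                         $(( addr >> 8 & 255 )) $(( addr & 255 ))
    [ "$addr" = "$first" ] && printf '%s' '-'
  done
  printf '\n'
}

cidr_range 192.168.10.200/29   # → 192.168.10.200-192.168.10.207
```

So Docker may assign any of eight addresses here; make sure your DHCP server never leases from that block.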

Fixing host-to-macvlan-container connectivity

If the host cannot reach lanweb by its macvlan IP, add a macvlan interface on the host:

  1. Create a macvlan interface linked to the same parent:
sudo ip link add macvlan-host link eth0 type macvlan mode bridge
  2. Assign an IP in the macvlan subnet (choose an unused IP in your reserved range or outside DHCP):
sudo ip addr add 192.168.10.250/24 dev macvlan-host
sudo ip link set macvlan-host up
  3. Add a route (often not strictly necessary if the IP is on-link, but explicit routing can help clarity):
sudo ip route add 192.168.10.200/29 dev macvlan-host

Now test from the host:

curl -I http://192.168.10.200
ping -c 2 192.168.10.200

If it still fails, check:

  • Whether the parent is a Wi-Fi interface - many adapters and access points drop frames from additional MAC addresses
  • IP conflicts with existing DHCP leases on the LAN
  • Switch port security or hypervisor settings (e.g. promiscuous mode / forged transmits in VMware)
  • Host firewall rules applying to the new macvlan-host interface

Operational considerations

  • Docker's IPAM does not coordinate with your DHCP server; always reserve a static range for --ip-range
  • Promiscuous-mode restrictions (common in VMs and cloud networks) can silently drop macvlan traffic
  • Some NICs limit how many extra MAC addresses they will serve; heavy macvlan use can hit that limit


Overlay Networking (Swarm)

Overlay networking is designed for multi-host container networking. It creates a virtual network spanning multiple Docker hosts, allowing containers/services on different nodes to communicate as if on the same subnet.

Overlay networks are most commonly used with Docker Swarm (built into Docker Engine). Kubernetes uses different networking (CNI plugins), but the underlying concepts (encapsulation, service discovery) are similar.

How overlay works (VXLAN, encryption, control plane)

Key concepts:

  • VXLAN encapsulation: overlay traffic is wrapped in UDP (port 4789) and tunneled between hosts
  • Control plane: Swarm distributes network state (endpoints, routes, peers) over its management and gossip channels
  • Optional encryption: --opt encrypted protects the VXLAN data plane with IPsec

Ports typically involved (ensure firewalls allow them between nodes):

  • 2377/tcp: cluster management traffic
  • 7946/tcp and 7946/udp: node-to-node communication (gossip)
  • 4789/udp: VXLAN data plane
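On hosts running ufw, opening those ports looks like this (illustrative; translate to firewalld or plain nftables as needed):

```shell
sudo ufw allow 2377/tcp   # cluster management
sudo ufw allow 7946/tcp   # node-to-node gossip
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp   # VXLAN data plane
```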

Creating an overlay network and services

Initialize Swarm (on the first node):

docker swarm init

Create an overlay network:

docker network create -d overlay --attachable ovl0
docker network ls
docker network inspect ovl0

Deploy a service attached to the overlay:

docker service create --name web --network ovl0 --replicas 3 nginx:alpine
docker service ls
docker service ps web

Create a client container and test DNS/service access (attachable allows standalone containers to join):

docker run -it --rm --network ovl0 alpine:3.20 sh -c "apk add --no-cache curl bind-tools; nslookup web; curl -I http://web"

Even if tasks are on different nodes, web resolves and routes correctly.

Routing mesh vs VIP vs DNSRR

Swarm has multiple traffic patterns:

  1. Routing mesh (ingress network)
    When you publish ports on a service, Swarm can expose it on every node and route to active tasks.

Example:

docker service create --name pubweb --replicas 2 -p 8080:80 nginx:alpine

Now http://ANY_NODE_IP:8080 should reach the service, even if the task isn’t on that node.

  2. VIP-based internal load balancing (default)
    Services get a virtual IP (VIP). DNS resolves service name to VIP; IPVS load balances to tasks.

Inspect endpoint mode:

docker service inspect web --format '{{json .Endpoint.Spec}}' | jq .

  3. DNS round-robin (DNSRR)
    Instead of a VIP, DNS returns task IPs directly. Useful when you want client-side load balancing or need source IP preservation in some cases.

Create DNSRR service:

docker service create --name api --network ovl0 \
  --endpoint-mode dnsrr \
  --replicas 3 \
  hashicorp/http-echo -text="hello"

Query DNS:

docker run -it --rm --network ovl0 alpine:3.20 sh -c "apk add --no-cache bind-tools; for i in $(seq 1 5); do dig +short api; done"
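To confirm DNSRR is actually returning multiple task IPs, count the distinct A records. This sketch uses canned text standing in for dig +short api output (the 10.0.1.x addresses are illustrative overlay IPs):

```shell
# Stand-in for: dig +short api   (run inside a container attached to ovl0)
dig_output='10.0.1.11
10.0.1.12
10.0.1.13'

# One line per task IP; the distinct count should match the replica count.
unique=$(printf '%s\n' "$dig_output" | sort -u | wc -l | tr -d ' ')
echo "distinct task IPs: $unique"
```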

Troubleshooting overlay

Check Swarm status:

docker node ls
docker info | sed -n '1,120p'

Inspect networks and tasks:

docker network inspect ovl0 | jq '.[0].Peers, .[0].Containers'
docker service ps web

Look for blocked ports between nodes:

# On each node, verify listening ports:
sudo ss -lntup | egrep '2377|7946|4789' || true

Capture VXLAN traffic (between nodes):

sudo tcpdump -i any -n udp port 4789

If encryption is enabled:

docker network create -d overlay --opt encrypted --attachable ovlenc

Expect lower throughput and higher CPU usage—measure if performance matters.


Performance, Isolation, and Use-Case Matrix

| Driver  | Scope             | Inbound from LAN/Internet | Name discovery                       | Isolation                                      | Performance                        | Typical use                                 |
|---------|-------------------|---------------------------|--------------------------------------|------------------------------------------------|------------------------------------|---------------------------------------------|
| bridge  | single-host       | via -p (NAT)              | strong on user-defined               | good                                           | good                               | most single-host apps                       |
| host    | single-host       | direct on host ports      | N/A (host net)                       | low                                            | best                               | high-performance, monitoring, special cases |
| macvlan | single-host (LAN) | direct (no NAT)           | limited to Docker DNS within network | good between containers; host isolation caveat | very good                          | containers as “real” LAN hosts              |
| overlay | multi-host        | via routing mesh or LB    | strong (Swarm)                       | good                                           | good (overhead from encapsulation) | multi-node services                         |

Rules of thumb:

  • Default to a user-defined bridge for single-host applications
  • Reach for host only when you need maximum performance or host-level visibility, and accept the lost isolation
  • Use macvlan when containers must be directly addressable LAN citizens without NAT
  • Use overlay once services span multiple hosts
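The matrix can be collapsed into a toy decision helper (pick_driver is invented for this sketch; real designs weigh more dimensions, such as encryption and observability):

```shell
# Answer yes/no to three questions and get a starting-point driver.
pick_driver() {
  multi_host=$1; lan_presence=$2; max_perf=$3
  if   [ "$multi_host"   = yes ]; then echo overlay
  elif [ "$lan_presence" = yes ]; then echo macvlan
  elif [ "$max_perf"     = yes ]; then echo host
  else echo bridge
  fi
}

pick_driver no no no     # single host, nothing special → bridge
pick_driver yes no no    # services span hosts → overlay
pick_driver no yes no    # needs a real LAN presence → macvlan
```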


Hands-on Mini Labs

These labs reinforce the differences with observable behavior.

Lab 1: Observe veth + bridge behavior (bridge network)

  1. Create network and container:
docker network create labbr
docker run -d --name labng --network labbr nginx:alpine
  2. On host, find the container interface mapping:
docker exec -it labng sh -c "ip link show eth0; ip -4 addr show eth0"
  3. Inspect host-side veth:
# List veth devices (names vary):
ip link show | grep -E 'veth|docker'
  4. Inspect the Linux bridge:
# docker0 is for default bridge; user-defined networks create br-<id>
ip link show type bridge
bridge link
bridge fdb show | head

You should see the veth attached to a br-... interface.

Lab 2: Prove host networking shares ports

  1. Run a service in host mode:
docker run -d --name hosthttp --network host nginx:alpine
  2. Confirm nginx listens on the host:
sudo ss -lntp | grep ':80' || true
curl -I http://127.0.0.1
  3. Try running a second nginx host-mode container:
docker run -d --name hosthttp2 --network host nginx:alpine
docker logs hosthttp2 | tail -n 50

The second one typically fails because port 80 is already in use.

Lab 3: Macvlan direct LAN reachability

  1. Create macvlan network (adjust values):
docker network create -d macvlan \
  --subnet=192.168.10.0/24 \
  --gateway=192.168.10.1 \
  --ip-range=192.168.10.200/29 \
  -o parent=eth0 \
  labmac
  2. Run a container:
docker run -d --name macng --network labmac nginx:alpine
MACIP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' macng)
echo "$MACIP"
  3. From another LAN machine:
curl -I "http://$MACIP"
  4. If host can’t reach it, add a host macvlan interface (as shown earlier) and retry.

Lab 4: Overlay across nodes (Swarm)

On manager:

docker swarm init
docker network create -d overlay --attachable labovl
docker service create --name who --network labovl --replicas 3 hashicorp/http-echo -text="whoami"

On any node (manager or worker), run a client:

docker run -it --rm --network labovl alpine:3.20 sh -c "apk add --no-cache curl; for i in $(seq 1 5); do curl -s who:5678; echo; done"

Add a published port via routing mesh:

docker service update --publish-add 9090:5678 who
curl -s http://127.0.0.1:9090

Try from another node’s IP as well; routing mesh should forward.


Cleanup

Remove containers/services/networks created in this tutorial.

docker rm -f ntest pubweb web api1 api2 bridged hosted hostweb hosthttp hosthttp2 labng lanweb macng 2>/dev/null || true
docker service rm web pubweb api who 2>/dev/null || true
docker network rm mybr mybr2 isolatedbr labbr maclan0 labmac ovl0 ovlenc labovl 2>/dev/null || true

If you initialized Swarm and want to leave it:

docker swarm leave --force

If you created a host macvlan interface:

sudo ip link del macvlan-host 2>/dev/null || true

Final Guidance: Choosing the Right Driver

There is no single best driver; match the driver to the environment. For a single host behind NAT, a user-defined bridge with published ports is almost always right. If the constraint is raw performance or a tool that must see the host's network stack, use host networking and harden the container. If containers must receive inbound traffic without port publishing and appear on the LAN, use macvlan and reserve an IP range with your DHCP server. If services span hosts, use an overlay network in Swarm and open the required control- and data-plane ports between nodes. Whichever you choose, verify behavior with the inspection commands from this guide (interfaces, routes, NAT rules, and packet captures) before relying on it in production.