
Fix Nginx 502 Bad Gateway When Using Docker (DevOps Guide)

Tags: nginx, docker, 502-bad-gateway, reverse-proxy, devops, container-networking, troubleshooting, upstream


A 502 Bad Gateway from Nginx in a Docker setup almost always means the same thing: Nginx could not get a valid response from its upstream. The connection was refused, the name didn't resolve, or the upstream closed the connection or answered with something Nginx couldn't parse.

In containerized environments, the upstream is often referenced by container DNS name, service name, container IP, or host networking. A small mismatch—wrong port, wrong network, wrong DNS name, wrong protocol, or an app that isn’t ready—can trigger 502.

This guide provides a practical, DevOps-style workflow to diagnose and fix 502s when Nginx is running with Docker (Docker Compose or plain Docker). It includes real commands, deep explanations, and common failure patterns.




1. Understand what “502 Bad Gateway” means in Nginx

Nginx acts as a reverse proxy when you configure something like:

location / {
  proxy_pass http://app:3000;
}

In that case:

  1. Client connects to Nginx.
  2. Nginx connects to the upstream (app:3000).
  3. If the upstream connection fails or returns an invalid response, Nginx returns 502.
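Putting those steps together, a minimal reverse-proxy server block looks like this (the upstream name app and port 3000 are illustrative):

```nginx
server {
  listen 80;

  location / {
    # "app" must resolve on a Docker network shared with this Nginx container
    proxy_pass http://app:3000;
    proxy_set_header Host $host;
  }
}
```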

Common upstream failure modes that map to 502

- Connection refused: nothing is listening on the target host and port.
- DNS failure: the upstream name does not resolve on the network Nginx uses.
- Premature close: the upstream accepts the connection but closes it before sending a complete HTTP response.
- Protocol mismatch: Nginx speaks plain HTTP to a TLS port, or HTTPS to a plain port.

Nginx may also return 504 Gateway Timeout for timeouts, but depending on the exact conditions you may see 502 instead.


2. Start with evidence: Nginx logs and upstream errors

View Nginx container logs

If Nginx runs in Docker:

docker logs -f nginx

Or in Compose:

docker compose logs -f nginx

Check Nginx error log inside the container

Many images log to stdout/stderr, but some still write files. Enter the container:

docker exec -it nginx sh

Then:

nginx -T | sed -n '1,200p'
ls -la /var/log/nginx || true
tail -n 200 /var/log/nginx/error.log 2>/dev/null || true

What to look for in error logs

Examples you might see:

- connect() failed (111: Connection refused) while connecting to upstream
- app could not be resolved (3: Host not found)
- upstream prematurely closed connection while reading response header from upstream
- no live upstreams while connecting to upstream

These messages tell you exactly which class of fix to apply.


3. Verify containers, ports, and health

Before touching Nginx config, confirm the upstream container is actually running and listening on the expected port.

List containers and status

docker ps

If using Compose:

docker compose ps

If the upstream container is restarting or exited, you already found the cause.

Inspect container port bindings (host vs container)

docker port app

Or:

docker inspect app --format '{{json .NetworkSettings.Ports}}' | jq

Important: Nginx inside Docker does not use host-published ports to reach another container. It uses the container’s internal port on the Docker network.

Example: if the app is started with -p 8080:3000, the host reaches it at localhost:8080, but Nginx on the same Docker network must target app:3000 (the container's internal port).

Confirm the app is listening inside its container

docker exec -it app sh -lc 'ss -lntp || netstat -lntp'

Look for something like LISTEN 0 4096 0.0.0.0:3000.

If it’s bound to 127.0.0.1:3000 only, other containers cannot reach it. Fix by binding to 0.0.0.0.

For Node.js:

# Ensure your server listens on 0.0.0.0
app.listen(3000, '0.0.0.0');

For many frameworks, you set an env var, e.g.:

HOST=0.0.0.0 PORT=3000 npm start
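As a sketch of what that check looks like in practice, here is the loopback test run against a saved listener dump (the dump below is fabricated for illustration):

```shell
# Write a sample `ss -lnt` dump (fabricated output for the example).
cat > /tmp/ss-dump.txt <<'EOF'
LISTEN 0 4096 127.0.0.1:3000 0.0.0.0:*
LISTEN 0 4096 0.0.0.0:8080 0.0.0.0:*
EOF

# Flag sockets bound to loopback only; these are unreachable from other containers.
awk '$4 ~ /^127\.0\.0\.1:/ {print "loopback-only: " $4}' /tmp/ss-dump.txt
# prints: loopback-only: 127.0.0.1:3000
```

Anything flagged this way needs the 0.0.0.0 fix above before Nginx in another container can reach it.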

4. Docker networking basics that cause 502

Key concept: Docker DNS works per network

In Docker Compose, services are attached to a project network by default, and service names become DNS records.

If Nginx and the app are not on the same network, Nginx cannot resolve app or reach it.

Check networks:

docker network ls
docker network inspect <network_name> | jq '.[0].Containers | keys'

Inspect a container’s networks:

docker inspect nginx --format '{{json .NetworkSettings.Networks}}' | jq
docker inspect app --format '{{json .NetworkSettings.Networks}}' | jq

They must share at least one network.
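In Compose, a minimal sketch of the shared-network layout (service and network names are illustrative):

```yaml
services:
  nginx:
    image: nginx:alpine
    ports:
      - "8080:80"
    networks:
      - webnet
  app:
    build: .
    networks:
      - webnet

networks:
  webnet:
```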

Container-to-container connectivity test

From inside the Nginx container:

docker exec -it nginx sh -lc 'apk add --no-cache curl 2>/dev/null || true; curl -v http://app:3000/ || true'

If this fails, Nginx will fail too.

If curl is not available and you don’t want to install packages, use a temporary debug container on the same network:

docker run --rm -it --network <network_name> curlimages/curl:8.6.0 -v http://app:3000/

5. Fix: wrong upstream host/port (most common)

Symptom

Nginx error log shows:

connect() failed (111: Connection refused) while connecting to upstream
host not found in upstream "app" in /etc/nginx/conf.d/default.conf

Root causes

- A typo in the service/container name used in proxy_pass.
- Targeting the host-published port instead of the container's internal port.
- Referencing a service that lives on a different Docker network or Compose project.

Correct pattern in Compose

If your Compose service is named app and the app listens on container port 3000, confirm the service is up:

docker compose ps

If the app service shows as running, then Nginx should use:

proxy_pass http://app:3000;

Not localhost or a host-published port:

proxy_pass http://localhost:3000;  # wrong: localhost is the Nginx container itself

Verify with nginx -T

Inside Nginx container:

docker exec -it nginx sh -lc 'nginx -T 2>/dev/null | sed -n "1,200p"'

Search for proxy_pass:

docker exec -it nginx sh -lc 'nginx -T 2>/dev/null | grep -R "proxy_pass" -n || true'
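On a long dump it helps to pull out just the upstream targets; a sketch run against a saved `nginx -T` dump (the sample file is illustrative):

```shell
# Save a minimal sample of an `nginx -T` dump (illustrative content).
cat > /tmp/nginx-T.txt <<'EOF'
server {
  listen 80;
  location / {
    proxy_pass http://app:3000;
  }
}
EOF

# List every proxy_pass target (scheme://host:port) found in the dump.
grep -oE 'proxy_pass +[a-z]+://[^;]+' /tmp/nginx-T.txt | awk '{print $2}'
# prints: http://app:3000
```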

Reload Nginx safely

After editing config:

docker exec -it nginx nginx -t
docker exec -it nginx nginx -s reload

If you’re rebuilding an image, rebuild and restart:

docker compose up -d --build

6. Fix: Nginx points to localhost inside a container

Symptom

Nginx config contains:

proxy_pass http://localhost:3000;

Or:

proxy_pass http://127.0.0.1:3000;

Why this breaks in Docker

Inside the Nginx container, localhost refers to the Nginx container itself, not your app container. Unless the app runs in the same container (rare in good practice), Nginx will connect to nothing and return 502.

Fix

Use the service/container name on the shared Docker network:

proxy_pass http://app:3000;

If you truly need to reach a process on the Docker host (not recommended for typical Compose stacks), use host.docker.internal; on Linux you must map it explicitly, e.g. with extra_hosts: "host.docker.internal:host-gateway".

Example:

proxy_pass http://host.docker.internal:3000;

But prefer container-to-container networking.
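If you do take the host.docker.internal route on Linux, it has to be mapped explicitly; a Compose sketch:

```yaml
services:
  nginx:
    image: nginx:alpine
    extra_hosts:
      # host-gateway resolves to the host's gateway IP (Docker 20.10+)
      - "host.docker.internal:host-gateway"
```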


7. Fix: upstream not ready (race condition) and health checks

Symptom

502s for the first seconds (or minutes) after docker compose up or a deploy, then the stack recovers on its own once the app finishes starting.

Why it happens

depends_on in Docker Compose controls start order, not readiness. The app container may be “running” but not listening yet (migrations, warmup, dependency connection attempts).

Add healthchecks and wait for healthy upstream

In docker-compose.yml (conceptually; you’ll still apply it in your Compose file), add a healthcheck to the app container. Example for an HTTP app:

# Example command you can run manually to test:
curl -fsS http://localhost:3000/health

Healthcheck command inside container:

docker exec -it app sh -lc 'curl -fsS http://localhost:3000/health'

Then configure Compose to wait for health (Compose v2 supports condition: service_healthy in some versions; if unavailable, use an entrypoint script or a lightweight wait tool).
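A sketch of that healthcheck-plus-gating pattern in Compose (the /health endpoint, port, and timings are assumptions to adapt):

```yaml
services:
  app:
    build: .
    healthcheck:
      test: ["CMD", "curl", "-fsS", "http://localhost:3000/health"]
      interval: 5s
      timeout: 3s
      retries: 10
      start_period: 15s
  nginx:
    image: nginx:alpine
    depends_on:
      app:
        condition: service_healthy
```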

A common approach: Nginx starts, but you configure Nginx to retry upstream connections (it already will for new requests). For zero-downtime you typically put a load balancer with health checks in front, but for small stacks healthchecks are enough.

Increase upstream fail tolerance (practical tuning)

In Nginx:

proxy_connect_timeout 5s;
proxy_read_timeout 60s;
proxy_next_upstream error timeout http_502 http_503 http_504;
proxy_next_upstream_tries 10;

This doesn’t fix a permanently broken upstream, but smooths transient startup issues.


8. Fix: protocol mismatch (HTTP vs HTTPS) and TLS upstreams

Symptom

Nginx error log includes:

- SSL_do_handshake() failed (... wrong version number) while SSL handshaking to upstream
- upstream prematurely closed connection while reading response header from upstream

Root causes

- proxy_pass uses https:// but the upstream speaks plain HTTP (or the reverse).
- SNI or certificate name mismatch when proxying to a TLS upstream.

Fix: match the scheme

If upstream is plain HTTP:

proxy_pass http://app:3000;

If upstream is HTTPS:

proxy_pass https://app:3443;

And configure SNI and verification as needed:

proxy_ssl_server_name on;
proxy_ssl_name app;  # or the certificate's DNS name

# For self-signed certs (dev only):
proxy_ssl_verify off;

For production, mount the upstream CA and verify:

proxy_ssl_trusted_certificate /etc/nginx/certs/upstream-ca.pem;
proxy_ssl_verify on;
proxy_ssl_verify_depth 2;

Test from inside Nginx container:

docker exec -it nginx sh -lc 'apk add --no-cache openssl curl 2>/dev/null || true; curl -vk https://app:3443/'

9. Fix: wrong path, redirects, and proxy_pass subtleties

Nginx proxy_pass has path rules that can silently break upstream routing.

Two common forms

A) Without trailing slash:

location /api {
  proxy_pass http://app:3000;
}

Request: /api/users → upstream gets /api/users

B) With trailing slash:

location /api/ {
  proxy_pass http://app:3000/;
}

Request: /api/users → upstream gets /users

If your upstream expects /api/... but you accidentally strip it (or vice versa), the app may return errors or close connections unexpectedly, sometimes surfacing as 502 depending on app behavior.
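The two mappings can be sketched in shell; this just mirrors the substitution Nginx performs, it is not how Nginx implements it:

```shell
# Form A: location /api { proxy_pass http://app:3000; }
# The request URI is forwarded to the upstream unchanged.
map_without_slash() { echo "$1"; }

# Form B: location /api/ { proxy_pass http://app:3000/; }
# The matched prefix /api/ is replaced by the URI part of proxy_pass (/).
map_with_slash() { echo "/${1#/api/}"; }

map_without_slash /api/users   # prints /api/users
map_with_slash    /api/users   # prints /users
```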

Debug by logging upstream URI

Enable more detailed access logs (example):

log_format upstreamlog '$remote_addr - $host "$request" '
                      'upstream=$upstream_addr status=$status '
                      'ustatus=$upstream_status urt=$upstream_response_time '
                      'rt=$request_time';

access_log /var/log/nginx/access.log upstreamlog;

Then:

docker exec -it nginx tail -f /var/log/nginx/access.log

Fix redirects with proper headers

Apps often generate redirects based on Host and scheme. Ensure you pass:

proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;

Without X-Forwarded-Proto, an app behind TLS-terminating Nginx might think the request is HTTP and redirect incorrectly.


10. Fix: upstream closes connection / timeouts / buffering

Symptom

Nginx error log:

upstream prematurely closed connection while reading response header from upstream

Root causes

- The app crashed or was restarted mid-request (including OOM kills).
- The request runs longer than Nginx's or the app's timeouts.
- Buffering limits are too small for large responses, or streaming responses are being buffered.

Fix: tune timeouts

In the location block:

proxy_connect_timeout 10s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
send_timeout 60s;

For long-running requests (file generation, reports):

proxy_read_timeout 300s;

Fix: buffering for large responses

For streaming or large payloads you might need:

proxy_buffering on;
proxy_buffers 16 64k;
proxy_busy_buffers_size 128k;

Or for streaming (SSE, chunked responses), disable buffering:

proxy_buffering off;

Confirm upstream stability

Check upstream container logs:

docker logs -f app

If you see out-of-memory kills, you’ll get random 502s. Verify:

docker stats

And check kernel OOM messages on the host:

dmesg -T | grep -i -E 'killed process|oom'

11. Fix: WebSockets and streaming responses

If your app uses WebSockets (Socket.IO, GraphQL subscriptions, etc.), missing headers can cause handshake failures that appear as 502/504.

Use:

location /socket/ {
  proxy_pass http://app:3000;

  proxy_http_version 1.1;
  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection "upgrade";

  proxy_set_header Host $host;
  proxy_read_timeout 3600s;
}
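A common refinement, taken from the standard Nginx WebSocket proxying pattern, is a map in the http context so ordinary requests don't carry a forced upgrade header:

```nginx
# In the http {} context:
map $http_upgrade $connection_upgrade {
  default upgrade;
  ''      close;
}

# Then in the location block, replace the hardcoded header with:
#   proxy_set_header Connection $connection_upgrade;
```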

Test WebSocket connectivity from a client or use a tool like wscat (run in a temporary container):

docker run --rm -it node:20-alpine sh -lc 'npm i -g wscat && wscat -c ws://nginx/socket/'

(Replace nginx with the correct reachable hostname from that container/network.)


12. Fix: Unix sockets in Docker

Some stacks use Unix sockets for the upstream (e.g., Gunicorn, uWSGI). In Docker, this works only if:

- the socket file lives on a volume mounted into both the app and Nginx containers, and
- the Nginx worker user has permission to read and write the socket file.

Example Nginx:

upstream django {
  server unix:/run/gunicorn/gunicorn.sock;
}

server {
  listen 80;
  location / {
    proxy_pass http://django;
  }
}

You must mount /run/gunicorn into both containers. If not, Nginx will log something like:

connect() to unix:/run/gunicorn/gunicorn.sock failed (2: No such file or directory) while connecting to upstream
Check inside Nginx:

docker exec -it nginx ls -la /run/gunicorn

If the socket exists but permission denied:

docker exec -it nginx sh -lc 'id; ls -la /run/gunicorn/gunicorn.sock'

Fix by aligning users/groups or chmod/chown in the app container startup.
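A Compose sketch of the shared-socket layout (volume name and paths are illustrative):

```yaml
services:
  app:
    build: .
    # Gunicorn is assumed to bind unix:/run/gunicorn/gunicorn.sock
    volumes:
      - gunicorn-sock:/run/gunicorn
  nginx:
    image: nginx:alpine
    volumes:
      - gunicorn-sock:/run/gunicorn

volumes:
  gunicorn-sock:
```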


13. Fix: permissions, SELinux, and bind mounts

On SELinux-enabled hosts (Fedora/RHEL/CentOS), bind mounts can cause Nginx to fail reading configs/certs or connecting to sockets, leading to upstream failures.

Symptom

Nginx logs permission-denied errors for mounted configs, certificates, or sockets even though file ownership and modes look correct; on the host, the audit log shows AVC denials.

Fix: use SELinux mount labels

When mounting volumes, add :z or :Z (depending on sharing needs). Example:

docker run -v "$(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro,Z" nginx:alpine

For Compose, you’d apply similar volume options.

Also verify file permissions:

ls -la nginx.conf

And inside container:

docker exec -it nginx ls -la /etc/nginx/nginx.conf

14. A solid reference Docker Compose + Nginx config

Below is a reference pattern that avoids common 502 causes: correct networking, correct DNS name, correct headers, and decent timeouts. Adapt names/ports to your app.

Example: run app + nginx on the same Docker network

Create a network (optional with Compose; useful with plain Docker):

docker network create webnet

Run an app (example using a simple HTTP echo server, which listens on container port 80 by default; no host port publishing is needed for container-to-container traffic):

docker run -d --name app --network webnet \
  ealen/echo-server:latest

Run Nginx with a mounted config:

cat > default.conf <<'EOF'
server {
  listen 80;
  server_name _;

  access_log /var/log/nginx/access.log;
  error_log  /var/log/nginx/error.log warn;

  location / {
    proxy_pass http://app:80;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    proxy_connect_timeout 5s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;
  }
}
EOF

Start Nginx:

docker run -d --name nginx --network webnet -p 8080:80 \
  -v "$PWD/default.conf:/etc/nginx/conf.d/default.conf:ro" \
  nginx:1.27-alpine

Test:

curl -v http://localhost:8080/

Why this works

- Nginx and the app share the webnet network, so the DNS name app resolves from the Nginx container.
- proxy_pass targets the container's internal port, not a host-published one.
- Forwarded headers and sane timeouts avoid redirect and transient-startup problems.

If you change the upstream port incorrectly, you'll reproduce a 502 immediately—useful for validating your debugging approach.


15. A repeatable troubleshooting checklist

Use this sequence to solve most Docker + Nginx 502s quickly.

Step 1: Identify the upstream Nginx is trying to reach

docker exec -it nginx sh -lc 'nginx -T 2>/dev/null | grep -n "proxy_pass" || true'

Also check upstream {} blocks.

Step 2: Read the Nginx error log around the failure

docker logs --tail 200 nginx

Or:

docker exec -it nginx sh -lc 'tail -n 200 /var/log/nginx/error.log 2>/dev/null || true'

Step 3: Confirm upstream container is running and stable

docker ps
docker logs --tail 200 app
docker stats --no-stream

Step 4: Confirm Nginx and upstream share a network

docker inspect nginx --format '{{json .NetworkSettings.Networks}}' | jq
docker inspect app --format '{{json .NetworkSettings.Networks}}' | jq

Step 5: Test connectivity from Nginx container to upstream

docker exec -it nginx sh -lc 'apk add --no-cache curl 2>/dev/null || true; curl -v http://app:3000/ || true'

If DNS fails: the containers don't share a network, or the name in proxy_pass doesn't match the service/container name.

If connection refused: nothing is listening on that port yet, or the app is bound to 127.0.0.1 instead of 0.0.0.0.

Step 6: Validate protocol (HTTP vs HTTPS)

docker exec -it nginx sh -lc 'apk add --no-cache curl 2>/dev/null || true; curl -v http://app:3000/ || true'
docker exec -it nginx sh -lc 'apk add --no-cache curl 2>/dev/null || true; curl -vk https://app:3443/ || true'

Use the one that matches your upstream.

Step 7: Fix config, test, reload

docker exec -it nginx nginx -t
docker exec -it nginx nginx -s reload

Step 8: Re-test end-to-end

curl -v http://localhost:8080/

Closing notes: what “good” looks like in production

To prevent recurring 502s in real environments:

- Bind apps to 0.0.0.0 inside containers and proxy to the container's internal port.
- Put Nginx and its upstreams on an explicit shared network and reference services by name.
- Add healthchecks and gate startup ordering on them.
- Pass Host and X-Forwarded-* headers and set realistic timeouts.
- Monitor upstream logs and memory so crashes and OOM kills don't surface as intermittent 502s.

If you share your Nginx config (server { ... }) and the relevant Docker/Compose commands you use to run the containers, you can pinpoint the exact cause by matching the error log message to one of the patterns above.