
Fix Nginx Reverse Proxy Issues in Docker: 502/504 Errors, Timeouts & SSL Gotchas

Tags: nginx, docker, reverse-proxy, 502-bad-gateway, 504-gateway-timeout, ssl-tls, devops, troubleshooting

Running Nginx as a reverse proxy in Docker is common—and so are the frustrating failures: 502 Bad Gateway, 504 Gateway Timeout, random disconnects, WebSocket breakage, and SSL/TLS loops. This tutorial is a practical, command-heavy guide to diagnosing and fixing these problems with deep explanations and real Docker + Nginx configurations.


Table of Contents

1. Mental model: what “reverse proxy in Docker” really means
2. Quick triage: distinguish 502 vs 504 vs SSL failures
3. Collect evidence: logs and live debugging commands
4. The most common root causes (and fixes)
5. Reference Docker Compose setup (battle-tested)
6. A hardened Nginx reverse proxy config for Docker
7. Step-by-step troubleshooting playbook
8. Verification checklist

1. Mental model: what “reverse proxy in Docker” really means

A reverse proxy sits in front of one or more upstream services:

Client (browser) -> Nginx (reverse proxy) -> Upstream app (API, web, etc.)

When Docker is involved, there are extra layers:

- Docker's bridge network and embedded DNS (service-name resolution)
- Port publishing (host ports vs container ports)
- Container lifecycle: restarts, startup order, healthchecks

Most 502/504 issues come from one of these mismatches:

- Nginx points at the wrong hostname, port, or scheme
- The containers don't share a Docker network
- The upstream app listens on 127.0.0.1 instead of 0.0.0.0
- Nginx starts (and resolves names) before the upstream is ready


2. Quick triage: distinguish 502 vs 504 vs SSL failures

502 Bad Gateway (from Nginx)

Usually means Nginx could not successfully talk to the upstream. Common reasons:

- Upstream container is down, crashed, or restarting
- Connection refused: nothing is listening on that host:port
- DNS failure resolving the upstream service name
- Upstream closed the connection prematurely
- Upstream response headers too large for Nginx's buffers

You'll often see lines like these in the Nginx error log:

connect() failed (111: Connection refused) while connecting to upstream
host not found in upstream "app"
upstream prematurely closed connection while reading response header from upstream

504 Gateway Timeout (from Nginx)

Nginx could connect to upstream but did not receive a response in time.

Typical log lines:

upstream timed out (110: Connection timed out) while reading response header from upstream

This is almost always a timeout/buffering/slow-upstream issue, not a DNS issue.

SSL/TLS failures

Symptoms include:

- Browser errors such as ERR_SSL_PROTOCOL_ERROR or certificate warnings
- Nginx or curl reporting "wrong version number" or handshake failures
- Redirect loops bouncing between http:// and https://
- Handshake failures when Nginx re-encrypts to an HTTPS upstream


3. Collect evidence: logs and live debugging commands

Before changing configs, capture facts.

3.1 Check container status and restarts

docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}\t{{.Ports}}'
docker compose ps
docker inspect -f '{{.Name}} restart={{.RestartCount}} state={{.State.Status}} health={{.State.Health.Status}}' $(docker ps -q)

If the upstream is restarting or unhealthy, Nginx will throw intermittent 502/504.

3.2 Tail Nginx logs (access + error)

If you run Nginx as a container:

docker logs -f nginx

If your image logs to files (common in custom images), exec in:

docker exec -it nginx sh
# then:
tail -n 200 /var/log/nginx/error.log
tail -n 200 /var/log/nginx/access.log

3.3 Inspect upstream logs

docker logs -f app

Look for crashes, slow queries, OOM, “listening on 127.0.0.1”, etc.

3.4 Test connectivity from inside the Nginx container

This is the fastest way to isolate “Docker networking vs Nginx config vs upstream”.

docker exec -it nginx sh
# install tools if needed (depends on base image)
# Alpine:
apk add --no-cache curl bind-tools busybox-extras
# Debian/Ubuntu:
apt-get update && apt-get install -y curl dnsutils iputils-ping netcat-traditional

# DNS resolution:
getent hosts app || nslookup app

# TCP reachability:
nc -vz app 8080

# HTTP response:
curl -v http://app:8080/health
curl -v http://app:8080/

If nc fails, Nginx won’t work either. Fix networking/addressing first.

3.5 Validate what Nginx is actually running

docker exec -it nginx nginx -T | sed -n '1,200p'

nginx -T dumps the complete config including included files—critical for catching a wrong proxy_pass or a conflicting server block.


4. The most common root causes (and fixes)

4.1 Upstream container not reachable (network/DNS)

Symptom: Nginx error log shows host not found in upstream "app" or no resolver defined.

Why it happens:

- nginx and the upstream are on different Docker networks
- The name in proxy_pass doesn't match the Compose service name
- The upstream container isn't running when Nginx resolves the name at startup
- proxy_pass uses a variable but no resolver is configured

Fixes:

Ensure both services share a network (Compose)

Example:

docker compose config

Confirm nginx and app are on the same network.

If not, in docker-compose.yml (example shown later), put them on the same network.

Use service name + container port

In Compose, prefer the service name and the container's internal port (not the published host port):

proxy_pass http://app:8080;

If you must resolve dynamically, set a resolver

For variable upstreams or DNS changes, add:

resolver 127.0.0.11 valid=30s ipv6=off;

127.0.0.11 is Docker’s embedded DNS server in bridge networks.

Note: If you use a plain proxy_pass http://app:8080; without variables, Nginx typically resolves at startup and keeps it. That’s fine for stable Compose service names, but can be problematic with changing DNS records.

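To force periodic re-resolution, put the upstream host in a variable; Nginx then resolves it through the configured resolver at request time instead of once at startup. A sketch using the app service name from this tutorial:

```nginx
resolver 127.0.0.11 valid=30s ipv6=off;

location / {
  # Using a variable forces Nginx to resolve "app" via the resolver
  # on demand rather than caching the address from startup.
  set $upstream_host app;
  proxy_pass http://$upstream_host:8080;
}
```

Note that when proxy_pass contains a variable, Nginx handles the request URI differently if you also specify a path; keep the target path-free as above unless you've accounted for that behavior.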

4.2 Wrong upstream address: localhost and host networking confusion

Symptom: 502 with connect() failed (111: Connection refused) and upstream shown as 127.0.0.1:... or localhost:....

Why it happens: Inside the Nginx container, 127.0.0.1 and localhost refer to the Nginx container itself, not the Docker host and not the app container. A proxy_pass to localhost therefore connects to nothing and is refused.

Fix options:

Option A: use the Compose service name (containers on the same network)

proxy_pass http://app:8080;

Option B: reach a service running on the Docker host

If the upstream is on the host (not in Docker), use:

In Compose, map the host gateway into the Nginx container:

extra_hosts:
  - "host.docker.internal:host-gateway"

Command-line equivalent for Linux testing:

docker run --rm -it --add-host=host.docker.internal:host-gateway alpine sh

Then in Nginx:

proxy_pass http://host.docker.internal:8080;

4.3 Upstream not listening where you think it is (bind address)

Symptom: From inside Nginx container, nc -vz app 8080 fails, but app logs say “listening on 127.0.0.1:8080”.

Why it happens: The app is bound to loopback inside its own container, so other containers can’t connect.

Fix: bind the app to 0.0.0.0. Examples:

- Node: server.listen(8080, '0.0.0.0')
- Gunicorn: gunicorn --bind 0.0.0.0:8080 app:app
- Uvicorn: uvicorn main:app --host 0.0.0.0 --port 8080

Then re-test:

docker exec -it nginx nc -vz app 8080
curl -v http://app:8080/

4.4 Nginx starts before upstream is ready (startup race)

Symptom: Nginx returns 502 for the first seconds/minutes after deployment, then “magically” works.

Why it happens:

- depends_on without a healthcheck condition only waits for the container to start, not for the app inside it to be ready
- The app may need seconds to bind its port, connect to a database, or run migrations
- Nginx resolves and connects immediately, getting connection refused in the meantime

Fixes:

Add healthchecks and gate traffic

If you can, add a health endpoint and a Compose healthcheck. Then configure Nginx to retry upstreams (limited) and/or ensure your orchestrator doesn’t route traffic until healthy.

At minimum, add robust proxy timeouts and consider proxy_next_upstream behavior (see later).

Use an upstream block with multiple servers (if applicable)

If you have replicas, Nginx can fail over.

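As a sketch (the replica service names app1 and app2 are hypothetical), an upstream block with bounded retry behavior might look like:

```nginx
upstream app_pool {
  # Mark a server as failed after 3 errors, skip it for 10s.
  server app1:8080 max_fails=3 fail_timeout=10s;
  server app2:8080 max_fails=3 fail_timeout=10s;
  # Optional last resort, used only when all primaries are down:
  # server app_fallback:8080 backup;
}

server {
  listen 80;

  location / {
    # Retry the next server on connection errors, timeouts, and 502/504,
    # but bound the retries so failures still surface quickly.
    proxy_next_upstream error timeout http_502 http_504;
    proxy_next_upstream_tries 2;
    proxy_next_upstream_timeout 10s;

    proxy_pass http://app_pool;
  }
}
```

Be careful retrying non-idempotent requests (POST); by default Nginx only retries them on errors that occurred before anything was sent to the upstream.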

4.5 Timeouts: slow upstreams, large uploads, streaming responses

Symptom: 504 Gateway Timeout, especially on long requests (file uploads, report generation, slow DB queries).

Key Nginx timeouts to understand:

- proxy_connect_timeout: how long Nginx waits to establish a TCP connection to the upstream
- proxy_send_timeout: maximum time between two successive writes to the upstream
- proxy_read_timeout: maximum time between two successive reads from the upstream
- send_timeout: maximum time between two successive writes to the client

Important nuance: proxy_read_timeout is not “total request time”. It’s the maximum time between two successive reads from upstream. If your upstream sends nothing for 60s and the timeout is 60s, Nginx will abort.

Fix (common baseline):

location / {
  proxy_connect_timeout 5s;
  proxy_send_timeout 60s;
  proxy_read_timeout 300s;
  send_timeout 300s;
  proxy_pass http://app:8080;
}

For extremely long operations, consider:

- Converting the work to an async job with a polling or webhook endpoint
- Streaming progress (SSE/WebSocket) so the connection never goes silent
- Raising proxy_read_timeout only on a dedicated location, not globally

Large uploads: client body limits

If uploads fail with 413 Request Entity Too Large or appear as 502/504 due to upstream behavior, set:

client_max_body_size 100m;
client_body_timeout 300s;

Also ensure the upstream supports that size and has its own limits.


4.6 Buffering and “upstream sent too big header”

Symptom: 502 with an error log line like:

upstream sent too big header while reading response header from upstream

Why it happens: Upstream response headers (often cookies) exceed Nginx buffer sizes.

Fix: increase proxy buffers (carefully)

proxy_buffer_size 16k;
proxy_buffers 8 32k;
proxy_busy_buffers_size 64k;

If you’re proxying to apps that set large cookies (auth tokens, session data), fix the app too—giant cookies are a performance and reliability problem.

Buffering and streaming

If you proxy Server-Sent Events (SSE) or streaming responses, buffering can break real-time behavior. Use:

proxy_buffering off;
proxy_cache off;

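A fuller sketch of an SSE-friendly location (the /events/ path is hypothetical):

```nginx
location /events/ {
  proxy_http_version 1.1;
  proxy_set_header Connection "";

  # Disable buffering/caching so events are flushed to the client immediately.
  proxy_buffering off;
  proxy_cache off;

  # SSE connections are long-lived and may be silent between events.
  proxy_read_timeout 1h;

  proxy_set_header Host $host;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_pass http://app:8080;
}
```

Alternatively, the app can send an X-Accel-Buffering: no response header to disable Nginx buffering per response without touching the config.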
4.7 WebSockets and HTTP/1.1 upgrade issues

Symptom: WebSockets connect then immediately close, or never upgrade; Nginx shows 400/502, app shows missing upgrade headers.

Why it happens: WebSockets require:

- HTTP/1.1 to the upstream (proxy_http_version 1.1; Nginx defaults to 1.0)
- The Upgrade and Connection: upgrade headers forwarded explicitly
- A read timeout long enough for idle but open connections

Fix:

map $http_upgrade $connection_upgrade {
  default upgrade;
  '' close;
}

location /ws/ {
  proxy_http_version 1.1;
  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection $connection_upgrade;

  proxy_set_header Host $host;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header X-Forwarded-Proto $scheme;

  proxy_read_timeout 3600s;
  proxy_pass http://app:8080;
}

Smoke-test the route with curl (curl does not send Sec-WebSocket-Key, so a strict server may answer 400, but this still confirms the Upgrade headers reach the upstream):

curl -i -N -H "Connection: Upgrade" -H "Upgrade: websocket" http://localhost/ws/

Or use wscat from Node.js tooling.


4.8 TLS/SSL gotchas: termination, re-encryption, redirect loops

SSL issues in reverse proxy setups usually come from mismatched expectations: who terminates TLS, what scheme the app thinks it’s on, and whether redirects are correct.

Scenario A: TLS terminates at Nginx (most common)

Client -> HTTPS -> Nginx -> HTTP -> app

In this case:

- Nginx holds the certificates and terminates TLS
- Traffic from Nginx to the app is plain HTTP on the Docker network
- The app must learn the original scheme from X-Forwarded-Proto

Nginx snippet:

server {
  listen 443 ssl http2;
  server_name example.com;

  ssl_certificate     /etc/nginx/certs/fullchain.pem;
  ssl_certificate_key /etc/nginx/certs/privkey.pem;

  location / {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://app:8080;
  }
}

Redirect loop symptom: browser loops between http/https or keeps redirecting to https behind https.

Cause: app doesn’t trust X-Forwarded-Proto and thinks it’s on HTTP, so it redirects to HTTPS; but Nginx is already on HTTPS and forwards HTTP to app again—loop.

Fix: configure the app/framework to trust proxy headers.

Examples:

- Express: app.set('trust proxy', 1)
- Django: SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
- Spring Boot: server.forward-headers-strategy=framework

Scenario B: TLS passthrough (Nginx not terminating)

If Nginx is doing TCP stream proxying (not typical for a standard Nginx HTTP reverse proxy), configuration differs. Many “wrong version number” errors happen when you accidentally mix schemes:

- proxy_pass https://app:8080; to an upstream that only speaks plain HTTP
- proxy_pass http://app:8443; to an upstream that expects TLS

Fix: match scheme to upstream reality:

- Use http:// if the upstream listens in plaintext
- Use https:// (with the proxy_ssl_* directives shown in Scenario C) if it listens with TLS

Scenario C: re-encrypt to upstream HTTPS

Client -> HTTPS -> Nginx -> HTTPS -> app

This is valid, but you must handle upstream certificates:

location / {
  proxy_pass https://app:8443;

  proxy_ssl_server_name on;
  proxy_ssl_name app; # or the upstream cert's name

  # If upstream uses a private CA, mount CA cert and set:
  proxy_ssl_trusted_certificate /etc/nginx/ca/ca.pem;
  proxy_ssl_verify on;
  proxy_ssl_verify_depth 2;
}

If the upstream certificate can't be verified (for example, self-signed without the CA mounted), you'll see handshake failures; to confirm verification is the problem, you can temporarily test with:

proxy_ssl_verify off;

4.9 Wrong Host / X-Forwarded-* headers (apps generating bad URLs)

Symptom:

- The app redirects users to internal hostnames (e.g., http://app:8080/login)
- Generated absolute URLs use the wrong host or scheme
- OAuth/SSO callbacks and cookies break behind the proxy

Why it happens: Apps often use Host and scheme to generate URLs. If you don’t forward them correctly, the app sees internal names.

Fix baseline headers:

proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;

Then configure your app to trust these headers.
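To illustrate the app side, here is a minimal sketch in plain Node (matching the test app used later in this tutorial; the helper names are made up) of deriving the external scheme from the forwarded headers rather than the raw socket:

```javascript
// Hypothetical helper: derive the client-facing scheme behind a trusted proxy.
// X-Forwarded-Proto may be a comma-separated chain when multiple proxies are
// involved; the first entry is the scheme the original client used.
function externalScheme(headers) {
  const fwd = headers['x-forwarded-proto'];
  if (!fwd) return 'http'; // no proxy header present: assume plain HTTP
  return fwd.split(',')[0].trim().toLowerCase();
}

// Example: only issue an HTTPS redirect when the external hop was HTTP.
// Without this check, an app behind TLS-terminating Nginx redirect-loops.
function needsHttpsRedirect(headers) {
  return externalScheme(headers) !== 'https';
}

console.log(externalScheme({ 'x-forwarded-proto': 'https' }));       // "https"
console.log(needsHttpsRedirect({ 'x-forwarded-proto': 'https, http' })); // false
```

Only trust these headers when the app is reachable exclusively through the proxy; a directly exposed app lets clients spoof them.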


4.10 IPv6 pitfalls inside containers

Symptom: Nginx tries IPv6 first and fails, or upstream resolves to IPv6 but your network doesn’t support it.

Fix: disable IPv6 in the resolver, or listen explicitly. If using the Docker DNS resolver:

resolver 127.0.0.11 ipv6=off;

Also consider:

listen 80;
# avoid: listen [::]:80; unless you need it and it works

5. Reference Docker Compose setup (battle-tested)

Below is a practical Compose example with:

- A shared bridge network for nginx and the app
- A healthcheck on the app, with Nginx gated on service_healthy
- The app exposed on its container port only (no published host port)
- nginx.conf and certs mounted read-only

Create a directory structure:

mkdir -p reverse-proxy/{nginx,certs}
cd reverse-proxy
touch docker-compose.yml
touch nginx/nginx.conf
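For local testing of the HTTPS server, you can generate a throwaway self-signed certificate into certs/ (not for production; browsers will warn and curl needs -k):

```shell
# Self-signed cert for local testing only. CN=localhost is an assumption;
# match it to the hostname you actually test against.
mkdir -p certs
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout certs/privkey.pem \
  -out certs/fullchain.pem \
  -days 365 -subj "/CN=localhost"
```

The filenames match the paths mounted into the Nginx container below.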

Example docker-compose.yml:

services:
  nginx:
    image: nginx:1.25-alpine
    container_name: nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certs:/etc/nginx/certs:ro
    depends_on:
      app:
        condition: service_healthy
    networks:
      - web

  app:
    image: node:20-alpine
    container_name: app
    working_dir: /app
    command: sh -c "node server.js"
    volumes:
      - ./app:/app
    expose:
      - "8080"
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:8080/health"]
      interval: 5s
      timeout: 2s
      retries: 20
    networks:
      - web

networks:
  web:
    driver: bridge

Create a minimal upstream app for testing (optional but useful):

mkdir -p app
cat > app/server.js <<'EOF'
const http = require('http');

const server = http.createServer((req, res) => {
  if (req.url === '/health') {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    return res.end('ok');
  }
  // Simulate slow endpoint
  if (req.url.startsWith('/slow')) {
    setTimeout(() => {
      res.writeHead(200, {'Content-Type': 'text/plain'});
      res.end('slow response done\n');
    }, 15000);
    return;
  }
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('hello from app\n');
});

server.listen(8080, '0.0.0.0', () => {
  console.log('listening on 0.0.0.0:8080');
});
EOF

Bring it up:

docker compose up -d
docker compose logs -f --tail=200

6. A hardened Nginx reverse proxy config for Docker

This config includes:

Create nginx/nginx.conf:

worker_processes auto;

events {
  worker_connections 1024;
}

http {
  include       /etc/nginx/mime.types;
  default_type  application/octet-stream;

  log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for" '
                  'rt=$request_time uct=$upstream_connect_time '
                  'uht=$upstream_header_time urt=$upstream_response_time';

  access_log /var/log/nginx/access.log main;
  error_log  /var/log/nginx/error.log warn;

  sendfile on;
  tcp_nopush on;
  tcp_nodelay on;
  keepalive_timeout 65;

  # Docker embedded DNS (important if you use variables or want periodic re-resolve)
  resolver 127.0.0.11 valid=30s ipv6=off;

  # WebSocket upgrade mapping
  map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
  }

  # Upstream definition (optional but nice for clarity)
  upstream app_upstream {
    server app:8080;
    keepalive 32;
  }

  # HTTP server (redirect to HTTPS if you have TLS)
  server {
    listen 80;
    server_name _;

    location = /nginx-health {
      access_log off;
      return 200 "ok\n";
    }

    # If you don't want HTTPS, you can proxy here instead of redirecting.
    # return 301 https://$host$request_uri;

    location / {
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;

      proxy_http_version 1.1;
      proxy_set_header Connection "";

      proxy_connect_timeout 5s;
      proxy_send_timeout 60s;
      proxy_read_timeout 300s;
      send_timeout 300s;

      proxy_buffer_size 16k;
      proxy_buffers 8 32k;
      proxy_busy_buffers_size 64k;

      client_max_body_size 50m;

      proxy_pass http://app_upstream;
    }
  }

  # HTTPS server (enable if you have certs mounted)
  server {
    listen 443 ssl http2;
    server_name _;

    ssl_certificate     /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    # Modern baseline; adjust to your compliance needs
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    location = /nginx-health {
      access_log off;
      return 200 "ok\n";
    }

    location /ws/ {
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection $connection_upgrade;

      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto https;

      proxy_read_timeout 3600s;
      proxy_pass http://app_upstream;
    }

    location / {
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto https;

      proxy_connect_timeout 5s;
      proxy_send_timeout 60s;
      proxy_read_timeout 300s;

      client_max_body_size 50m;

      proxy_pass http://app_upstream;
    }
  }
}

Reload Nginx after changes:

docker exec -it nginx nginx -t
docker exec -it nginx nginx -s reload

7. Step-by-step troubleshooting playbook

Use this sequence to avoid guessing.

Step 1: Confirm the upstream is healthy and listening correctly

From host:

docker logs --tail=200 app
docker exec -it app sh -c "netstat -tulpn 2>/dev/null || ss -tulpn"

Look for 0.0.0.0:8080 (good) vs 127.0.0.1:8080 (problem).

Step 2: Confirm Nginx can reach the upstream over the Docker network

docker exec -it nginx sh -c "getent hosts app && nc -vz app 8080"
docker exec -it nginx sh -c "curl -v http://app:8080/health"

If DNS fails: check that both containers share a network (docker compose config) and that the name in proxy_pass matches the service name exactly.

If TCP fails: check that the upstream is running and bound to 0.0.0.0 on the expected port (see Step 1).

Step 3: Inspect Nginx config actually loaded

docker exec -it nginx nginx -T | less

Search for:

- Every proxy_pass target (right host, right port, right scheme)
- Conflicting or duplicate server blocks (overlapping listen/server_name)
- A resolver directive if you use variables in proxy_pass

Step 4: Read the Nginx error log around a failing request

docker exec -it nginx tail -n 200 /var/log/nginx/error.log

Map log patterns to causes:

- connect() failed (111: Connection refused): nothing listening at that address (bind address, wrong port, crashed app)
- host not found in upstream: DNS/network problem (wrong name, different networks, missing resolver)
- upstream timed out (110: Connection timed out): timeout too low or upstream genuinely slow
- upstream sent too big header: increase proxy buffers (section 4.6)

Step 5: Reproduce with curl to separate client issues from proxy issues

From host to Nginx:

curl -v http://localhost/
curl -vk https://localhost/  # -k ignores cert validation for local testing

From Nginx to upstream (inside container):

docker exec -it nginx curl -v http://app:8080/

If upstream works internally but fails through Nginx, it’s likely headers, timeouts, buffering, or routing rules.

Step 6: Fix timeouts based on real behavior, not guesses

If /slow takes 15 seconds and you get 504 at 10 seconds, increase proxy_read_timeout above 15 seconds:

proxy_read_timeout 30s;

Then reload and retest:

docker exec -it nginx nginx -t && docker exec -it nginx nginx -s reload
curl -v http://localhost/slow

Step 7: Validate TLS chain and redirect logic

Check certificate details:

openssl s_client -connect localhost:443 -servername example.com -showcerts </dev/null

Check redirects:

curl -I http://localhost/
curl -Ik https://localhost/

If you see repeated 301/302 bouncing between http and https, fix:

- Nginx: forward X-Forwarded-Proto correctly (https on the 443 server)
- App: trust the proxy headers instead of inspecting the raw connection scheme

Step 8: Watch for resource exhaustion (hidden cause of timeouts)

504s can be caused by upstream CPU starvation, DB locks, or OOM kills.

Check resource usage:

docker stats
docker inspect app --format '{{json .State.OOMKilled}}'

If OOMKilled is true, increase memory limits or reduce app memory usage.
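If the app is being OOM-killed, give it an explicit memory budget in Compose (512m is an illustrative value; size it from docker stats):

```yaml
services:
  app:
    mem_limit: 512m      # hard cap enforced by the kernel
    memswap_limit: 512m  # disallow swap beyond the cap
```

Depending on your Compose version, deploy.resources.limits.memory may be the supported form instead of mem_limit.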


8. Verification checklist

After applying fixes, verify systematically:

  1. DNS and network

    docker exec -it nginx getent hosts app
    docker exec -it nginx nc -vz app 8080
  2. Upstream health

    docker exec -it nginx curl -v http://app:8080/health
  3. Nginx config validity

    docker exec -it nginx nginx -t
    docker exec -it nginx nginx -T | head -n 50
  4. End-to-end HTTP

    curl -v http://localhost/
  5. End-to-end HTTPS (if enabled)

    curl -vk https://localhost/
  6. Timeout-sensitive endpoint

    curl -v http://localhost/slow
  7. Headers correctness (scheme/host)

    curl -v http://localhost/ -H 'Host: example.com'
  8. Log sanity

    docker exec -it nginx tail -n 50 /var/log/nginx/error.log
    docker exec -it nginx tail -n 50 /var/log/nginx/access.log

Closing notes: how to prevent 502/504s long-term

- Add healthchecks and gate routing on them so traffic never hits a cold upstream
- Always bind upstream apps to 0.0.0.0 inside their containers
- Size timeouts from measured behavior (the rt/uct/uht/urt log fields above), not guesses
- Keep client_max_body_size and the app's own limits in sync
- Forward and trust the full X-Forwarded-* header set end to end
- Alert on Nginx error-log patterns (111, 110, "too big header") before users notice

If you're still stuck, the fastest path to a diagnosis is the exact Nginx error-log line for a failing request, together with your nginx.conf and docker-compose.yml.