# Fix Nginx Reverse Proxy Issues in Docker: 502/504 Errors, Timeouts & SSL Gotchas
Running Nginx as a reverse proxy in Docker is common—and so are the frustrating failures: 502 Bad Gateway, 504 Gateway Timeout, random disconnects, WebSocket breakage, and SSL/TLS loops. This tutorial is a practical, command-heavy guide to diagnosing and fixing these problems with deep explanations and real Docker + Nginx configurations.
## Table of Contents

- 1. Mental model: what “reverse proxy in Docker” really means
- 2. Quick triage: distinguish 502 vs 504 vs SSL failures
- 3. Collect evidence: logs and live debugging commands
- 4. The most common root causes (and fixes)
  - 4.1 Upstream container not reachable (network/DNS)
  - 4.2 Wrong upstream address: localhost and host networking confusion
  - 4.3 Upstream not listening where you think it is (bind address)
  - 4.4 Nginx starts before upstream is ready (startup race)
  - 4.5 Timeouts: slow upstreams, large uploads, streaming responses
  - 4.6 Buffering and “upstream sent too big header”
  - 4.7 WebSockets and HTTP/1.1 upgrade issues
  - 4.8 TLS/SSL gotchas: termination, re-encryption, redirect loops
  - 4.9 Wrong Host / X-Forwarded-* headers (apps generating bad URLs)
  - 4.10 IPv6 pitfalls inside containers
- 5. Reference Docker Compose setup (battle-tested)
- 6. A hardened Nginx reverse proxy config for Docker
- 7. Step-by-step troubleshooting playbook
- 8. Verification checklist
## 1. Mental model: what “reverse proxy in Docker” really means

A reverse proxy sits in front of one or more upstream services:

```
Client (browser) -> Nginx (reverse proxy) -> Upstream app (API, web, etc.)
```

When Docker is involved, there are extra layers:

- Container networking: containers talk over a Docker network (bridge) using internal IPs and DNS names (service names in Compose).
- Port publishing (`-p 80:80`): exposes a container port to the host. This is for outside access; containers on the same network typically don’t need published ports to talk to each other.
- Name resolution: in Docker Compose, `proxy` can reach `app` via `http://app:PORT` if both are on the same network.

Most 502/504 issues come from one of these mismatches:

- Nginx tries to reach the upstream at the wrong address/port.
- The upstream is down, restarting, not ready, or bound only to localhost.
- Timeouts/buffering limits are too low for the workload.
- TLS is terminated in the wrong place or headers mislead the app.
## 2. Quick triage: distinguish 502 vs 504 vs SSL failures

### 502 Bad Gateway (from Nginx)

Usually means Nginx could not successfully talk to the upstream. Common reasons:

- Connection refused (upstream not listening / wrong port)
- No route to host (wrong network)
- Upstream closed connection early
- DNS resolution failure inside the container

You’ll often see these in the Nginx error log:

```
connect() failed (111: Connection refused) while connecting to upstream
host not found in upstream
upstream prematurely closed connection
```

### 504 Gateway Timeout (from Nginx)

Nginx could connect to the upstream but did not receive a response in time. Typical log line:

```
upstream timed out (110: Connection timed out) while reading response header from upstream
```

This is almost always a timeout/buffering/slow-upstream issue, not a DNS issue.

### SSL/TLS failures

Symptoms include:

- Browser shows `ERR_SSL_PROTOCOL_ERROR`, `NET::ERR_CERT...`, or endless redirects.
- `curl` shows `SSL routines:... wrong version number` (often means you spoke HTTPS to an HTTP port or vice versa).
## 3. Collect evidence: logs and live debugging commands

Before changing configs, capture facts.

### 3.1 Check container status and restarts

```shell
docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}\t{{.Ports}}'
docker compose ps
docker inspect -f '{{.Name}} restart={{.RestartCount}} state={{.State.Status}} health={{.State.Health.Status}}' $(docker ps -q)
```

If the upstream is restarting or unhealthy, Nginx will throw intermittent 502/504 errors.

### 3.2 Tail Nginx logs (access + error)

If you run Nginx as a container:

```shell
docker logs -f nginx
```

If your image logs to files (common in custom images), exec in:

```shell
docker exec -it nginx sh
# then:
tail -n 200 /var/log/nginx/error.log
tail -n 200 /var/log/nginx/access.log
```

### 3.3 Inspect upstream logs

```shell
docker logs -f app
```

Look for crashes, slow queries, OOM kills, “listening on 127.0.0.1”, etc.

### 3.4 Test connectivity from inside the Nginx container

This is the fastest way to isolate “Docker networking vs Nginx config vs upstream”.

```shell
docker exec -it nginx sh

# install tools if needed (depends on base image)
# Alpine:
apk add --no-cache curl bind-tools busybox-extras
# Debian/Ubuntu:
apt-get update && apt-get install -y curl dnsutils iputils-ping netcat-traditional

# DNS resolution:
getent hosts app || nslookup app
# TCP reachability:
nc -vz app 8080
# HTTP response:
curl -v http://app:8080/health
curl -v http://app:8080/
```

If `nc` fails, Nginx won’t work either. Fix networking/addressing first.

### 3.5 Validate what Nginx is actually running

```shell
docker exec -it nginx nginx -T | sed -n '1,200p'
```

`nginx -T` dumps the complete config including included files—critical for catching a wrong `proxy_pass` or a conflicting `server` block.
## 4. The most common root causes (and fixes)

### 4.1 Upstream container not reachable (network/DNS)

Symptom: Nginx error log shows `host not found in upstream "app"` or `no resolver defined`.

Why it happens:

- Nginx resolves upstream hostnames at startup (depending on config). If the DNS name isn’t available yet, Nginx may fail to start or keep a stale resolution.
- Containers are not on the same Docker network.
- You used a hostname that only exists on the host, not inside Docker.

Fixes:

Ensure both services share a network (Compose):

```shell
docker compose config
```

Confirm `nginx` and `app` are on the same network. If not, put them on the same network in `docker-compose.yml` (example shown later).

Use service name + container port. In Compose, prefer:

- `proxy_pass http://app:8080;` (container-to-container)
- Not `proxy_pass http://localhost:8080;` (that points to the Nginx container itself)

If you must resolve dynamically, set a resolver. For variable upstreams or changing DNS records, add:

```nginx
resolver 127.0.0.11 valid=30s ipv6=off;
```

`127.0.0.11` is Docker’s embedded DNS server in bridge networks.

Note: if you use a plain `proxy_pass http://app:8080;` without variables, Nginx typically resolves at startup and caches the result. That’s fine for stable Compose service names, but can be problematic with changing DNS records.
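If you do need runtime re-resolution (for example, `app` containers being recreated with new IPs), the standard trick is to put the hostname in a variable, which forces Nginx to consult the resolver at request time. A minimal sketch, assuming the same `app:8080` upstream as above:

```nginx
# Runtime DNS resolution: a variable in proxy_pass makes Nginx use the
# resolver per request instead of caching the address at startup.
resolver 127.0.0.11 valid=30s ipv6=off;

location / {
    set $upstream_app http://app:8080;
    proxy_pass $upstream_app;
}
```

Be aware that with a variable in `proxy_pass`, Nginx no longer applies its usual URI replacement for the location prefix, so re-test path handling after switching.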
### 4.2 Wrong upstream address: localhost and host networking confusion

Symptom: 502 with `connect() failed (111: Connection refused)` and the upstream shown as `127.0.0.1:...` or `localhost:...`.

Why it happens: inside the Nginx container, `localhost` means the Nginx container itself—not the host and not the app container.

Fix options:

Option A (recommended): use the service name on the Docker network

```nginx
proxy_pass http://app:8080;
```

Option B: reach a service running on the Docker host

If the upstream is on the host (not in Docker), use:

- Docker Desktop (Mac/Windows): `host.docker.internal`
- Linux: add a host-gateway mapping in Compose. Command-line equivalent for testing:

```shell
docker run --rm -it --add-host=host.docker.internal:host-gateway alpine sh
```

Then in Nginx:

```nginx
proxy_pass http://host.docker.internal:8080;
```
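On Linux, the Compose-level equivalent of the `--add-host` flag above is `extra_hosts` with the special `host-gateway` value (supported by recent Docker versions). A sketch for the proxy service:

```yaml
services:
  nginx:
    image: nginx:1.25-alpine
    extra_hosts:
      # Maps host.docker.internal to the host's gateway IP on Linux,
      # matching the name Docker Desktop provides out of the box.
      - "host.docker.internal:host-gateway"
```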
### 4.3 Upstream not listening where you think it is (bind address)

Symptom: from inside the Nginx container, `nc -vz app 8080` fails, but the app logs say “listening on 127.0.0.1:8080”.

Why it happens: the app is bound to loopback inside its own container, so other containers can’t connect.

Fix: bind to 0.0.0.0. Examples:

- Node.js: `node server.js --host 0.0.0.0 --port 8080`
- Python (uvicorn): `uvicorn main:app --host 0.0.0.0 --port 8080`
- Gunicorn: `gunicorn -b 0.0.0.0:8080 wsgi:app`

Then re-test from inside the Nginx container:

```shell
docker exec -it nginx nc -vz app 8080
docker exec -it nginx curl -v http://app:8080/
```
### 4.4 Nginx starts before upstream is ready (startup race)

Symptom: Nginx returns 502 for the first seconds/minutes after deployment, then “magically” works.

Why it happens:

- Compose `depends_on` only controls start order, not readiness (unless you combine it with a healthcheck via `condition: service_healthy`).
- Your app needs time for migrations, warming caches, connecting to the DB, etc.

Fixes:

Add healthchecks and gate traffic. If you can, add a health endpoint and a Compose healthcheck. Then configure Nginx to retry upstreams (limited) and/or ensure your orchestrator doesn’t route traffic until healthy. At minimum, add robust proxy timeouts and consider `proxy_next_upstream` behavior (see later).

Use an upstream block with multiple servers (if applicable). If you have replicas, Nginx can fail over.
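A sketch of failover with limited retries, assuming two hypothetical replicas named `app1` and `app2` (adjust the names and conditions to your setup):

```nginx
upstream app_pool {
    server app1:8080 max_fails=3 fail_timeout=10s;
    server app2:8080 max_fails=3 fail_timeout=10s;
}

location / {
    # Retry the next server on connection-level failures, timeouts,
    # and 502/503 responses; cap retry time so clients aren't held forever.
    proxy_next_upstream error timeout http_502 http_503;
    proxy_next_upstream_timeout 10s;
    proxy_next_upstream_tries 2;
    proxy_pass http://app_pool;
}
```

Keep retries conservative: retrying non-idempotent requests (POSTs) can cause duplicate side effects.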
### 4.5 Timeouts: slow upstreams, large uploads, streaming responses

Symptom: 504 Gateway Timeout, especially on long requests (file uploads, report generation, slow DB queries).

Key Nginx timeouts to understand:

- `proxy_connect_timeout`: time to establish the TCP connection to the upstream.
- `proxy_send_timeout`: time allowed between two successive writes when sending the request to the upstream.
- `proxy_read_timeout`: time allowed between two successive reads of the upstream response (critical for long processing).
- `send_timeout`: time allowed between two successive writes of the response to the client.

Important nuance: `proxy_read_timeout` is not “total request time”. It’s the maximum time between two successive reads from the upstream. If your upstream sends nothing for 60s and the timeout is 60s, Nginx will abort.

Fix (common baseline):

```nginx
location / {
    proxy_connect_timeout 5s;
    proxy_send_timeout    60s;
    proxy_read_timeout    300s;
    send_timeout          300s;
    proxy_pass http://app:8080;
}
```

For extremely long operations, consider:

- Making the upstream stream periodic output (if possible).
- Offloading to background jobs and returning a job ID.
- Increasing timeouts carefully (but don’t hide a broken app).

Large uploads: client body limits

If uploads fail with `413 Request Entity Too Large`, or appear as 502/504 due to upstream behavior, set:

```nginx
client_max_body_size 100m;
client_body_timeout  300s;
```

Also ensure the upstream supports that size and has its own limits configured.
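A simple way to pick a value: measure the slowest observed gap between upstream writes, then add headroom. The helper below is our own rule-of-thumb convention (50% margin, 5s floor), not an Nginx feature:

```shell
# suggest_read_timeout: given the slowest observed silent gap in seconds,
# print a proxy_read_timeout value with 50% headroom (minimum 5s).
suggest_read_timeout() {
  observed_s=$1
  t=$(( observed_s * 3 / 2 ))   # integer math: observed * 1.5
  [ "$t" -lt 5 ] && t=5          # never go below a 5s floor
  echo "${t}s"
}

suggest_read_timeout 15   # → 22s (e.g. for a 15s slow endpoint)
```

The point is to derive the directive from measured behavior rather than copying a large value blindly.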
### 4.6 Buffering and “upstream sent too big header”

Symptom: 502 with an error log line like:

```
upstream sent too big header while reading response header from upstream
```

Why it happens: upstream response headers (often cookies) exceed Nginx buffer sizes.

Fix: increase proxy buffers (carefully)

```nginx
proxy_buffer_size       16k;
proxy_buffers           8 32k;
proxy_busy_buffers_size 64k;
```

If you’re proxying to apps that set large cookies (auth tokens, session data), fix the app too—giant cookies are a performance and reliability problem.

Buffering and streaming

If you proxy Server-Sent Events (SSE) or streaming responses, buffering can break real-time behavior. Use:

```nginx
proxy_buffering off;
proxy_cache off;
```
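Put together, a dedicated streaming location might look like this (the `/events/` path is an assumption for illustration):

```nginx
location /events/ {
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_buffering off;        # forward upstream chunks immediately
    proxy_cache off;
    proxy_read_timeout 3600s;   # SSE streams are long-lived
    proxy_pass http://app:8080;
}
```

Alternatively, an upstream can disable buffering per response by sending the `X-Accel-Buffering: no` header, leaving buffering on for everything else.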
### 4.7 WebSockets and HTTP/1.1 upgrade issues

Symptom: WebSockets connect then immediately close, or never upgrade; Nginx shows 400/502, the app complains about missing upgrade headers.

Why it happens: WebSockets require:

- HTTP/1.1 to the upstream
- `Upgrade` and `Connection` headers passed through

Fix:

```nginx
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

location /ws/ {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_read_timeout 3600s;
    proxy_pass http://app:8080;
}
```

Test with:

```shell
curl -i -N -H "Connection: Upgrade" -H "Upgrade: websocket" http://localhost/ws/
```

A successful upgrade returns `HTTP/1.1 101 Switching Protocols` (some servers also require the `Sec-WebSocket-Key` and `Sec-WebSocket-Version` handshake headers before they answer 101). Or use `wscat` from Node.js tooling.
### 4.8 TLS/SSL gotchas: termination, re-encryption, redirect loops

SSL issues in reverse proxy setups usually come from mismatched expectations: who terminates TLS, what scheme the app thinks it’s on, and whether redirects are correct.

Scenario A: TLS terminates at Nginx (most common)

```
Client -> HTTPS -> Nginx -> HTTP -> app
```

In this case:

- Nginx must present a valid cert.
- Nginx must tell the app the original scheme was HTTPS using `X-Forwarded-Proto: https`.
- The app must trust proxy headers (framework setting).

Nginx snippet:

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://app:8080;
    }
}
```

Redirect loop symptom: the browser loops between http and https, or keeps redirecting to https while already on https.

Cause: the app doesn’t trust `X-Forwarded-Proto` and thinks it’s on HTTP, so it redirects to HTTPS; but Nginx is already on HTTPS and forwards plain HTTP to the app again—loop.

Fix: configure the app/framework to trust proxy headers. Examples:

- Express behind a proxy: `app.set('trust proxy', true)`
- Django: `SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')` and `USE_X_FORWARDED_HOST = True` (if needed)
- Rails: `config.force_ssl = true` and ensure proxy headers are honored.

Scenario B: TLS passthrough (Nginx not terminating)

If Nginx is doing TCP stream proxying (not typical for a standard Nginx HTTP reverse proxy), the configuration differs. Many “wrong version number” errors happen when you accidentally write `proxy_pass https://app:8080;` but the app speaks plain HTTP on 8080.

Fix: match the scheme to upstream reality:

- If the upstream is HTTP: `proxy_pass http://app:8080;`
- If the upstream is HTTPS: `proxy_pass https://app:8443;` and configure `proxy_ssl_*` settings if needed.

Scenario C: re-encrypt to upstream HTTPS

```
Client -> HTTPS -> Nginx -> HTTPS -> app
```

This is valid, but you must handle upstream certificates:

```nginx
location / {
    proxy_pass https://app:8443;
    proxy_ssl_server_name on;
    proxy_ssl_name app;  # or the upstream cert's name

    # If upstream uses a private CA, mount the CA cert and set:
    proxy_ssl_trusted_certificate /etc/nginx/ca/ca.pem;
    proxy_ssl_verify on;
    proxy_ssl_verify_depth 2;
}
```

If verification fails you’ll see handshake errors in the log; to confirm that verification is the problem, you can temporarily test with (not recommended for production):

```nginx
proxy_ssl_verify off;
```
### 4.9 Wrong Host / X-Forwarded-* headers (apps generating bad URLs)

Symptom:

- App generates redirects to `http://app:8080/...` or the wrong domain.
- OAuth callbacks mismatch.
- Absolute URLs in responses are incorrect.

Why it happens: apps often use the `Host` header and scheme to generate URLs. If you don’t forward them correctly, the app sees internal names.

Fix baseline headers:

```nginx
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;
```

Then configure your app to trust these headers.
### 4.10 IPv6 pitfalls inside containers

Symptom: Nginx tries IPv6 first and fails, or the upstream resolves to IPv6 but your network doesn’t support it.

Fix: disable IPv6 in the resolver, or listen explicitly. If using the Docker DNS resolver:

```nginx
resolver 127.0.0.11 ipv6=off;
```

Also consider:

```nginx
listen 80;
# avoid: listen [::]:80; unless you need it and it works
```
## 5. Reference Docker Compose setup (battle-tested)

Below is a practical Compose example with:

- `nginx` reverse proxy
- `app` upstream service
- shared network
- mounted Nginx config and certs
- healthcheck for `app`

Create a directory structure:

```shell
mkdir -p reverse-proxy/{nginx,certs}
cd reverse-proxy
touch docker-compose.yml
touch nginx/nginx.conf
```

Example `docker-compose.yml`:

```yaml
services:
  nginx:
    image: nginx:1.25-alpine
    container_name: nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certs:/etc/nginx/certs:ro
    depends_on:
      app:
        condition: service_healthy
    networks:
      - web

  app:
    image: node:20-alpine
    container_name: app
    working_dir: /app
    command: sh -c "node server.js"
    volumes:
      - ./app:/app
    expose:
      - "8080"
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:8080/health"]
      interval: 5s
      timeout: 2s
      retries: 20
    networks:
      - web

networks:
  web:
    driver: bridge
```
Create a minimal upstream app for testing (optional but useful):

```shell
mkdir -p app
cat > app/server.js <<'EOF'
const http = require('http');

const server = http.createServer((req, res) => {
  if (req.url === '/health') {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    return res.end('ok');
  }
  // Simulate a slow endpoint
  if (req.url.startsWith('/slow')) {
    setTimeout(() => {
      res.writeHead(200, {'Content-Type': 'text/plain'});
      res.end('slow response done\n');
    }, 15000);
    return;
  }
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('hello from app\n');
});

server.listen(8080, '0.0.0.0', () => {
  console.log('listening on 0.0.0.0:8080');
});
EOF
```

Bring it up:

```shell
docker compose up -d
docker compose logs -f --tail=200
```
## 6. A hardened Nginx reverse proxy config for Docker

This config includes:

- Proper proxy headers
- Timeouts
- WebSocket support
- Docker DNS resolver
- Reasonable buffer settings
- HTTP -> HTTPS redirect (optional)
- A `/nginx-health` endpoint

Create `nginx/nginx.conf`:
```nginx
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" '
                    'rt=$request_time uct=$upstream_connect_time '
                    'uht=$upstream_header_time urt=$upstream_response_time';

    access_log /var/log/nginx/access.log main;
    error_log  /var/log/nginx/error.log warn;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;

    # Docker embedded DNS (important if you use variables or want periodic re-resolve)
    resolver 127.0.0.11 valid=30s ipv6=off;

    # WebSocket upgrade mapping
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    # Upstream definition (optional but nice for clarity)
    upstream app_upstream {
        server app:8080;
        keepalive 32;
    }

    # HTTP server (redirect to HTTPS if you have TLS)
    server {
        listen 80;
        server_name _;

        location = /nginx-health {
            access_log off;
            return 200 "ok\n";
        }

        # If you don't want HTTPS, you can proxy here instead of redirecting.
        # return 301 https://$host$request_uri;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            proxy_http_version 1.1;
            proxy_set_header Connection "";

            proxy_connect_timeout 5s;
            proxy_send_timeout    60s;
            proxy_read_timeout    300s;
            send_timeout          300s;

            proxy_buffer_size       16k;
            proxy_buffers           8 32k;
            proxy_busy_buffers_size 64k;

            client_max_body_size 50m;

            proxy_pass http://app_upstream;
        }
    }

    # HTTPS server (enable if you have certs mounted)
    server {
        listen 443 ssl http2;
        server_name _;

        ssl_certificate     /etc/nginx/certs/fullchain.pem;
        ssl_certificate_key /etc/nginx/certs/privkey.pem;

        # Modern baseline; adjust to your compliance needs
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 10m;

        location = /nginx-health {
            access_log off;
            return 200 "ok\n";
        }

        location /ws/ {
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            proxy_read_timeout 3600s;
            proxy_pass http://app_upstream;
        }

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;

            proxy_connect_timeout 5s;
            proxy_send_timeout    60s;
            proxy_read_timeout    300s;

            client_max_body_size 50m;

            proxy_pass http://app_upstream;
        }
    }
}
```
Reload Nginx after changes:

```shell
docker exec -it nginx nginx -t
docker exec -it nginx nginx -s reload
```
## 7. Step-by-step troubleshooting playbook

Use this sequence to avoid guessing.

### Step 1: Confirm the upstream is healthy and listening correctly

From the host:

```shell
docker logs --tail=200 app
docker exec -it app sh -c "netstat -tulpn 2>/dev/null || ss -tulpn"
```

Look for `0.0.0.0:8080` (good) vs `127.0.0.1:8080` (problem).

### Step 2: Confirm Nginx can reach the upstream over the Docker network

```shell
docker exec -it nginx sh -c "getent hosts app && nc -vz app 8080"
docker exec -it nginx sh -c "curl -v http://app:8080/health"
```

If DNS fails:

- ensure both containers share the same network
- consider `resolver 127.0.0.11`

If TCP fails:

- wrong port
- upstream not listening
- network mismatch
### Step 3: Inspect the Nginx config actually loaded

```shell
docker exec -it nginx nginx -T | less
```

Search for:

- `proxy_pass`
- `server_name`
- `listen`
- conflicting `location` blocks
- an accidental `return 301` causing loops

### Step 4: Read the Nginx error log around a failing request

```shell
docker exec -it nginx tail -n 200 /var/log/nginx/error.log
```

Map log patterns to causes:

- `connect() failed (111: Connection refused)` → upstream down / wrong port / bind address
- `host not found in upstream` → DNS / service name / network
- `upstream timed out ... while reading response header` → `proxy_read_timeout` too low or upstream too slow
- `upstream sent too big header` → buffer sizes / cookies
- `SSL_do_handshake() failed` → upstream HTTPS mismatch or cert verification
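The pattern table above can be turned into a quick classifier for triaging error logs in bulk; the function name and the cause strings are our own illustrative convention, not Nginx output:

```shell
# classify_nginx_error: map a raw error-log line to a likely root cause,
# following the pattern table above.
classify_nginx_error() {
  case "$1" in
    *"Connection refused"*)         echo "upstream down, wrong port, or loopback bind" ;;
    *"host not found in upstream"*) echo "DNS / service name / network" ;;
    *"upstream timed out"*)         echo "proxy_read_timeout too low or upstream too slow" ;;
    *"too big header"*)             echo "proxy buffer sizes / large cookies" ;;
    *"SSL_do_handshake"*)           echo "upstream TLS mismatch or cert verification" ;;
    *)                              echo "unclassified" ;;
  esac
}

classify_nginx_error 'connect() failed (111: Connection refused) while connecting to upstream'
# → upstream down, wrong port, or loopback bind
```

Pipe it over a log tail (`while read -r line; do classify_nginx_error "$line"; done`) to see which cause dominates.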
### Step 5: Reproduce with curl to separate client issues from proxy issues

From the host to Nginx:

```shell
curl -v http://localhost/
curl -vk https://localhost/   # -k skips cert validation for local testing
```

From Nginx to the upstream (inside the container):

```shell
docker exec -it nginx curl -v http://app:8080/
```

If the upstream works internally but fails through Nginx, it’s likely headers, timeouts, buffering, or routing rules.

### Step 6: Fix timeouts based on real behavior, not guesses

If `/slow` takes 15 seconds and you get a 504 at 10 seconds, raise `proxy_read_timeout` above 15 seconds:

```nginx
proxy_read_timeout 30s;
```

Then reload and retest:

```shell
docker exec -it nginx nginx -t && docker exec -it nginx nginx -s reload
curl -v http://localhost/slow
```
### Step 7: Validate the TLS chain and redirect logic

Check certificate details:

```shell
openssl s_client -connect localhost:443 -servername example.com -showcerts </dev/null
```

Check redirects:

```shell
curl -I http://localhost/
curl -Ik https://localhost/
```

If you see repeated 301/302 responses bouncing between http and https, fix:

- Nginx redirect rules
- `X-Forwarded-Proto`
- the app’s “trust proxy” settings

### Step 8: Watch for resource exhaustion (a hidden cause of timeouts)

504s can be caused by upstream CPU starvation, DB locks, or OOM kills. Check resource usage:

```shell
docker stats
docker inspect app --format '{{json .State.OOMKilled}}'
```

If `OOMKilled` is `true`, increase memory limits or reduce the app’s memory usage.
## 8. Verification checklist

After applying fixes, verify systematically:

- DNS and network:

  ```shell
  docker exec -it nginx getent hosts app
  docker exec -it nginx nc -vz app 8080
  ```

- Upstream health:

  ```shell
  docker exec -it nginx curl -v http://app:8080/health
  ```

- Nginx config validity:

  ```shell
  docker exec -it nginx nginx -t
  docker exec -it nginx nginx -T | head -n 50
  ```

- End-to-end HTTP:

  ```shell
  curl -v http://localhost/
  ```

- End-to-end HTTPS (if enabled):

  ```shell
  curl -vk https://localhost/
  ```

- Timeout-sensitive endpoint:

  ```shell
  curl -v http://localhost/slow
  ```

- Headers correctness (scheme/host):

  ```shell
  curl -v http://localhost/ -H 'Host: example.com'
  ```

- Log sanity:

  ```shell
  docker exec -it nginx tail -n 50 /var/log/nginx/error.log
  docker exec -it nginx tail -n 50 /var/log/nginx/access.log
  ```
## Closing notes: how to prevent 502/504s long-term

- Prefer service-name networking (`app:8080`) over published ports and `localhost`.
- Ensure the upstream binds to 0.0.0.0, not 127.0.0.1.
- Add healthchecks and avoid routing traffic to unready services.
- Tune timeouts to match real request characteristics; don’t mask slow apps indefinitely.
- Forward correct `X-Forwarded-*` headers and configure the app to trust the proxy.
- Treat TLS as a design choice: decide where it terminates, then make scheme/redirect behavior consistent.

If you share your `nginx.conf`, `docker-compose.yml`, and the exact Nginx error log line for a failing request, the diagnosis can be narrowed to a specific fix quickly.