
Deploying Applications with Docker Compose: Advanced Patterns for Production

Tags: docker compose, docker, container deployment, devops, ci/cd, infrastructure as code, microservices, secrets management, zero downtime deployment, observability

Docker Compose is often introduced as a local-development convenience: “run my app plus a database with one command.” In production, the same tool can be used responsibly—provided you treat the Compose file as an operational artifact, design for failure, and adopt patterns that support upgrades, observability, and security.

This tutorial focuses on advanced production patterns for Docker Compose, including: multi-service architecture, health checks and dependency gating, zero/low-downtime deployments, secrets, hardened containers, logging/metrics, reverse proxy and TLS, data durability, and operational workflows.

Assumptions: You have Docker Engine and the docker compose plugin installed (Compose v2). You are deploying to a single host (VM or bare metal). If you need multi-node orchestration, consider Kubernetes or Swarm, but many teams successfully run Compose on a single production node.



1. Baseline: What “production Compose” means

Compose in production is viable when you embrace a few constraints and practices: a single host as your failure domain, immutable image tags, health-gated startup, private networks behind a single TLS-terminating entry point, file-based secrets, log rotation, tested backups, and a scripted upgrade/rollback workflow.


2. Project structure and environment strategy

A robust layout:

myapp/
  compose.yaml
  compose.prod.yaml
  compose.monitoring.yaml
  .env
  env/
    prod.env
  secrets/
    db_password.txt
    jwt_secret.txt
  nginx/
    conf.d/
      app.conf
  scripts/
    backup-db.sh
    deploy.sh

Compose file layering

Compose supports multiple files: later files are merged over earlier ones, so compose.yaml holds the shared baseline while compose.prod.yaml and compose.monitoring.yaml layer production-specific and monitoring-specific overrides on top.

Run:

docker compose -f compose.yaml -f compose.prod.yaml up -d
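The layout above names a compose.prod.yaml but never shows one. A minimal overlay might look like this; the specific overrides (log rotation, environment passthrough for the proxy) are illustrative and should be adapted to your stack:

```yaml
# compose.prod.yaml -- production overrides merged over compose.yaml
services:
  app:
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "5"
  caddy:
    # pass through values from .env so Caddy can resolve {$VAR} placeholders
    environment:
      DOMAIN: ${DOMAIN}
      LETSENCRYPT_EMAIL: ${LETSENCRYPT_EMAIL}
```

Keys set in the overlay replace or extend the corresponding keys in the base file; everything else is inherited unchanged.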

Environment variables: .env vs env_file

A common pattern: use .env for values that parameterize the Compose file itself (image tags, domains), and env_file for variables injected into containers at runtime (application configuration).

Example .env:

TAG=2026-02-16_9f2c1a7
DOMAIN=example.com
LETSENCRYPT_EMAIL=ops@example.com

3. A production-grade Compose stack (example)

Below is a realistic stack: Postgres, Redis, the application, a background worker, a one-shot migration job, and a Caddy reverse proxy that terminates TLS.

Create compose.yaml:

services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - backend
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d app"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    command: ["redis-server", "--appendonly", "yes"]
    volumes:
      - redisdata:/data
    networks:
      - backend
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 3s
      retries: 10
    restart: unless-stopped

  app:
    image: ghcr.io/acme/myapp:${TAG}
    env_file:
      - ./env/prod.env
    environment:
      DATABASE_URL: postgres://app@postgres:5432/app
      DATABASE_PASSWORD_FILE: /run/secrets/db_password
      REDIS_URL: redis://redis:6379/0
      JWT_SECRET_FILE: /run/secrets/jwt_secret
    secrets:
      - db_password
      - jwt_secret
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      - backend
      - edge
    healthcheck:
      test: ["CMD-SHELL", "wget -qO- http://127.0.0.1:8080/healthz || exit 1"]
      interval: 10s
      timeout: 3s
      retries: 10
      start_period: 20s
    restart: unless-stopped

  worker:
    image: ghcr.io/acme/myapp:${TAG}
    env_file:
      - ./env/prod.env
    environment:
      DATABASE_URL: postgres://app@postgres:5432/app
      DATABASE_PASSWORD_FILE: /run/secrets/db_password
      REDIS_URL: redis://redis:6379/0
    secrets:
      - db_password
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    command: ["./bin/worker"]
    networks:
      - backend
    restart: unless-stopped

  migrate:
    image: ghcr.io/acme/myapp:${TAG}
    env_file:
      - ./env/prod.env
    environment:
      DATABASE_URL: postgres://app@postgres:5432/app
      DATABASE_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password
    depends_on:
      postgres:
        condition: service_healthy
    command: ["./bin/migrate"]
    networks:
      - backend
    restart: "no"
    profiles: ["ops"]

  caddy:
    image: caddy:2-alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - edge
    restart: unless-stopped

networks:
  backend:
    internal: true
  edge:

volumes:
  pgdata:
  redisdata:
  caddy_data:
  caddy_config:

secrets:
  db_password:
    file: ./secrets/db_password.txt
  jwt_secret:
    file: ./secrets/jwt_secret.txt

Why this design works

- The backend network is internal, so Postgres and Redis are unreachable from outside the host.
- Only caddy publishes host ports; the app is reached exclusively through the proxy.
- depends_on with condition: service_healthy gates startup on real readiness, not just container start.
- Secrets are mounted as files under /run/secrets rather than passed as plain environment values.
- app, worker, and migrate run the same image and env_file, keeping behavior consistent across roles.

4. Health checks, readiness, and dependency gating

Health checks are not optional

Without health checks, Compose can start containers in dependency order but cannot know when a service is actually ready. In production, “container started” is not “service ready.”

Examples from the stack above: pg_isready for Postgres, redis-cli ping for Redis, and an HTTP probe against the app's /healthz.

Use start_period for apps that need warm-up time, so early failed probes don't count against the retry budget.

depends_on with conditions (Compose v2)

Compose supports:

depends_on:
  postgres:
    condition: service_healthy

This prevents the app from starting until the Postgres health check passes. It does not guarantee that the app will never see transient failures; it just improves startup reliability.
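Because depends_on only gates the first start, the application should also retry its dependencies at runtime. If it can't, a small wrapper in the image entrypoint is a common stopgap. A sketch, where the check command and retry budget are whatever fits your service:

```shell
#!/usr/bin/env sh
# wait_for sketch: retry a readiness check before running the real command.
set -eu

# wait_for "<check command>" <max_attempts> <real command...>
wait_for() {
  check="$1"; max="$2"; shift 2
  attempt=1
  until sh -c "$check"; do
    if [ "$attempt" -ge "$max" ]; then
      echo "dependency not ready after $attempt attempts" >&2
      return 1
    fi
    attempt=$((attempt + 1))
    sleep 1
  done
  "$@"
}

# example: gate a command on a (here trivially passing) readiness probe;
# in a real entrypoint the check might be "pg_isready -h postgres -U app"
wait_for "true" 5 echo "dependencies ready"
```

In a container you would call this from the entrypoint and end with `exec` of the real process so signals are delivered correctly.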

Make your health endpoint meaningful

A good health endpoint is more than "the process responds": it should check database connectivity, cache connectivity, and any critical downstream dependency.

If you can, implement two endpoints: /livez for liveness (the process is up) and /readyz for readiness (dependencies are reachable and the service can do useful work).

Then configure the Compose health check to use readiness:

healthcheck:
  test: ["CMD-SHELL", "wget -qO- http://127.0.0.1:8080/readyz || exit 1"]

5. Reverse proxy, TLS, and safe exposure

Publishing ports directly from the app container is tempting but risky. A reverse proxy provides: TLS termination, a single hardened entry point, security headers, compression, and a stable place to switch upstreams during deployments.

Caddy configuration (automatic TLS)

Create caddy/Caddyfile:

{
  email {$LETSENCRYPT_EMAIL}
}

{$DOMAIN} {
  encode gzip zstd

  @health path /healthz /readyz /livez
  handle @health {
    reverse_proxy app:8080
  }

  handle {
    reverse_proxy app:8080
  }

  header {
    Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
    X-Content-Type-Options "nosniff"
    X-Frame-Options "DENY"
    Referrer-Policy "no-referrer"
  }
}

Caddy resolves {$VAR} placeholders from its own process environment, so the values must reach the container; exporting them on the host alone is not enough. Since .env already defines them, pass them through in the caddy service:

environment:
  DOMAIN: ${DOMAIN}
  LETSENCRYPT_EMAIL: ${LETSENCRYPT_EMAIL}

Then start the proxy:

docker compose up -d caddy

Only the proxy publishes ports

In the Compose example, only caddy has:

ports:
  - "80:80"
  - "443:443"

Everything else stays private.


6. Secrets and sensitive configuration

Prefer file-based secrets

Compose “secrets” are best when using Docker Swarm, but even in non-Swarm mode Compose will mount the secret file into the container at /run/secrets/<name>.

Create secrets:

mkdir -p secrets
openssl rand -base64 32 > secrets/db_password.txt
openssl rand -base64 64 > secrets/jwt_secret.txt
chmod 0400 secrets/*.txt

In non-Swarm mode these secrets are bind mounts that keep host ownership and permissions, so make sure each file is readable by the UID the consuming container runs as.

Use *_FILE environment variables

Many images support *_FILE natively (e.g., Postgres). For your app, implement a small config loader that reads from a file path if present.

Example runtime environment:

environment:
  DATABASE_PASSWORD_FILE: /run/secrets/db_password
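On the app side, the *_FILE convention can be implemented in a few lines of entrypoint shell. The variable names below match this stack; the helper itself is a generic sketch:

```shell
#!/usr/bin/env sh
# Entrypoint sketch: for each VAR_FILE variable that is set, read the
# file's contents into VAR so the application only sees plain variables.
set -eu

file_env() {
  var="$1"
  file_var="${var}_FILE"
  # indirect expansion in POSIX sh via eval
  eval "file=\"\${$file_var:-}\""
  if [ -n "$file" ] && [ -r "$file" ]; then
    eval "export $var=\"\$(cat \"\$file\")\""
  fi
}

file_env DATABASE_PASSWORD
file_env JWT_SECRET

# then hand off to the real process, e.g.: exec "$@"
```

This keeps secrets out of the Compose file and out of docker inspect output, while the application code stays unaware of the file indirection.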

Avoid leaking secrets into logs

Common pitfalls: dumping the full environment at startup, logging configuration objects that embed secrets, including passwords in connection-string error messages, and passing secrets as command-line arguments where they show up in ps output.


7. Hardening containers: least privilege and safer defaults

Production Compose should explicitly reduce container privileges.

Run as non-root

If your image supports it:

user: "10001:10001"

If it does not, update your Dockerfile to create and use a non-root user.

Read-only filesystem + tmpfs

For services that don’t need to write to the container filesystem:

read_only: true
tmpfs:
  - /tmp
  - /run

Be careful: some apps need writable directories for caches, PID files, or certificates. Mount specific writable volumes rather than allowing broad writes.

Drop Linux capabilities

Most apps don’t need extra capabilities:

cap_drop:
  - ALL
security_opt:
  - no-new-privileges:true

If something breaks, add back only what you need (principle of least privilege).
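Putting the hardening knobs together, a service entry might combine them like this; the UID and tmpfs paths are illustrative and must be verified against your image:

```yaml
services:
  app:
    user: "10001:10001"        # non-root UID:GID baked into the image
    read_only: true
    tmpfs:
      - /tmp                   # writable scratch space only
    cap_drop:
      - ALL
    security_opt:
      - no-new-privileges:true
```

Apply these one at a time and re-run your health checks after each change, so a breakage is easy to attribute.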

Limit host exposure

Publish as few host ports as possible: in this stack only the proxy binds 80 and 443, and the backend network is internal. If you must expose an admin port, bind it to 127.0.0.1 and reach it over an SSH tunnel.

8. Data durability: volumes, backups, and migrations

Named volumes and persistence

Named volumes (pgdata, redisdata) persist across container recreation:

docker volume ls
docker volume inspect myapp_pgdata

Backing up Postgres (real commands)

A simple backup script scripts/backup-db.sh:

#!/usr/bin/env bash
set -euo pipefail

TS="$(date -u +%Y%m%dT%H%M%SZ)"
OUT="backup_${TS}.sql.gz"

docker compose exec -T postgres sh -lc \
  'pg_dump -U app -d app' | gzip -9 > "$OUT"

echo "Wrote $OUT"

Run:

chmod +x scripts/backup-db.sh
./scripts/backup-db.sh

For large databases, consider pg_dump options, pg_dumpall, or physical backups. Also consider storing backups off-host (S3, etc.).
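Backups are only useful if restores work. A restore counterpart to the backup script, written as a function so it can be reviewed before it touches anything; the service, user, and database names match the stack above:

```shell
#!/usr/bin/env sh
set -eu

# restore_db sketch: stream a compressed dump back into Postgres.
# Destructive by design; rehearse against a staging stack first.
restore_db() {
  dump="$1"
  [ -r "$dump" ] || { echo "cannot read $dump" >&2; return 1; }
  gunzip -c "$dump" | docker compose exec -T postgres psql -U app -d app
}

# usage: restore_db backup_20260216T120000Z.sql.gz
```

Pair this with a scheduled restore rehearsal so you learn about broken backups before an incident does.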

Migrations as a one-shot job

The migrate service is under an ops profile:

docker compose --profile ops run --rm migrate

This pattern ensures migrations run with the same image and config as the app, reducing “works on CI but not in prod” drift.


9. Logging and observability patterns

Default logging driver and rotation

Docker’s default json-file logging can fill disks if you don’t rotate. Add logging options per service (or via x-logging extension fields).

Example snippet:

x-logging: &default-logging
  driver: "json-file"
  options:
    max-size: "10m"
    max-file: "5"

services:
  app:
    logging: *default-logging
  worker:
    logging: *default-logging
  postgres:
    logging: *default-logging

Centralized logs (optional)

If you need centralized logs, common patterns include: the journald driver feeding the host journal, a lightweight shipper such as Fluent Bit or Vector tailing container logs, or a logging driver that forwards straight to your aggregator.

Example: use journald (host must support it):

logging:
  driver: journald

Metrics and health visibility

Even without a full monitoring stack, you can inspect health:

docker compose ps
docker inspect --format='{{json .State.Health}}' myapp-app-1 | jq

If you add Prometheus + Grafana, ensure they are on a private network and protected (basic auth, VPN, or firewall rules).


10. Deployment workflows: upgrades, rollbacks, and zero-downtime-ish patterns

Compose does not provide rolling updates like Kubernetes. Still, you can implement safe workflows.

Immutable tags and controlled upgrades

Build and push:

docker build -t ghcr.io/acme/myapp:9f2c1a7 .
docker push ghcr.io/acme/myapp:9f2c1a7

Update .env:

sed -i.bak 's/^TAG=.*/TAG=9f2c1a7/' .env

Pull and recreate:

docker compose pull
docker compose up -d

Run migrations before switching traffic

A common sequence:

docker compose pull app worker migrate
docker compose --profile ops run --rm migrate
docker compose up -d app worker
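The layout in section 2 reserves a scripts/deploy.sh for exactly this sequence. A sketch that wires the steps together; the function only runs when invoked with a tag, so the script is safe to source and review:

```shell
#!/usr/bin/env sh
# deploy.sh sketch: pin the tag, run migrations, then roll the app services.
set -eu

deploy() {
  tag="$1"
  # record the tag so subsequent `docker compose` invocations use it
  sed -i.bak "s/^TAG=.*/TAG=${tag}/" .env

  docker compose pull app worker migrate
  docker compose --profile ops run --rm migrate
  docker compose up -d app worker
  docker compose ps
}

# usage: ./scripts/deploy.sh 9f2c1a7
if [ "${1:-}" ]; then deploy "$1"; fi
```

Because the tag ends up in .env, the same value is used for app, worker, and migrate, and the .bak file doubles as a record of the previous tag for rollback.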

Low-downtime pattern: blue/green with two Compose projects

You can run two stacks side-by-side by using different project names and different “edge” attachment, then switch the proxy upstream.

  1. Start “blue”:
docker compose -p myapp-blue up -d
  2. Start “green” with a new tag:
TAG=9f2c1a7 docker compose -p myapp-green up -d
  3. Point the reverse proxy at the green app (how depends on your proxy). With Caddy, you might maintain two upstreams and flip by editing the Caddyfile and reloading:
docker compose exec caddy caddy reload --config /etc/caddy/Caddyfile
  4. After verifying, remove blue:
docker compose -p myapp-blue down

Trade-offs: running two full stacks roughly doubles resource usage during the switchover, the projects need distinct names (and cannot both publish the same host ports), and shared state such as the database must be schema-compatible with both versions.

Rollback strategy

Rollback is simply re-deploying the previous tag:

sed -i.bak 's/^TAG=.*/TAG=previous_sha/' .env
docker compose pull
docker compose up -d

If migrations were destructive, rollback may be impossible. Production-grade systems often use: backward-compatible (expand/contract) migrations, so the previous app version still works against the new schema, plus a backup taken immediately before each migration.


11. Resource management and reliability

Restart policies

Use:

restart: unless-stopped

This restarts containers after crashes or host reboots.

Memory/CPU limits (Compose “deploy” caveat)

The deploy: section is primarily for Swarm, but Compose v2 supports some resource constraints. For portability, test on your target environment.

Example:

services:
  app:
    deploy:
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M

If you find deploy is ignored in your setup, consider using: the non-Swarm equivalents mem_limit and cpus directly on the service, or cgroup/systemd resource limits on the host.

Ulimits and file descriptors

High-load services often need more open files:

ulimits:
  nofile:
    soft: 65535
    hard: 65535

Graceful shutdowns

Ensure your app handles SIGTERM and stops accepting new requests before exiting. Configure stop grace period:

stop_grace_period: 30s
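What "handles SIGTERM" means in practice: docker stop sends SIGTERM, waits up to stop_grace_period, then sends SIGKILL. A minimal sh sketch of the trap pattern; here the signal is self-sent so the demo is self-contained:

```shell
#!/usr/bin/env sh
# Graceful-shutdown sketch: trap SIGTERM, drain, then exit cleanly.
set -eu

DRAINING=0
on_term() {
  DRAINING=1
  echo "SIGTERM received: stop accepting new work, flush in-flight work"
}
trap on_term TERM

# simulate `docker stop` delivering SIGTERM to the process
kill -TERM $$

# a real main loop would notice DRAINING and exit 0 before the grace period ends
[ "$DRAINING" -eq 1 ] && echo "drained, exiting cleanly"
```

In a container, also make sure your app runs as PID 1 (or under an init shim like tini) so the signal actually reaches it rather than a wrapping shell.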

12. Operational commands and troubleshooting

Inspect running services

docker compose ps
docker compose logs -f app
docker compose logs --since=1h worker

Execute commands inside containers

docker compose exec app sh
docker compose exec postgres psql -U app -d app

Validate configuration

docker compose config
docker compose config --profiles

Check health status

docker compose ps
docker inspect --format='{{.State.Health.Status}}' myapp-app-1

Prune safely (disk pressure)

Be cautious: pruning can remove unused images needed for rollbacks.

See usage:

docker system df
docker image ls
docker volume ls

Remove dangling images:

docker image prune

Remove unused images (more aggressive):

docker image prune -a

13. Compose pitfalls in production (and mitigations)

Pitfall: Exposing databases with ports:

Mitigation: remove published ports; use internal networks. If you need admin access, temporarily publish Postgres only on localhost (127.0.0.1:5432:5432) and tunnel in:

ssh -L 5432:127.0.0.1:5432 user@server

Or skip publishing entirely and use docker compose exec postgres psql -U app -d app.

Pitfall: Unbounded logs filling disk

Mitigation: configure log rotation (max-size, max-file) or use journald/centralized logging.

Pitfall: Mutable image tags like latest

Mitigation: use immutable tags (Git SHA) and keep a deployment record.

Pitfall: No backup/restore rehearsal

Mitigation: regularly test restoring backups to a staging environment.

Pitfall: “Works after restart” dependency issues

Mitigation: health checks + readiness endpoints + retry logic in the application.

Pitfall: Secrets in environment variables

Mitigation: file-based secrets and careful logging hygiene.


Putting it all together: a practical deployment runbook

A minimal, repeatable deployment flow could look like this:

  1. Update tag and pull images:
export TAG=9f2c1a7
docker compose pull
  2. Run migrations:
docker compose --profile ops run --rm migrate
  3. Recreate app and worker:
docker compose up -d app worker
  4. Verify health:
docker compose ps
curl -fsS https://example.com/healthz
  5. Check logs briefly:
docker compose logs --since=10m app worker

Conclusion

Docker Compose can support production deployments when you treat it as a disciplined operational tool: isolate networks, terminate TLS at a proxy, use health checks and meaningful readiness, handle secrets safely, persist and back up data, rotate logs, and adopt an upgrade/rollback workflow built around immutable image tags.

As a next step, audit your own compose.yaml against these patterns: networks, health checks, secrets handling, logging, and deployment flow, and harden the gaps you find one change at a time.