
Local vs Production Docker Compose: Eliminate Environment Drift and Deployment Surprises

Tags: docker-compose, docker, devops, environment-drift, deployment, containers, ci-cd, infrastructure


Environment drift happens when “it works on my machine” turns into “why is it failing in production?” The root cause is usually not Docker itself, but differences in configuration, dependencies, runtime assumptions, and operational behaviors between local and production environments.

This tutorial shows a practical, repeatable approach to using Docker Compose for local development and Compose (or Compose-compatible tooling) for production without surprises. You’ll build a pattern around:

  • a single shared base file (compose.yaml) that defines the runtime contract
  • small, explicit overrides for local (compose.override.yaml) and production (compose.prod.yaml)
  • profiles for optional services and one-off jobs
  • disciplined handling of environment variables and secrets
  • rendering the final config with docker compose config to make drift visible

You’ll see real commands throughout, and you’ll end with a workflow that makes drift obvious and preventable.


1. What “environment drift” really means in Compose

In Docker Compose projects, drift commonly comes from:

  1. Different images

    • Local uses build: . (latest code, local Dockerfile changes)
    • Production uses image: myapp:oldtag or a different base image
  2. Different runtime configuration

    • Local uses .env with debug flags
    • Production injects env vars differently or misses required variables
  3. Different dependencies

    • Local runs Postgres 16, production runs Postgres 13
    • Local uses Redis without persistence, production uses persistence
  4. Different storage

    • Local uses bind mounts (./:/app) and ephemeral volumes
    • Production uses named volumes, different permissions, different paths
  5. Different networking/ports

    • Local publishes ports to host (ports: "8080:8080")
    • Production doesn’t publish ports (behind a reverse proxy), or publishes different ports
  6. Different process models

    • Local runs npm run dev with hot reload
    • Production runs node server.js and expects graceful shutdown
  7. Different operational behaviors

    • Restart policies, healthchecks, logging drivers, resource limits

Eliminating drift doesn’t mean local must look exactly like production (developer ergonomics matter). It means you intentionally control and document differences and keep the “core runtime contract” consistent.
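
The process-model difference deserves special attention at shutdown time. docker stop sends SIGTERM to the container's main process and waits (10 seconds by default) before escalating to SIGKILL. A minimal sketch of that contract, runnable anywhere with a POSIX shell:

```shell
#!/bin/sh
# A server that traps SIGTERM can finish in-flight work and exit cleanly
# instead of being killed mid-request.
SHUTDOWN=0
handle_term() {
  echo "SIGTERM received, draining and shutting down cleanly"
  SHUTDOWN=1
}
trap handle_term TERM

echo "server running as PID $$"
kill -TERM $$            # simulate docker stop delivering the signal
echo "shutdown flag: $SHUTDOWN"
```

Note that if your production command wraps the server in a shell (sh -c "..."), the shell, not the server, is PID 1 and may not forward the signal; that is exactly the kind of process-model difference worth making intentional.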


2. Principles for eliminating drift

Use these principles as guardrails:

  1. One base file owns the runtime contract: services, dependencies, healthchecks, networks, and the image naming scheme.
  2. Overrides stay small and explicit: local and production each change only what they must.
  3. Build once, deploy by tag: production runs a pinned, registry-hosted image, never an ad-hoc local build.
  4. Pin dependency versions to match production, especially databases.
  5. Make migrations and one-off jobs explicit commands, not manual rituals.
  6. Render the final config (docker compose config) and review it before running anything.


3. Project layout: one base file, explicit overrides

A practical layout:

myapp/
  compose.yaml
  compose.override.yaml
  compose.prod.yaml
  .env
  .env.prod
  docker/
    nginx/
      nginx.conf
  app/
    Dockerfile
    ...

Notes:

  • compose.override.yaml is loaded automatically by plain docker compose up, so local stays the zero-flag default.
  • compose.prod.yaml is applied only when passed explicitly with -f, so production behavior is always deliberate.
  • Keep real secrets out of .env.prod in version control; inject them at deploy time instead.

4. A base Compose file that is truly shared

Create compose.yaml:

services:
  web:
    image: myapp-web:${APP_IMAGE_TAG:-dev}
    build:
      context: ./app
      dockerfile: Dockerfile
    environment:
      APP_ENV: ${APP_ENV:-local}
      DATABASE_URL: ${DATABASE_URL:-postgresql://app:app@db:5432/app}
      REDIS_URL: ${REDIS_URL:-redis://redis:6379/0}
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      - appnet

  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
    volumes:
      - dbdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d app"]
      interval: 5s
      timeout: 3s
      retries: 20
    networks:
      - appnet

  redis:
    image: redis:7
    command: ["redis-server", "--appendonly", "yes"]
    volumes:
      - redisdata:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 20
    networks:
      - appnet

networks:
  appnet:

volumes:
  dbdata:
  redisdata:

Why this base works

  • Healthchecks plus condition: service_healthy encode real readiness, not just start order.
  • Named volumes, a shared network, and pinned major versions (postgres:16, redis:7) behave the same in every environment.
  • Every environment-specific value flows through a variable with a safe local default.

Key decision: image + build

Notice web has both:

  • an image: name driven by ${APP_IMAGE_TAG:-dev}, and
  • a build: section pointing at the local Dockerfile.

This is a useful pattern: locally, docker compose build produces the image and tags it with that name; in production, the build section is disabled and the pinned tag selects the exact CI-built artifact. One service definition, two sources for the same image name.


5. Local override: fast feedback, dev UX, safe defaults

Create compose.override.yaml (auto-loaded for local):

services:
  web:
    ports:
      - "8080:8080"
    environment:
      APP_ENV: local
      LOG_LEVEL: debug
    volumes:
      - ./app:/app
    command: ["sh", "-c", "npm install && npm run dev"]

What this changes (and why)

  • ports: publishes 8080 so you can hit the app from the host browser.
  • The ./app:/app bind mount gives instant code reload against your working tree.
  • command: switches to npm run dev; the npm install step keeps container dependencies in sync with the mounted package.json.
  • APP_ENV: local and LOG_LEVEL: debug live only in the override, so they cannot leak into production.

Local commands you’ll actually run

Start the stack:

docker compose up -d

Follow logs:

docker compose logs -f web

Rebuild after Dockerfile changes:

docker compose build web
docker compose up -d --no-deps web

Reset everything (including volumes):

docker compose down -v

6. Production override: immutable images, resilience, observability

Create compose.prod.yaml:

services:
  web:
    build: null
    image: myregistry.example.com/myapp-web:${APP_IMAGE_TAG}
    environment:
      APP_ENV: production
      LOG_LEVEL: info
    restart: unless-stopped
    ports:
      - "8080:8080"
    healthcheck:
      test: ["CMD-SHELL", "wget -qO- http://localhost:8080/health || exit 1"]
      interval: 10s
      timeout: 3s
      retries: 10

  db:
    restart: unless-stopped

  redis:
    restart: unless-stopped

Production differences explained

  • build: null removes the build section from the merged config, so production can only run the pinned registry image (newer Compose versions also support the !reset YAML tag for this).
  • restart: unless-stopped recovers services after crashes and host reboots.
  • The HTTP healthcheck verifies the app actually answers requests, not merely that the process is alive.
  • APP_ENV is forced to production and LOG_LEVEL drops to info.

Render and run production config

Render the final config:

docker compose -f compose.yaml -f compose.prod.yaml config

Run it:

export APP_IMAGE_TAG="2026-04-19.1"
docker compose -f compose.yaml -f compose.prod.yaml up -d

7. Profiles: optional services without copy-paste

Profiles let you define services that only run when requested. Common examples:

  • database admin UIs such as Adminer or pgAdmin
  • debugging and profiling sidecars
  • operational one-off jobs such as migrations or backups

Add to compose.yaml:

services:
  adminer:
    image: adminer:4
    profiles: ["debug"]
    ports:
      - "8081:8080"
    networks:
      - appnet

Run with profile:

docker compose --profile debug up -d

In production, you simply don’t enable that profile.


8. Environment variables: .env vs env_file vs runtime injection

Compose supports multiple ways to set environment variables. Misunderstanding them is a major drift source.

.env (project-level)

Compose reads the .env file sitting next to compose.yaml for variable interpolation, i.e. the ${VAR} references in your YAML. It is not automatically injected into containers.

Example .env:

APP_IMAGE_TAG=dev
APP_ENV=local

environment: (container env)

Values under environment: are what the container actually sees. Interpolation happens before the container starts, so in compose.yaml:

environment:
  APP_ENV: ${APP_ENV:-local}

This sets the container’s APP_ENV.
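
Compose's ${VAR:-default} interpolation follows POSIX shell rules, so you can sanity-check an expression in a terminal before wiring it into YAML:

```shell
# The :- form substitutes the default when the variable is unset OR empty.
unset APP_IMAGE_TAG
echo "myapp-web:${APP_IMAGE_TAG:-dev}"   # default applies

APP_IMAGE_TAG=2026-04-19.1
echo "myapp-web:${APP_IMAGE_TAG:-dev}"   # set value wins

APP_IMAGE_TAG=
echo "myapp-web:${APP_IMAGE_TAG:-dev}"   # empty also triggers the default
```

The colon matters: the plain ${VAR-default} form treats an empty-but-set variable as a real value, while the base file above deliberately uses the colon form so empty means "use the default".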

env_file: (bulk container env)

env_file: loads KEY=VALUE pairs into the container in bulk. When a key appears in both, values under environment: take precedence over env_file:.

For local:

Create .env.local.runtime:

LOG_LEVEL=debug
FEATURE_X=true

Then in compose.override.yaml:

services:
  web:
    env_file:
      - .env.local.runtime

For production, avoid committing secrets into env_file. Prefer runtime injection (CI/CD, SSH session, secrets store).

Verify what the container receives

docker compose exec web env | sort
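
To compare what two environments actually inject, capture each env listing to a file and diff them. A self-contained sketch with stand-in files (in practice you'd redirect the docker compose exec output from each side):

```shell
# Stand-ins for the captured `env | sort` output of each environment.
printf 'APP_ENV=local\nFEATURE_X=true\nLOG_LEVEL=debug\n' | sort > /tmp/env.local
printf 'APP_ENV=production\nLOG_LEVEL=info\n'             | sort > /tmp/env.prod

# comm -3 hides lines common to both: the first column is local-only,
# the indented column is production-only.
comm -3 /tmp/env.local /tmp/env.prod
```

Every line this prints is either an intentional difference or drift you just caught.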

9. Secrets: stop baking credentials into Compose files

Environment variables are convenient but not ideal for secrets: they show up in docker inspect output, logs, and child processes. Compose supports secrets in a few modes:

  • file-based (file:), mounted read-only into the container at /run/secrets/<name>
  • external (external: true), referencing a secret managed outside the Compose file
  • environment-sourced (environment:), populated from a variable at deploy time

A pragmatic approach: mount secret files and have the app read them.

Create secrets/db_password.txt (do not commit it):

mkdir -p secrets
printf "supersecret\n" > secrets/db_password.txt
chmod 600 secrets/db_password.txt

In compose.prod.yaml:

services:
  web:
    secrets:
      - db_password
    environment:
      DB_PASSWORD_FILE: /run/secrets/db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt

Your app reads the password from DB_PASSWORD_FILE.
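
Here's one way an entrypoint can implement that convention, as a sketch; the /tmp path below stands in for the /run/secrets mount Compose provides:

```shell
#!/bin/sh
# Demo setup: stand-in for the file Compose mounts at /run/secrets/db_password.
mkdir -p /tmp/run-secrets
printf 'supersecret\n' > /tmp/run-secrets/db_password
DB_PASSWORD_FILE=/tmp/run-secrets/db_password

# Entrypoint logic: if a *_FILE variable is set, load the secret from disk.
if [ -n "${DB_PASSWORD_FILE:-}" ] && [ -f "$DB_PASSWORD_FILE" ]; then
  DB_PASSWORD="$(cat "$DB_PASSWORD_FILE")"   # $() strips the trailing newline
  export DB_PASSWORD
fi

echo "password loaded: ${DB_PASSWORD:+yes}"
```

The same few lines work identically in local and production containers, which is the point: the secret path is part of the contract, not an environment-specific hack.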

Why this reduces drift: production secrets handling becomes explicit and testable, rather than “some env var exists somewhere”.


10. Data persistence: volumes, bind mounts, and drift traps

Named volumes are managed by Docker and behave consistently:

volumes:
  dbdata:

They persist across container restarts and recreation, and are removed only explicitly (for example with docker compose down -v or docker volume rm).

Bind mounts (useful for code, risky for state)

Bind mounts depend on host filesystem behavior and permissions. They are great for local code mounts:

volumes:
  - ./app:/app

But they can create drift if you bind-mount config or data in production that doesn’t exist or has wrong permissions.

Inspect volumes and disk usage

List volumes:

docker volume ls

Inspect a specific volume:

docker volume inspect myapp_dbdata

See disk usage:

docker system df

11. Healthchecks and startup order: depends_on is not enough

Without healthchecks, depends_on only controls start order, not readiness. Your app may start before Postgres is ready.

In the base file we used:

depends_on:
  db:
    condition: service_healthy

This requires:

  • a healthcheck on each dependency (db and redis in the base file), and
  • the long depends_on syntax from the Compose Specification, which modern docker compose supports.

With the settings above (5s interval, 20 retries), Compose waits up to roughly 100 seconds for a dependency to become healthy before giving up.

Test health status

docker compose ps

You should see healthy for db and redis.

To inspect health logs:

docker inspect --format='{{json .State.Health}}' myapp-db-1 | jq

(If you don’t have jq, omit it and read raw JSON.)
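
If jq isn't installed but Python is, the standard library's json.tool module pretty-prints the same output; pipe the docker inspect result into it:

```shell
# Stand-in JSON; in practice, pipe the docker inspect command above into it.
echo '{"Status": "healthy", "FailingStreak": 0}' | python3 -m json.tool
```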


12. Migrations and one-off jobs: run, don’t “hope”

A classic drift problem: local dev runs migrations manually, production forgets.

Use docker compose run for one-off tasks.

Example (Node/Prisma style):

docker compose run --rm web npx prisma migrate deploy

Or (Django):

docker compose run --rm web python manage.py migrate

Production-safe pattern: a dedicated migration service

Add to compose.yaml:

services:
  migrate:
    image: myapp-web:${APP_IMAGE_TAG:-dev}
    build:
      context: ./app
    command: ["sh", "-c", "npm run migrate"]
    depends_on:
      db:
        condition: service_healthy
    networks:
      - appnet
    profiles: ["ops"]

Run migrations when needed:

docker compose --profile ops run --rm migrate

This keeps migrations consistent across environments.


13. Building images: dev builds vs CI builds

Local: build from your working tree

docker compose build web

CI: build once, run everywhere

A common pipeline:

  1. Build image
  2. Tag with commit SHA and/or semantic version
  3. Push to registry
  4. Deploy by setting APP_IMAGE_TAG

Example commands:

# Build
docker build -t myregistry.example.com/myapp-web:$(git rev-parse --short HEAD) ./app

# Push
docker push myregistry.example.com/myapp-web:$(git rev-parse --short HEAD)

Deploy:

export APP_IMAGE_TAG="$(git rev-parse --short HEAD)"
docker compose -f compose.yaml -f compose.prod.yaml up -d

Stronger immutability: digests

If your tooling supports it, prefer digests:

docker pull myregistry.example.com/myapp-web:2026-04-19.1
docker inspect --format='{{index .RepoDigests 0}}' myregistry.example.com/myapp-web:2026-04-19.1

Then deploy using the digest (exact artifact).


14. Verifying parity: “config” is your best friend

Before you run anything, render the final Compose config.

Local rendered config

docker compose config > /tmp/compose.local.rendered.yaml

Production rendered config

docker compose -f compose.yaml -f compose.prod.yaml config > /tmp/compose.prod.rendered.yaml

Now compare:

diff -u /tmp/compose.local.rendered.yaml /tmp/compose.prod.rendered.yaml | less

You’re looking for intentional differences (ports, command, volumes) and catching accidental ones (different images, missing env vars, different networks).
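
The accidental-drift check can also be automated. Here is a hypothetical guard for a deploy pipeline that rejects any dev-tagged image in the rendered production config (the here-doc stands in for real docker compose config output):

```shell
#!/bin/sh
# Stand-in for: docker compose -f compose.yaml -f compose.prod.yaml config
cat > /tmp/compose.prod.rendered.yaml <<'EOF'
services:
  web:
    image: myregistry.example.com/myapp-web:2026-04-19.1
EOF

# Reject the deploy if any image line still ends in the local dev tag.
if grep -q ':dev$' /tmp/compose.prod.rendered.yaml; then
  echo "drift detected: dev-tagged image in production config" >&2
  RESULT=fail
else
  RESULT=ok
fi
echo "guard result: $RESULT"
```

Extend the pattern with whatever invariants matter to you: no bind mounts in prod, no published debug ports, no build: sections.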

Validate required variables

If production requires variables with no defaults, Compose will warn or fail. You can enforce this by omitting defaults:

image: myregistry.example.com/myapp-web:${APP_IMAGE_TAG}

If APP_IMAGE_TAG is missing:

docker compose -f compose.yaml -f compose.prod.yaml config

It should complain, which is good—fail early.
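
The same fail-early behavior is easy to bolt onto any deploy wrapper script using POSIX parameter expansion: ${VAR:?message} aborts the (sub)shell when the variable is unset or empty.

```shell
#!/bin/sh
# Abort before touching docker at all if the deploy tag is missing.
require_tag() {
  : "${APP_IMAGE_TAG:?APP_IMAGE_TAG must be set, e.g. 2026-04-19.1}"
}

unset APP_IMAGE_TAG
# Run the check in a subshell so the demo script itself survives the abort.
( require_tag ) 2>/dev/null && echo "unexpected" || echo "blocked: tag missing"

APP_IMAGE_TAG=2026-04-19.1
require_tag && echo "ok to deploy $APP_IMAGE_TAG"
```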


15. Deployment patterns with Compose

Compose is not Kubernetes, but it can be a solid production tool for single-host deployments.

Pattern A: Single host, direct Compose

On the server:

git pull
export APP_IMAGE_TAG="2026-04-19.1"
docker compose -f compose.yaml -f compose.prod.yaml pull
docker compose -f compose.yaml -f compose.prod.yaml up -d
docker compose -f compose.yaml -f compose.prod.yaml ps

Pattern B: Reverse proxy as a separate stack

Often you run Traefik/Nginx as an “edge” stack, and your app stack attaches to a shared network.

Create an external network:

docker network create edge

In compose.prod.yaml add:

networks:
  edge:
    external: true

services:
  web:
    networks:
      - appnet
      - edge

Now your reverse proxy can route to web without publishing ports directly.

Pattern C: Blue/green-ish with project names

Compose project names isolate resources. You can run two versions side-by-side:

export APP_IMAGE_TAG="2026-04-19.1"
docker compose -p myapp_green -f compose.yaml -f compose.prod.yaml up -d

export APP_IMAGE_TAG="2026-04-12.3"
docker compose -p myapp_blue -f compose.yaml -f compose.prod.yaml up -d

Then switch routing at the proxy layer. This is more advanced but can reduce downtime.


16. Common drift scenarios and how to prevent them

Drift: “Works locally, fails in prod due to missing system libs”

Cause: local bind mount uses host-installed tooling; production image lacks it.
Fix: ensure the Dockerfile installs everything needed; avoid relying on host tools.

Verification:

docker compose exec web sh -lc 'node -v && npm -v'

Drift: “Different database versions”

Cause: local uses postgres:16, prod uses managed Postgres 13.
Fix: pin versions intentionally and test migrations against the production version in staging. Consider using the same major version locally as production.

Drift: “Local uses DEBUG mode and permissive CORS”

Cause: local override sets flags that accidentally leak into prod.
Fix: keep production env vars in compose.prod.yaml or injected at deploy time; don’t reuse .env across environments.

Drift: “File permissions break in production”

Cause: bind mounts and UID/GID differences.
Fix: run as a non-root user in Dockerfile consistently; avoid bind-mounting writable dirs in prod unless you control permissions.

Check user:

docker compose exec web id
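
One common local mitigation, assuming your image can run as an arbitrary UID, is to run the container with your host IDs: add user: "${APP_UID}:${APP_GID}" to the web service in compose.override.yaml and export the values first:

```shell
# Export host UID/GID so Compose can interpolate them into `user:`.
APP_UID="$(id -u)"
APP_GID="$(id -g)"
export APP_UID APP_GID
echo "container would run as $APP_UID:$APP_GID"
```

This keeps bind-mounted files owned by you locally while production, which avoids writable bind mounts, is unaffected.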

Drift: “depends_on didn’t wait for DB”

Cause: no healthcheck.
Fix: add healthchecks and use condition: service_healthy.


17. A complete example workflow

This section ties everything together into a repeatable routine.

Step 1: Local development

Start:

docker compose up -d

Iterate:

docker compose logs -f web

Run tests inside the container (keeps tooling consistent):

docker compose exec web npm test

Step 2: Render and review production config before shipping

export APP_IMAGE_TAG="2026-04-19.1"
docker compose -f compose.yaml -f compose.prod.yaml config | less

Step 3: Build and push image (CI or local for demonstration)

docker build -t myregistry.example.com/myapp-web:${APP_IMAGE_TAG} ./app
docker push myregistry.example.com/myapp-web:${APP_IMAGE_TAG}

Step 4: Deploy on the server

export APP_IMAGE_TAG="2026-04-19.1"
docker compose -f compose.yaml -f compose.prod.yaml pull
docker compose -f compose.yaml -f compose.prod.yaml up -d

Check status:

docker compose -f compose.yaml -f compose.prod.yaml ps
docker compose -f compose.yaml -f compose.prod.yaml logs --tail=200 web

Step 5: Run migrations explicitly (if applicable)

docker compose -f compose.yaml -f compose.prod.yaml --profile ops run --rm migrate

Step 6: Rollback (simple but effective)

If you tag images by version, rollback is just redeploying the previous tag:

export APP_IMAGE_TAG="2026-04-12.3"
docker compose -f compose.yaml -f compose.prod.yaml up -d

Closing checklist: drift-resistant Compose projects

Use this checklist to audit your setup:

  • One shared base compose.yaml owns images, dependencies, healthchecks, and networks.
  • Local behavior lives in compose.override.yaml; production in compose.prod.yaml, always applied explicitly with -f.
  • Production runs pinned registry images; the rendered prod config contains no build: section.
  • Required production variables have no defaults, so a missing value fails fast.
  • Secrets are mounted as files, not committed into env files.
  • Migrations are an explicit, repeatable command in every environment.
  • docker compose config output is rendered and diffed before every deploy.

If you adopt the base+override+profiles approach and make docker compose config part of your routine, most “deployment surprises” become either impossible or immediately obvious—exactly what you want.