
Docker for Local Development vs Production: Architecture and Workflow Trade-offs

Tags: docker, local development, production, devops, containers, docker compose, ci/cd, architecture, workflow, best practices

Docker is often introduced as “it works on my machine” insurance, but the real value (and the real pain) comes from deciding how you use containers in different environments. A setup that feels perfect for local development can be slow, insecure, or operationally awkward in production. Conversely, a production-optimized image can be miserable to iterate on locally.

This tutorial explains the architectural and workflow trade-offs between local development and production Docker usage, with concrete patterns, real commands, and practical examples. The goal is to help you design a container strategy that is fast for developers, reliable for operations, and consistent across environments without forcing them to be identical.


Table of Contents

1. Mental model: what changes between dev and prod
2. Images vs containers vs volumes: the moving parts
3. What “good” looks like locally
4. What “good” looks like in production
5. Dockerfile patterns: dev vs prod
6. Docker Compose patterns: dev vs prod
7. Environment configuration: secrets, env vars, and config files
8. Networking differences and common pitfalls
9. Build and release workflow
10. Data and migrations: dev convenience vs prod safety
11. Anti-patterns to avoid
12. A reference architecture you can adapt


1. Mental model: what changes between dev and prod

The biggest misconception is that dev and prod should be identical. They should be compatible, not identical.

What changes:

- Goals: local dev optimizes for iteration speed; production optimizes for reliability, security, and repeatability.
- Code delivery: bind-mounted source locally vs immutable images in production.
- Tooling: debuggers and hot reload locally vs a minimal runtime surface in production.
- State: disposable local databases vs carefully managed production data.

A useful principle:

Dev containers are tools. Production containers are products.


2. Images vs containers vs volumes: the moving parts

You’ll make better trade-offs if you keep these concepts distinct:

- Image: an immutable, layered filesystem template built from a Dockerfile.
- Container: a running (or stopped) instance of an image with its own writable layer.
- Volume: storage managed by Docker that outlives individual containers.

Key commands:

# List images
docker images

# List containers (running)
docker ps

# List all containers
docker ps -a

# List volumes
docker volume ls

In local dev, you’ll often rely on:

- bind mounts for live source code
- anonymous or named volumes for dependencies and database data
- images built on the developer’s machine

In production, you’ll prefer:

- immutable images pulled from a registry by tag or digest
- named volumes or external storage services for state
- no bind mounts tied to a host filesystem layout


3. What “good” looks like locally

3.1 Fast feedback loops

Local development success is measured in seconds: how long it takes to see a code change running, to restart a service, and to rebuild after changing a dependency.

Common local workflow:

# Build and start
docker compose up --build

# Follow logs for one service
docker compose logs -f web

# Open a shell in a running container
docker compose exec web sh

If rebuilds are slow, developers will bypass Docker. That defeats the goal of consistency.

3.2 Bind mounts vs baked-in code

Bind mounts are the default for local iteration: source files on the host are mounted into the container, so edits show up immediately without rebuilding the image.

Example (Compose snippet):

services:
  web:
    volumes:
      - ./src:/app/src

Baked-in code (copying source into the image) is closer to production: the image is self-contained and behaves like the artifact you will ship, but every change requires a rebuild.

A common compromise: bind mount the source while keeping dependencies baked into the image, and periodically run the production target locally to catch drift.
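Newer Compose versions (v2.22+) also offer a file-watch mode as an alternative to bind mounts; a sketch, assuming the multi-stage `dev` target used later in this tutorial:

```yaml
services:
  web:
    build:
      context: .
      target: dev
    develop:
      watch:
        # Copy source changes into the running container
        - action: sync
          path: ./src
          target: /app/src
        # Rebuild the image when dependencies change
        - action: rebuild
          path: package.json
```

Start it with `docker compose watch` instead of `docker compose up`.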

3.3 Dev dependencies and debugging tools

Local containers often include:

- debuggers and REPLs
- file watchers and hot-reload tooling
- compilers and full build toolchains
- shells and network utilities (curl, psql, etc.)

This is fine locally, but these tools enlarge images and increase attack surface in production.

You can explicitly install dev-only tools in a dev stage:

# syntax=docker/dockerfile:1

FROM node:22-alpine AS dev
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
CMD ["npm", "run", "dev"]

Then have a separate production stage with fewer dependencies (shown later).

3.4 Local databases and state

Local stacks often include Postgres/Redis/etc. via Compose:

docker compose up -d db redis

You’ll typically want:

- a named volume so data survives container restarts
- a quick way to wipe that data and start fresh
- seed data so a fresh database is immediately usable

Commands:

# Reset everything including volumes (DANGEROUS: deletes DB data)
docker compose down -v

# Remove dangling volumes not used by any container
docker volume prune

Local trade-off: persistence improves convenience; easy resets improve test reliability.
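One way to get both: keep a persistent db service for day-to-day work and add a throwaway database for tests. A sketch (the service name db_test is illustrative) using tmpfs, so the data lives in memory and vanishes on shutdown:

```yaml
services:
  db_test:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app_test
    # tmpfs keeps the data directory in memory: fast, and gone on "down"
    tmpfs:
      - /var/lib/postgresql/data
```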


4. What “good” looks like in production

4.1 Immutability and repeatability

In production, you want:

- immutable, versioned images (rebuild, never patch in place)
- deploys that are repeatable from the registry alone
- rollbacks that amount to “deploy the previous tag”

A production container should start from docker run with only runtime configuration:

docker run -d \
  --name myapp \
  -p 8080:8080 \
  -e NODE_ENV=production \
  myorg/myapp:1.4.2

If you need to “patch” something, you rebuild and redeploy.

4.2 Security posture

Production images should:

- run as a non-root user
- start from a minimal, pinned base image
- exclude compilers, debuggers, and dev dependencies
- contain no secrets

Example hardening steps: create a dedicated user in the Dockerfile and switch to it with USER, use multi-stage builds so build tooling never reaches the final image, and drop capabilities at runtime.

Runtime example:

docker run -d \
  --read-only \
  --tmpfs /tmp:rw,noexec,nosuid,size=64m \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  -p 8080:8080 \
  myorg/myapp:1.4.2

Not every app can run fully read-only, but it’s a useful target.

4.3 Observability and operability

Production containers should:

- log to stdout/stderr so the platform can collect logs
- expose a health endpoint the orchestrator can probe
- shut down cleanly on SIGTERM
- expose metrics if your platform scrapes them

Logging rule of thumb:

If logs are written to /var/log inside the container, you’re probably doing it wrong.

Check logs:

docker logs -f myapp
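For plain Docker (outside an orchestrator with its own probes), you can also declare a healthcheck in the image itself. A sketch, assuming a /health endpoint like the one used later in the Compose example:

```dockerfile
# Reports health status in "docker ps" and "docker inspect"
HEALTHCHECK --interval=10s --timeout=2s --retries=5 \
  CMD wget -qO- http://localhost:8080/health || exit 1
```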

4.4 Scaling and orchestration assumptions

Local Compose is a single machine. Production might include:

- multiple hosts with an orchestrator (Kubernetes, ECS, Nomad)
- load balancers in front of several replicas
- automatic rescheduling when a node dies

This affects your architecture:

- containers must be stateless, with state pushed to databases or object storage
- any replica must be able to handle any request
- processes must tolerate being stopped and rescheduled at any time


5. Dockerfile patterns: dev vs prod

5.1 Single Dockerfile with multi-stage targets

This is often the best balance: one Dockerfile, multiple targets.

Example for a Node.js web app:

# syntax=docker/dockerfile:1

FROM node:22-alpine AS base
WORKDIR /app
COPY package*.json ./

FROM base AS deps
RUN npm ci

FROM deps AS dev
COPY . .
ENV NODE_ENV=development
CMD ["npm", "run", "dev"]

FROM deps AS build
COPY . .
RUN npm run build

FROM base AS prod-deps
RUN npm ci --omit=dev

FROM node:22-alpine AS prod
WORKDIR /app
ENV NODE_ENV=production

# Create non-root user
RUN addgroup -S app && adduser -S app -G app

# Copy production dependencies only, not the dev-stage node_modules
COPY --from=build /app/dist ./dist
COPY --from=prod-deps /app/node_modules ./node_modules
COPY package*.json ./

USER app
EXPOSE 8080
CMD ["node", "dist/server.js"]

Build and run dev target:

docker build --target dev -t myapp:dev .
docker run --rm -p 8080:8080 myapp:dev

Build and run prod target:

docker build --target prod -t myapp:prod .
docker run --rm -p 8080:8080 myapp:prod

Trade-offs: one source of truth, shared layers, and no drift between dev and prod bases, in exchange for a longer, more complex Dockerfile that every target shares.

5.2 Separate Dockerfiles

You might use:

- Dockerfile.dev for local development
- Dockerfile for production

This can be simpler for teams early on, but it risks drift: base images, dependency versions, and build steps diverge silently until “works locally” and “works in prod” become different claims.

If you go this route, keep them intentionally aligned and review both in PRs.

Build with a custom Dockerfile:

docker build -f Dockerfile.dev -t myapp:dev .

5.3 Caching and layer strategy

Build performance matters in both dev and CI.

Key idea: copy dependency manifests first, install deps, then copy source.

Bad (invalidates cache on every change):

COPY . .
RUN npm ci

Better:

COPY package*.json ./
RUN npm ci
COPY . .

BuildKit is the default builder in current Docker releases; on older installations, enable it explicitly:

DOCKER_BUILDKIT=1 docker build -t myapp:prod .
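BuildKit also enables cache mounts, which keep package-manager caches across builds without baking them into a layer. A sketch for npm (the cache path is npm’s default):

```dockerfile
# syntax=docker/dockerfile:1
FROM node:22-alpine
WORKDIR /app
COPY package*.json ./
# Persist npm's download cache between builds; it never ends up in the image
RUN --mount=type=cache,target=/root/.npm npm ci
COPY . .
```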

6. Docker Compose patterns: dev vs prod

6.1 A practical Compose dev setup

A typical dev stack includes:

- the app itself, built from the dev target with hot reload
- backing services such as Postgres and Redis
- a shared default network so services find each other by name

Example compose.yaml:

services:
  web:
    build:
      context: .
      target: dev
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
      REDIS_URL: redis://redis:6379
    volumes:
      - ./:/app
      - /app/node_modules
    depends_on:
      - db
      - redis
    command: ["npm", "run", "dev"]

  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
    ports:
      - "5432:5432"
    volumes:
      - db_data:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  db_data:

Notes on the volumes:

- `./:/app` bind mounts the project so code changes are visible immediately.
- `/app/node_modules` is an anonymous volume that shadows that path in the bind mount, so dependencies installed in the image aren’t hidden by (or mixed with) whatever is on the host.

Run it:

docker compose up --build

Recreate a single service:

docker compose up -d --no-deps --build web

6.2 Production Compose: when it fits and when it doesn’t

Compose can be used in production for:

- single-host deployments
- small internal tools and staging environments

But it lacks many orchestration features (native rolling updates, scheduling across nodes, etc.). If you need:

- multi-node scheduling and automatic failover
- autoscaling
- zero-downtime rolling deploys

…you’ll likely use Kubernetes/ECS/Nomad.

If you do use Compose in production, avoid dev conveniences:

- no bind mounts of source code
- no build: on the production host; pull a versioned image instead
- add restart policies and healthchecks

Example production-ish Compose snippet:

services:
  web:
    image: myorg/myapp:1.4.2
    ports:
      - "8080:8080"
    environment:
      NODE_ENV: production
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:8080/health"]
      interval: 10s
      timeout: 2s
      retries: 5

Deploy:

docker compose pull
docker compose up -d

6.3 Profiles and overrides

Compose profiles let you include optional services:

docker compose --profile tools up

You can also use multiple files:

docker compose -f compose.yaml -f compose.prod.yaml up -d

A common pattern:

- compose.yaml holds the shared service definitions
- compose.override.yaml (loaded automatically) adds dev conveniences like bind mounts
- compose.prod.yaml swaps in pinned images and production settings

Be careful: overrides can create “it works locally” drift if production never runs the same service definitions.
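A sketch of what a production override file might contain (the image tag is illustrative; note that list-valued fields like volumes are merged, not replaced, so dev-only mounts belong in the override file rather than the base):

```yaml
# compose.prod.yaml — applied on top of compose.yaml
services:
  web:
    image: myorg/myapp:1.4.2     # pull a pinned image instead of building
    restart: unless-stopped
    environment:
      NODE_ENV: production
```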


7. Environment configuration: secrets, env vars, and config files

Local development often uses .env files:

# .env (local only)
DATABASE_URL=postgres://app:app@localhost:5432/app

Compose automatically loads .env in the project directory.
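Compose also interpolates variables from that .env file into the Compose file itself, with :- style defaults, e.g.:

```yaml
services:
  web:
    # APP_VERSION comes from .env or the shell; falls back to "latest"
    image: myorg/myapp:${APP_VERSION:-latest}
```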

Production guidance:

- never bake secrets into images or commit them to the repo
- inject configuration at runtime (environment variables, mounted files, or a secret store)
- keep a single source of truth per environment for what configuration is set

Inspect environment variables of a running container:

docker exec -it myapp env | sort

If you must pass env vars at runtime:

docker run --rm \
  --env-file .env.production \
  myorg/myapp:1.4.2

For secrets, consider platform-specific secret stores (Kubernetes Secrets, AWS Secrets Manager, etc.). If you’re using plain Docker, you can at least avoid putting secrets in the image by mounting files:

docker run --rm \
  -v "$PWD/secrets/db_password.txt:/run/secrets/db_password:ro" \
  myorg/myapp:1.4.2

Then your app reads /run/secrets/db_password.


8. Networking differences and common pitfalls

Local Compose gives you a built-in DNS: service names resolve automatically.

From web, the hostname db resolves to the Postgres container. That’s why:

postgres://app:app@db:5432/app

works in Compose.

Pitfalls:

- localhost inside a container is the container itself, not your laptop; on Docker Desktop, use host.docker.internal to reach the host.
- service-name DNS only exists on the Compose network; tools on the host must use published ports like localhost:5432.
- published ports (“5432:5432”) are for the host; containers talk to each other on the container port over the shared network.

Test connectivity from inside a container:

docker compose exec web sh -lc "apk add --no-cache curl && curl -sS http://web:8080/health"

(Installing packages inside a running container is fine for local debugging, but don’t rely on it in production.)


9. Build and release workflow

9.1 Local build commands

Build a production image locally:

docker build --target prod -t myorg/myapp:local-prod .

Run it:

docker run --rm -p 8080:8080 myorg/myapp:local-prod

Inspect layers/history:

docker history myorg/myapp:local-prod

Inspect image metadata:

docker inspect myorg/myapp:local-prod | less

9.2 CI build commands and provenance

In CI, you want:

- reproducible builds from a clean checkout
- immutable tags (version and/or git SHA)
- images pushed to a registry that production pulls from

Build and push:

docker build --target prod -t registry.example.com/myorg/myapp:1.4.2 .
docker push registry.example.com/myorg/myapp:1.4.2

If you build multi-arch images (amd64/arm64), use buildx:

docker buildx create --use
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --target prod \
  -t registry.example.com/myorg/myapp:1.4.2 \
  --push .

9.3 Tagging strategy

Avoid relying on latest in production. Prefer:

- semantic version tags (1.4.2)
- git SHA tags for exact provenance
- image digests when you need byte-for-byte pinning

Example:

# Tag with git SHA
GIT_SHA=$(git rev-parse --short HEAD)
docker tag myorg/myapp:local-prod registry.example.com/myorg/myapp:git-$GIT_SHA
docker push registry.example.com/myorg/myapp:git-$GIT_SHA

10. Data and migrations: dev convenience vs prod safety

In local dev, it’s common to run migrations automatically on startup. In production, this can be risky:

- multiple replicas can race to run the same migration
- a bad migration couples a schema change to a deploy, complicating rollback
- long-running migrations can hold locks while traffic is flowing

Local approach (convenient): run migrations automatically when the container starts.

Production approach (safer): run migrations as an explicit, one-off step before (or separately from) rolling out the new app version.

One-off migration run with Compose:

docker compose run --rm web npm run migrate

One-off migration run with plain Docker:

docker run --rm \
  -e DATABASE_URL="postgres://..." \
  myorg/myapp:1.4.2 \
  npm run migrate
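Combined with the profiles from section 6.3, a sketch of a dedicated migration service (the service name and profile are illustrative):

```yaml
services:
  migrate:
    image: myorg/myapp:1.4.2
    profiles: ["tools"]          # not started by a plain "up"
    command: ["npm", "run", "migrate"]
    environment:
      DATABASE_URL: ${DATABASE_URL}
```

Because explicitly targeted services activate their own profiles, `docker compose run --rm migrate` runs it on demand without it ever joining a normal `up`.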

Trade-off summary: automatic migrations optimize for convenience and onboarding; explicit migrations optimize for control, observability, and safe rollback.


11. Anti-patterns to avoid

  1. Using the same container image for dev and prod without stages

    • You’ll ship debuggers, compilers, and dev dependencies to production.
  2. Bind mounting in production

    • Makes deployments non-reproducible and couples runtime to host filesystem layout.
  3. Storing persistent data inside the container filesystem

    • Containers are ephemeral; you will lose data on reschedule/recreate.
  4. Running as root by default

    • Increases blast radius of container escape or app compromise.
  5. Relying on latest

    • You can’t reliably roll back or audit what changed.
  6. “Fixing” production by exec-ing into containers

    • Creates configuration drift and undocumented changes.
  7. Assuming localhost means the same thing everywhere

    • Local host vs container network vs cluster networking differ.

12. A reference architecture you can adapt

A pragmatic approach used by many teams:

Development architecture

- the dev target of a single multi-stage Dockerfile, run via Compose
- source bind mounted for hot reload; dependencies baked into the image
- Postgres/Redis as Compose services with named volumes

Commands:

# Start everything
docker compose up --build

# Reset DB when needed
docker compose down -v
docker compose up -d db

Production architecture

- the prod target built in CI, tagged with version and git SHA, pushed to a registry
- deployed by an orchestrator, or by pinned-image Compose on a single host
- non-root user, healthchecks, logs to stdout, config injected at runtime

A minimal “prod-like” local smoke test:

# Build prod image
docker build --target prod -t myapp:smoke .

# Run with minimal env
docker run --rm -p 8080:8080 -e NODE_ENV=production myapp:smoke

This catches issues like:

- files missing from the image because they were only ever bind mounted
- configuration silently supplied by .env locally but absent in production
- permission errors introduced by running as a non-root user


Closing guidance: choose compatibility over sameness

The most resilient Docker strategy is compatibility over sameness: one build definition, environment-appropriate outputs, and the same runtime contract (ports, env vars, health endpoints) everywhere.

Use multi-stage builds to keep one source of truth while producing environment-appropriate images. Use Compose to make local onboarding trivial, but don’t assume Compose semantics match your production orchestrator. Finally, build a habit of running a prod-like smoke test locally or in CI so you catch drift early—without sacrificing developer productivity.