Docker for Local Development vs Production: Architecture and Workflow Trade-offs
Docker is often introduced as “it works on my machine” insurance, but the real value (and the real pain) comes from deciding how you use containers in different environments. A setup that feels perfect for local development can be slow, insecure, or operationally awkward in production. Conversely, a production-optimized image can be miserable to iterate on locally.
This tutorial explains the architectural and workflow trade-offs between local development and production Docker usage, with concrete patterns, real commands, and practical examples. The goal is to help you design a container strategy that is fast for developers, reliable for operations, and consistent across environments without forcing them to be identical.
Table of Contents
- 1. Mental model: what changes between dev and prod
- 2. Images vs containers vs volumes: the moving parts
- 3. What “good” looks like locally
- 4. What “good” looks like in production
- 5. Dockerfile patterns: dev vs prod
- 6. Docker Compose patterns: dev vs prod
- 7. Environment configuration: secrets, env vars, and config files
- 8. Networking differences and common pitfalls
- 9. Build and release workflow
- 10. Data and migrations: dev convenience vs prod safety
- 11. Anti-patterns to avoid
- 12. A reference architecture you can adapt
1. Mental model: what changes between dev and prod
The biggest misconception is that dev and prod should be identical. They should be compatible, not identical.
What changes:
- Speed vs certainty
  - Local dev optimizes for iteration speed and introspection.
  - Production optimizes for deterministic deployments and minimal moving parts.
- Mutability vs immutability
  - Local containers often mutate: you install tools, run shells, tweak configs.
  - Production containers should be immutable artifacts: what you run is what you built.
- Trust boundary
  - Locally, the developer is trusted and has full access.
  - In production, assume compromise: least privilege, minimal packages, non-root, read-only FS where possible.
- State
  - Local: state is often disposable and frequently reset.
  - Production: state is precious; volumes, backups, migrations, and retention matter.
- Orchestration
  - Local: Docker Compose is common, single host, predictable.
  - Production: Kubernetes, ECS, Nomad, Swarm, or managed platforms; scheduling, health checks, rolling updates.
A useful principle:
Dev containers are tools. Production containers are products.
2. Images vs containers vs volumes: the moving parts
You’ll make better trade-offs if you keep these concepts distinct:
- Image: a build artifact (layers + metadata). Should be reproducible.
- Container: a running instance of an image (plus runtime config).
- Volume: persistent data managed by Docker (or bind-mounted).
Key commands:
# List images
docker images
# List containers (running)
docker ps
# List all containers
docker ps -a
# List volumes
docker volume ls
In local dev, you’ll often rely on:
- Bind mounts: map your working directory into the container for live reload.
- Anonymous or named volumes: keep DB data between restarts.
In production, you’ll prefer:
- Images that contain the app code (no bind mounts).
- Explicit volumes or managed storage for stateful services (or external managed DB).
3. What “good” looks like locally
3.1 Fast feedback loops
Local development success is measured in seconds:
- Start the stack quickly
- Change code and see results immediately
- Debug with shells, logs, and profilers
Common local workflow:
# Build and start
docker compose up --build
# Follow logs for one service
docker compose logs -f web
# Open a shell in a running container
docker compose exec web sh
If rebuilds are slow, developers will bypass Docker. That defeats the goal of consistency.
3.2 Bind mounts vs baked-in code
Bind mounts are the default for local iteration:
- Pros: instant code changes, no rebuild required
- Cons: file permission quirks, slower I/O on macOS/Windows, can mask missing files that would exist in a real image
Example (Compose snippet):
```yaml
services:
  web:
    volumes:
      - ./src:/app/src
```
Baked-in code (copying source into the image) is closer to production:
- Pros: realistic artifact, consistent filesystem, catches missing build steps
- Cons: every code change requires rebuild (unless you use advanced caching and incremental builds)
A common compromise:
- Use bind mounts for day-to-day coding
- Have a “prod-like” build target you run in CI and occasionally locally
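One way to wire that compromise into Compose is an optional, profile-gated service that builds the production target. This is a sketch: the service and profile names are illustrative, and it assumes a multi-stage Dockerfile with a `prod` target like the one shown later.

```yaml
# Run on demand with: docker compose --profile prod-smoke up web-prod
services:
  web-prod:
    profiles: ["prod-smoke"]
    build:
      context: .
      target: prod
    ports:
      # Different host port so it can run alongside the dev service
      - "8081:8080"
```

Day-to-day work stays on the bind-mounted dev service; the prod-like service only starts when you ask for its profile.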
3.3 Dev dependencies and debugging tools
Local containers often include:
- Hot reload tooling
- Debuggers (e.g., `delve`, `debugpy`, `node --inspect`)
- Linters and formatters
- Package managers and compilers
This is fine locally, but these tools enlarge images and increase attack surface in production.
You can explicitly install dev-only tools in a dev stage:
```dockerfile
# syntax=docker/dockerfile:1
FROM node:22-alpine AS dev
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
CMD ["npm", "run", "dev"]
```
Then have a separate production stage with fewer dependencies (shown later).
3.4 Local databases and state
Local stacks often include Postgres/Redis/etc. via Compose:
docker compose up -d db redis
You’ll typically want:
- A named volume for DB persistence
- A way to reset state quickly
Commands:
# Reset everything including volumes (DANGEROUS: deletes DB data)
docker compose down -v
# Remove dangling volumes not used by any container
docker volume prune
Local trade-off: persistence improves convenience; easy resets improve test reliability.
4. What “good” looks like in production
4.1 Immutability and repeatability
In production, you want:
- The container image built once in CI
- Deployed many times without modification
- No “SSH into the container and fix it” culture
A production container should start from docker run with only runtime configuration:
docker run -d \
--name myapp \
-p 8080:8080 \
-e NODE_ENV=production \
myorg/myapp:1.4.2
If you need to “patch” something, you rebuild and redeploy.
4.2 Security posture
Production images should:
- Run as non-root
- Contain only runtime dependencies
- Avoid shells and package managers if possible
- Use minimal base images (but not at the cost of debuggability you truly need)
- Pin dependencies and base image versions
Example hardening steps:
- Create a non-root user
- Use `COPY --chown`
- Drop Linux capabilities (runtime)
- Read-only filesystem (runtime) when feasible
Runtime example:
docker run -d \
--read-only \
--tmpfs /tmp:rw,noexec,nosuid,size=64m \
--cap-drop ALL \
--security-opt no-new-privileges \
-p 8080:8080 \
myorg/myapp:1.4.2
Not every app can run fully read-only, but it’s a useful target.
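The build-time half of this hardening can be sketched in a Dockerfile. The paths assume a Node app with prebuilt artifacts in the build context; `COPY --chown` sets ownership at copy time instead of adding a separate `chown` layer.

```dockerfile
# syntax=docker/dockerfile:1
# Hardening sketch — adapt the base image and paths to your app.
FROM node:22-alpine
WORKDIR /app
# Dedicated non-root user and group
RUN addgroup -S app && adduser -S app -G app
# Copy artifacts with the right ownership in one step
COPY --chown=app:app dist/ ./dist/
COPY --chown=app:app node_modules/ ./node_modules/
USER app
EXPOSE 8080
CMD ["node", "dist/server.js"]
```

Combined with the runtime flags above (`--cap-drop ALL`, `--read-only`), this keeps both the image contents and the running process at least privilege.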
4.3 Observability and operability
Production containers should:
- Log to stdout/stderr (not files inside the container)
- Expose health endpoints
- Support graceful shutdown (SIGTERM)
- Provide metrics endpoints if applicable
Logging rule of thumb:
If logs are written to `/var/log` inside the container, you’re probably doing it wrong.
Check logs:
docker logs -f myapp
4.4 Scaling and orchestration assumptions
Local Compose is a single machine. Production might include:
- Multiple replicas
- Rolling updates
- Service discovery
- Ingress/load balancers
- Separate networks and security groups
This affects your architecture:
- Don’t store session state in memory unless you accept sticky sessions or use a shared store.
- Don’t write durable data to the container filesystem.
- Don’t assume container hostname stability.
5. Dockerfile patterns: dev vs prod
5.1 Single Dockerfile with multi-stage targets
This is often the best balance: one Dockerfile, multiple targets.
Example for a Node.js web app:
```dockerfile
# syntax=docker/dockerfile:1
FROM node:22-alpine AS base
WORKDIR /app
COPY package*.json ./

FROM base AS deps
RUN npm ci

FROM deps AS dev
COPY . .
ENV NODE_ENV=development
CMD ["npm", "run", "dev"]

FROM deps AS build
COPY . .
RUN npm run build

# Runtime dependencies only, so devDependencies never reach production
FROM base AS prod-deps
RUN npm ci --omit=dev

FROM node:22-alpine AS prod
WORKDIR /app
ENV NODE_ENV=production
# Create non-root user
RUN addgroup -S app && adduser -S app -G app
COPY --from=build /app/dist /app/dist
COPY --from=prod-deps /app/node_modules /app/node_modules
COPY package*.json ./
USER app
EXPOSE 8080
CMD ["node", "dist/server.js"]
```
Build and run dev target:
docker build --target dev -t myapp:dev .
docker run --rm -p 8080:8080 myapp:dev
Build and run prod target:
docker build --target prod -t myapp:prod .
docker run --rm -p 8080:8080 myapp:prod
Trade-offs:
- Pros: one canonical build definition; shared caching; less drift
- Cons: Dockerfile becomes more complex; dev stage may still need bind mounts for best experience
5.2 Separate Dockerfiles
You might use:
- `Dockerfile` for production
- `Dockerfile.dev` for local development
This can be simpler for teams early on, but it risks drift:
- Different base images
- Different OS packages
- Different working directories
- Different entrypoints
If you go this route, keep them intentionally aligned and review both in PRs.
Build with a custom Dockerfile:
docker build -f Dockerfile.dev -t myapp:dev .
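If you do take this route, keeping the dev file deliberately close to production reduces drift. A hypothetical `Dockerfile.dev` for the same Node app:

```dockerfile
# Dockerfile.dev — intentionally mirrors the production base image
FROM node:22-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
# Source is usually bind-mounted over /app by Compose; this COPY
# mainly matters for builds run without a mount.
COPY . .
ENV NODE_ENV=development
CMD ["npm", "run", "dev"]
```

The key alignment points are the base image, the working directory, and the dependency install step; review both files whenever either changes.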
5.3 Caching and layer strategy
Build performance matters in both dev and CI.
Key idea: copy dependency manifests first, install deps, then copy source.
Bad (invalidates cache on every change):
COPY . .
RUN npm ci
Better:
COPY package*.json ./
RUN npm ci
COPY . .
Use BuildKit for better performance:
DOCKER_BUILDKIT=1 docker build -t myapp:prod .
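BuildKit (the default builder in current Docker releases) also supports cache mounts, which let a package manager reuse its download cache across builds even when the dependency layer itself is invalidated. A sketch for npm:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:22-alpine
WORKDIR /app
COPY package*.json ./
# Persist npm's download cache across builds; the layer still rebuilds
# when package.json changes, but packages aren't re-downloaded.
RUN --mount=type=cache,target=/root/.npm npm ci
COPY . .
```

The cache mount exists only during the `RUN` step and never ends up in the image, so it speeds up builds without bloating the artifact.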
6. Docker Compose patterns: dev vs prod
6.1 A practical Compose dev setup
A typical dev stack includes:
- A `web` service with bind mounts and hot reload
- A `db` service with a named volume
- A `redis` service
- Optional admin tools (e.g., pgAdmin) behind a profile
Example compose.yaml:
```yaml
services:
  web:
    build:
      context: .
      target: dev
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
      REDIS_URL: redis://redis:6379
    volumes:
      - ./:/app
      - /app/node_modules
    depends_on:
      - db
      - redis
    command: ["npm", "run", "dev"]

  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
    ports:
      - "5432:5432"
    volumes:
      - db_data:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  db_data:
```
Notes on the volumes:
- `./:/app` bind-mounts your repo for live editing.
- `/app/node_modules` as an anonymous volume prevents your host `node_modules` from overwriting container-installed modules (a common Node pattern).
Run it:
docker compose up --build
Recreate a single service:
docker compose up -d --no-deps --build web
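One Compose pitfall worth noting here: a plain `depends_on` list only orders container startup; it does not wait for the database to actually accept connections. If your app needs that, combine a healthcheck on the dependency with a condition (a sketch; user and database names match the example stack above):

```yaml
services:
  web:
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d app"]
      interval: 5s
      timeout: 3s
      retries: 10
```

With this, Compose holds back `web` until Postgres reports ready, which removes a whole class of "connection refused on first boot" flakiness.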
6.2 Production Compose: when it fits and when it doesn’t
Compose can be used in production for:
- Single-host deployments
- Small internal services
- Edge devices
- Simple VPS setups
But it lacks many orchestration features (native rolling updates, scheduling across nodes, etc.). If you need:
- Multi-node scaling
- Automated failover
- Advanced deployment strategies
…you’ll likely use Kubernetes/ECS/Nomad.
If you do use Compose in production, avoid dev conveniences:
- No bind mounts for app code
- Use immutable image tags
- Use restart policies
- Use healthchecks
- Use external secrets management if possible
Example production-ish Compose snippet:
```yaml
services:
  web:
    image: myorg/myapp:1.4.2
    ports:
      - "8080:8080"
    environment:
      NODE_ENV: production
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:8080/health"]
      interval: 10s
      timeout: 2s
      retries: 5
```
Deploy:
docker compose pull
docker compose up -d
6.3 Profiles and overrides
Compose profiles let you include optional services:
docker compose --profile tools up
You can also use multiple files:
docker compose -f compose.yaml -f compose.prod.yaml up -d
A common pattern:
- `compose.yaml` for dev defaults
- `compose.override.yaml` auto-applied locally (Docker Compose does this by default)
- `compose.prod.yaml` explicitly used in production
Be careful: overrides can create “it works locally” drift if production never runs the same service definitions.
7. Environment configuration: secrets, env vars, and config files
Local development often uses .env files:
# .env (local only)
DATABASE_URL=postgres://app:app@localhost:5432/app
Compose automatically loads .env in the project directory.
Production guidance:
- Prefer environment variables injected by your platform
- Don’t bake secrets into images
- Avoid committing `.env` files with real credentials
Inspect environment variables of a running container:
docker exec -it myapp env | sort
If you must pass env vars at runtime:
docker run --rm \
--env-file .env.production \
myorg/myapp:1.4.2
For secrets, consider platform-specific secret stores (Kubernetes Secrets, AWS Secrets Manager, etc.). If you’re using plain Docker, you can at least avoid putting secrets in the image by mounting files:
docker run --rm \
-v "$PWD/secrets/db_password.txt:/run/secrets/db_password:ro" \
myorg/myapp:1.4.2
Then your app reads /run/secrets/db_password.
8. Networking differences and common pitfalls
Local Compose gives you a built-in DNS: service names resolve automatically.
From web, the hostname db resolves to the Postgres container. That’s why:
postgres://app:app@db:5432/app
works in Compose.
Pitfalls:
- Using `localhost` inside containers: inside `web`, `localhost` refers to the `web` container, not your host.
- Port publishing vs internal ports:
  - `ports: "5432:5432"` publishes the port to the host for tools like `psql` on your machine.
  - Containers can reach `db:5432` over the Compose network without publishing anything.
Test connectivity from inside a container:
docker compose exec web sh -lc "apk add --no-cache curl && curl -sS http://web:8080/health"
(Installing packages inside a running container is fine for local debugging, but don’t rely on it in production.)
9. Build and release workflow
9.1 Local build commands
Build a production image locally:
docker build --target prod -t myorg/myapp:local-prod .
Run it:
docker run --rm -p 8080:8080 myorg/myapp:local-prod
Inspect layers/history:
docker history myorg/myapp:local-prod
Inspect image metadata:
docker inspect myorg/myapp:local-prod | less
9.2 CI build commands and provenance
In CI, you want:
- Clean environment
- Reproducible builds
- Pinned base images (or at least tracked updates)
- Vulnerability scanning (outside scope, but important)
Build and push:
docker build --target prod -t registry.example.com/myorg/myapp:1.4.2 .
docker push registry.example.com/myorg/myapp:1.4.2
If you build multi-arch images (amd64/arm64), use buildx:
docker buildx create --use
docker buildx build \
--platform linux/amd64,linux/arm64 \
--target prod \
-t registry.example.com/myorg/myapp:1.4.2 \
--push .
9.3 Tagging strategy
Avoid relying on `latest` in production. Prefer:
- Semantic version tags: `1.4.2`
- Git SHA tags: `git-<sha>`
- Environment promotion tags (carefully): `staging`, `prod` (mutable tags can be OK if you treat them as pointers)
Example:
# Tag with git SHA
GIT_SHA=$(git rev-parse --short HEAD)
docker tag myorg/myapp:local-prod registry.example.com/myorg/myapp:git-$GIT_SHA
docker push registry.example.com/myorg/myapp:git-$GIT_SHA
10. Data and migrations: dev convenience vs prod safety
In local dev, it’s common to run migrations automatically on startup. In production, this can be risky:
- Multiple replicas might race
- A bad migration can take down the service
- You may want controlled rollout and backups
Local approach (convenient):
- The `web` container runs `migrate && start`
Production approach (safer):
- Run migrations as a separate one-off job
- Deploy app after successful migration (or use backwards-compatible migrations)
One-off migration run with Compose:
docker compose run --rm web npm run migrate
One-off migration run with plain Docker:
docker run --rm \
-e DATABASE_URL="postgres://..." \
myorg/myapp:1.4.2 \
npm run migrate
Trade-off summary:
- Dev: optimize for “I pulled the repo and it works”
- Prod: optimize for controlled change and rollback options
11. Anti-patterns to avoid
- Using the same container image for dev and prod without stages
  - You’ll ship debuggers, compilers, and dev dependencies to production.
- Bind mounting in production
  - Makes deployments non-reproducible and couples runtime to host filesystem layout.
- Storing persistent data inside the container filesystem
  - Containers are ephemeral; you will lose data on reschedule/recreate.
- Running as root by default
  - Increases blast radius of container escape or app compromise.
- Relying on `latest`
  - You can’t reliably roll back or audit what changed.
- “Fixing” production by exec-ing into containers
  - Creates configuration drift and undocumented changes.
- Assuming `localhost` means the same thing everywhere
  - Local host vs container network vs cluster networking differ.
12. A reference architecture you can adapt
A pragmatic approach used by many teams:
Development architecture
- Compose stack with:
  - `web` built from `--target dev`
  - bind-mounted source code
  - hot reload
  - local `db` and `redis`
  - optional tools via profiles
Commands:
# Start everything
docker compose up --build
# Reset DB when needed
docker compose down -v
docker compose up -d db
Production architecture
- CI builds the `--target prod` image
- The image is pushed to a registry with an immutable tag
- Deployment system pulls image and runs it with:
- environment variables/secrets injected
- health checks enabled
- non-root user
- no bind mounts for app code
- Databases are managed services or dedicated stateful deployments with backups
A minimal “prod-like” local smoke test:
# Build prod image
docker build --target prod -t myapp:smoke .
# Run with minimal env
docker run --rm -p 8080:8080 -e NODE_ENV=production myapp:smoke
This catches issues like:
- Missing build artifacts (`dist/` not copied)
- Incorrect entrypoints
- Runtime-only dependencies missing
- Permission problems when running as non-root
Closing guidance: choose compatibility over sameness
The most resilient Docker strategy is:
- Local dev: containers as a flexible toolchain (fast, inspectable, bind-mounted, debuggable)
- Production: containers as immutable artifacts (minimal, secure, observable, repeatable)
Use multi-stage builds to keep one source of truth while producing environment-appropriate images. Use Compose to make local onboarding trivial, but don’t assume Compose semantics match your production orchestrator. Finally, build a habit of running a prod-like smoke test locally or in CI so you catch drift early—without sacrificing developer productivity.