Hardening Dockerfiles: Reduce Image Size and Security Risks Without Breaking Builds
Dockerfiles are both a build script and a supply-chain boundary. Small choices—base image, package manager flags, user permissions, file ownership, and caching strategy—directly affect:
- Image size (pull speed, storage cost, CI time)
- Attack surface (fewer packages, fewer CVEs, fewer tools for attackers)
- Reproducibility (stable builds, fewer “works on my machine” surprises)
- Operational safety (non-root containers, read-only filesystems, least privilege)
This tutorial shows practical, build-safe hardening patterns with real commands, and explains why each step matters.
Table of contents
- 1. Core principles
- 2. Choose the right base image (and pin it)
- 3. Reduce layers and keep caches under control
- 4. Safer package installation (APT/APK) without breaking builds
- 5. Multi-stage builds: the biggest win for size and security
- 6. Run as non-root (correctly)
- 7. File permissions, ownership, and immutable runtime files
- 8. Secrets: never bake them into images
- 9. Supply-chain hardening: pinning, checksums, SBOMs, provenance
- 10. Healthchecks and minimal runtime tooling
- 11. .dockerignore: stop leaking and speed up builds
- 12. Practical examples (Node, Python, Go)
- 13. Verification: measure size, scan vulnerabilities, test behavior
- 14. A hardening checklist you can apply today
1. Core principles
Minimize what you ship
Every file in the final image is something you must patch and defend. Prefer “runtime-only” images that contain:
- your compiled artifacts or installed dependencies
- minimal runtime libraries
- configuration defaults (non-secret)
- a non-root user
Make builds reproducible
Reproducibility reduces “surprise” drift and makes security fixes deliberate. Key techniques:
- pin base images by digest
- pin dependency versions (language and OS packages where feasible)
- avoid latest
- avoid downloading unverified binaries
Separate build-time and runtime concerns
Build tools (compilers, package managers, headers) are high-risk and large. Use multi-stage builds so they never end up in production.
2. Choose the right base image (and pin it)
Prefer smaller, purpose-built bases
Common options:
- Debian slim: good compatibility, moderate size
- Alpine: small, but musl libc can break some binaries; not always worth it
- Distroless: very small and locked down; no shell/package manager
- Scratch: empty; best for static binaries (Go), but requires care
A safe default for many apps is Debian slim or distroless.
Pin by digest, not tag
Tags can move. Digests are immutable.
docker pull debian:bookworm-slim
docker inspect --format='{{index .RepoDigests 0}}' debian:bookworm-slim
In your Dockerfile:
FROM debian:bookworm-slim@sha256:REPLACE_WITH_REAL_DIGEST
Why this matters: If bookworm-slim is rebuilt, the tag silently starts pointing at a new image with a different digest. Pinning the digest prevents those silent changes, which can break builds or introduce new vulnerabilities unexpectedly.
3. Reduce layers and keep caches under control
Each Dockerfile instruction generally creates a layer. Layers add overhead and can preserve temporary build files if you’re not careful.
Combine related operations into one RUN
Bad (leaves APT cache in earlier layer):
RUN apt-get update
RUN apt-get install -y curl
RUN rm -rf /var/lib/apt/lists/*
Better:
RUN apt-get update \
&& apt-get install -y --no-install-recommends curl \
&& rm -rf /var/lib/apt/lists/*
Use BuildKit cache mounts for speed (without bloating the image)
With BuildKit enabled, you can cache package downloads between builds without storing them in the final image.
Enable BuildKit:
export DOCKER_BUILDKIT=1
Example with APT cache mount:
# syntax=docker/dockerfile:1.7
RUN --mount=type=cache,target=/var/cache/apt \
--mount=type=cache,target=/var/lib/apt \
apt-get update && apt-get install -y --no-install-recommends ca-certificates
Why this matters: You get fast builds and clean final layers.
4. Safer package installation (APT/APK) without breaking builds
APT best practices (Debian/Ubuntu)
- Always run apt-get update and apt-get install in the same layer
- Use --no-install-recommends to avoid pulling large optional packages
- Clean /var/lib/apt/lists/* afterward
- Prefer ca-certificates for TLS downloads
- Avoid interactive prompts
Example:
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
ca-certificates \
curl \
&& rm -rf /var/lib/apt/lists/*
If you need noninteractive installs:
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update \
&& apt-get install -y --no-install-recommends tzdata \
&& rm -rf /var/lib/apt/lists/*
APK best practices (Alpine)
RUN apk add --no-cache ca-certificates curl
--no-cache prevents /var/cache/apk from being stored in the image.
Avoid “curl | sh” installers
This is a common supply-chain footgun. If you must download binaries, verify checksums and pin versions.
5. Multi-stage builds: the biggest win for size and security
Multi-stage builds let you compile/build in one stage and copy only the artifacts into a minimal runtime stage.
Example: building a Node app with a clean runtime
A common mistake is shipping build-essential, git, and caches in production images. Multi-stage avoids that.
# syntax=docker/dockerfile:1.7
FROM node:22-bookworm AS build
WORKDIR /app
# Install dependencies first for better caching
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm \
npm ci
# Copy source and build
COPY . .
RUN npm run build
# --- Runtime stage ---
FROM node:22-bookworm-slim AS runtime
WORKDIR /app
ENV NODE_ENV=production
# Copy only what you need
COPY --from=build /app/package.json /app/package-lock.json ./
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist
# Drop privileges (we’ll refine this later)
USER node
EXPOSE 3000
CMD ["node", "dist/server.js"]
Hardening gains:
- Build tools and caches stay in the build stage
- Runtime image is smaller
- Fewer packages → fewer CVEs
Build command:
docker build -t myapp:hardened .
6. Run as non-root (correctly)
Running as root inside the container is still dangerous:
- container escapes are rare, but misconfigurations happen
- root can write to mounted volumes, modify binaries, or read secrets
- many Kubernetes environments assume non-root
Create a dedicated user and group
On Debian-based images:
RUN groupadd -r app && useradd -r -g app -d /app -s /usr/sbin/nologin app
On Alpine:
RUN addgroup -S app && adduser -S -G app -h /app app
Then:
USER app
Ensure files are owned appropriately
If you copy files as root and then switch to a non-root user, your app may fail to write logs, caches, or temp files.
Use COPY --chown:
COPY --chown=app:app . /app
Or set ownership after copying:
RUN chown -R app:app /app
Prefer COPY --chown because it avoids extra layers and is more explicit.
7. File permissions, ownership, and immutable runtime files
Hardening is not only “non-root”; it’s also controlling what the process can modify.
Make the filesystem mostly read-only
At runtime (Docker CLI), you can enforce read-only root filesystem:
docker run --read-only --tmpfs /tmp:rw,noexec,nosuid,size=64m myapp:hardened
Your image should support this by writing only to known writable paths (/tmp, /var/run, app-specific directories).
Avoid writable application code
If attackers can write to your code directory, they can persist. Prefer:
- application code owned by root and readable by app user
- writable directories explicitly created for runtime state
Example:
# Create writable dirs
RUN mkdir -p /app/run /app/tmp \
&& chown -R app:app /app/run /app/tmp \
&& chmod 700 /app/run /app/tmp
Use umask or explicit permissions for sensitive files
If your app generates files (e.g., tokens, local DB), ensure restrictive permissions.
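As a minimal sketch, setting a restrictive umask in an entrypoint script makes newly created files private by default; the paths below are illustrative:

```shell
# With umask 077, files the process creates default to mode 600
# and directories to 700, so other users in the container cannot read them.
umask 077
workdir="$(mktemp -d)"
touch "$workdir/token"
stat -c '%a' "$workdir/token"   # prints: 600
```

Set the umask early in your entrypoint so it applies to everything the app creates, then tighten individual files further with chmod where needed.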
8. Secrets: never bake them into images
Don’t do this
ENV DATABASE_PASSWORD=supersecret
Or copying .env:
COPY .env /app/.env
Images are often pushed to registries, cached in CI, and shared across environments. Secrets in images are extremely hard to rotate safely.
Use runtime injection
- Docker: --env, --env-file
- Swarm/Kubernetes: secrets mechanisms
- BuildKit secrets for build-time needs (private registries)
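As a minimal sketch, an entrypoint helper can load secrets from files the orchestrator mounts at runtime (the *_FILE convention many official images follow); the variable names and paths here are illustrative:

```shell
# Load a secret into an environment variable from a file the orchestrator
# mounts at runtime (e.g. Docker/Swarm secrets under /run/secrets).
load_secret() {
  # $1: variable name, $2: path to the mounted secret file
  if [ -r "$2" ]; then
    export "$1=$(cat "$2")"
  else
    echo "missing secret file: $2" >&2
    return 1
  fi
}

# Typical use at the top of an entrypoint script:
# load_secret DATABASE_PASSWORD /run/secrets/db_password || exit 1
```

This keeps the secret out of image layers and out of docker inspect output, while the application still reads a plain environment variable.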
Build-time secret example (BuildKit):
# syntax=docker/dockerfile:1.7
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc \
npm ci
Build command:
docker build \
--secret id=npmrc,src=$HOME/.npmrc \
-t myapp:secure .
Why this matters: The secret is not stored in any layer.
9. Supply-chain hardening: pinning, checksums, SBOMs, provenance
Pin language dependencies
- Node: package-lock.json + npm ci
- Python: requirements.txt with pinned versions, or poetry.lock
- Go: go.sum
Example Node:
COPY package.json package-lock.json ./
RUN npm ci --ignore-scripts
--ignore-scripts can reduce risk from malicious postinstall scripts, but may break builds for packages that require them. Use it if compatible, or selectively allow scripts.
Verify downloaded artifacts with checksums
If you must download a tarball:
ARG TOOL_VERSION=1.2.3
ARG TOOL_SHA256=REPLACE_WITH_REAL_SHA256
RUN curl -fsSLo /tmp/tool.tgz "https://example.com/tool-${TOOL_VERSION}.tgz" \
&& echo "${TOOL_SHA256} /tmp/tool.tgz" | sha256sum -c - \
&& tar -C /usr/local/bin -xzf /tmp/tool.tgz \
&& rm -f /tmp/tool.tgz
Generate an SBOM (Software Bill of Materials)
SBOMs help you answer: “What’s inside this image?” and speed up incident response.
Using Syft:
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
anchore/syft:latest myapp:hardened -o spdx-json > sbom.spdx.json
Sign images (optional but recommended)
Cosign example:
cosign sign --key cosign.key myregistry.example.com/myapp:hardened
cosign verify --key cosign.pub myregistry.example.com/myapp:hardened
10. Healthchecks and minimal runtime tooling
A healthcheck can prevent broken containers from receiving traffic, but be careful: adding curl just for healthchecks increases size and CVEs.
Prefer app-native health endpoints
If your app exposes /health, you can use a tiny tool if available, or rely on orchestrator checks (Kubernetes probes) instead of baking tools into the image.
Docker healthcheck example (if wget exists):
HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
CMD wget -qO- http://127.0.0.1:3000/health || exit 1
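If the runtime is Node 18+ (which ships a global fetch), you can avoid adding wget or curl entirely by using the runtime itself as the probe. This is a sketch; the port and /health path are assumptions about your app:

```dockerfile
# Healthcheck via the Node runtime itself; no extra HTTP client in the image.
HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
  CMD ["node", "-e", "fetch('http://127.0.0.1:3000/health').then(r => process.exit(r.ok ? 0 : 1)).catch(() => process.exit(1))"]
```

The same idea works for other runtimes: probe with the interpreter you already ship rather than installing a new tool.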
If you’re using distroless (no shell/wget), do healthchecks at the platform level (Kubernetes liveness/readiness probes) rather than inside the image.
11. .dockerignore: stop leaking and speed up builds
The Docker build context is everything sent to the daemon. If you accidentally include:
- .git
- node_modules/
- .env
- build outputs
- SSH keys
…you risk leaks and slow builds.
Example .dockerignore:
.git
.gitignore
Dockerfile
docker-compose.yml
node_modules
npm-debug.log
dist
build
.env
*.pem
*.key
coverage
*.swp
.DS_Store
Why this matters: Even if you never COPY a file, it can still be present in the build context and accidentally included later or exposed via missteps.
12. Practical examples (Node, Python, Go)
Below are hardened templates you can adapt.
12.1 Node.js (multi-stage, non-root, slim runtime)
# syntax=docker/dockerfile:1.7
FROM node:22-bookworm AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm \
npm ci
COPY . .
RUN npm run build
FROM node:22-bookworm-slim AS runtime
WORKDIR /app
ENV NODE_ENV=production
# Install only CA certs if you do outbound TLS
RUN apt-get update \
&& apt-get install -y --no-install-recommends ca-certificates \
&& rm -rf /var/lib/apt/lists/*
# Create non-root user
RUN groupadd -r app && useradd -r -g app -d /app -s /usr/sbin/nologin app
# Copy runtime artifacts with correct ownership
COPY --from=build --chown=app:app /app/package.json /app/package-lock.json ./
COPY --from=build --chown=app:app /app/node_modules ./node_modules
COPY --from=build --chown=app:app /app/dist ./dist
# Optional: create writable dirs for tmp/runtime state
RUN mkdir -p /app/tmp \
&& chown -R app:app /app/tmp \
&& chmod 700 /app/tmp
USER app
EXPOSE 3000
CMD ["node", "dist/server.js"]
Build and run:
docker build -t nodeapp:secure .
docker run --rm -p 3000:3000 --read-only --tmpfs /app/tmp:rw,size=64m nodeapp:secure
12.2 Python (wheels in builder, minimal runtime)
Key idea: build wheels in a builder stage (with compilers) and install them into a clean runtime stage.
# syntax=docker/dockerfile:1.7
FROM python:3.13-slim AS build
WORKDIR /w
RUN apt-get update \
&& apt-get install -y --no-install-recommends build-essential \
&& rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN --mount=type=cache,target=/root/.cache/pip \
pip wheel --no-deps -r requirements.txt -w /wheels
FROM python:3.13-slim AS runtime
WORKDIR /app
RUN apt-get update \
&& apt-get install -y --no-install-recommends ca-certificates \
&& rm -rf /var/lib/apt/lists/*
RUN groupadd -r app && useradd -r -g app -d /app -s /usr/sbin/nologin app
COPY --from=build /wheels /wheels
COPY requirements.txt .
RUN pip install --no-cache-dir --no-deps /wheels/* \
&& rm -rf /wheels
COPY --chown=app:app . /app
USER app
CMD ["python", "-m", "your_module"]
Build:
docker build -t pyapp:secure .
Notes:
- pip install --no-cache-dir avoids keeping pip caches in the image.
- If you need system libraries (e.g., libpq), install only runtime libs in the runtime stage, not -dev packages.
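For example, a PostgreSQL client library needs headers and a compiler only while building wheels; at runtime the shared library alone is enough. A sketch with Debian package names (adjust for your distribution and libraries):

```dockerfile
# Build stage: headers and compiler needed to compile wheels
FROM python:3.13-slim AS build
RUN apt-get update \
    && apt-get install -y --no-install-recommends build-essential libpq-dev \
    && rm -rf /var/lib/apt/lists/*

# Runtime stage: only the shared library, no headers or compiler
FROM python:3.13-slim AS runtime
RUN apt-get update \
    && apt-get install -y --no-install-recommends libpq5 \
    && rm -rf /var/lib/apt/lists/*
```

The split keeps the compiler toolchain and development headers out of the image you ship.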
12.3 Go (static binary + distroless)
Go is ideal for minimal images because you can compile a static binary and run it in distroless.
# syntax=docker/dockerfile:1.7
FROM golang:1.24-bookworm AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN --mount=type=cache,target=/go/pkg/mod \
go mod download
COPY . .
RUN --mount=type=cache,target=/root/.cache/go-build \
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
go build -trimpath -ldflags="-s -w" -o /out/app ./cmd/app
FROM gcr.io/distroless/static-debian12:nonroot AS runtime
COPY --from=build /out/app /app
EXPOSE 8080
USER nonroot:nonroot
ENTRYPOINT ["/app"]
Build and run:
docker build -t goapp:secure .
docker run --rm -p 8080:8080 --read-only goapp:secure
Why this is hardened:
- distroless has no shell and minimal packages
- nonroot user by default
- static binary reduces dependency complexity
- -trimpath and -ldflags="-s -w" reduce size and remove debug info
13. Verification: measure size, scan vulnerabilities, test behavior
Hardening isn’t complete until you verify outcomes.
Measure image size and layers
docker images myapp:hardened
docker history --no-trunc myapp:hardened
Scan for vulnerabilities
Using Trivy:
trivy image myapp:hardened
If you want to fail CI on high/critical:
trivy image --exit-code 1 --severity HIGH,CRITICAL myapp:hardened
Confirm you’re not running as root
docker run --rm myapp:hardened id
You want a non-zero UID (or a known non-root user).
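A small CI guard can enforce this automatically; the image name is a placeholder and the helper is hypothetical:

```shell
# Fail the pipeline if a container's default user resolves to UID 0 (root).
check_nonroot() {
  [ "$1" -ne 0 ]
}

# In CI, feed it the UID the image actually runs as:
#   check_nonroot "$(docker run --rm myapp:hardened id -u)" || exit 1
```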
Test read-only filesystem compatibility
docker run --rm --read-only --tmpfs /tmp:rw,size=64m myapp:hardened
If it crashes, your app is writing somewhere unexpected. Fix by:
- writing to /tmp
- creating a dedicated writable directory and mounting it
- adjusting permissions
14. A hardening checklist you can apply today
Use this as a practical “diff guide” for improving existing Dockerfiles.
Base image
- Use a minimal base that still supports your app (slim/distroless where possible)
- Pin FROM by digest for reproducibility
- Avoid latest
Packages
- apt-get update && apt-get install in one layer
- Use --no-install-recommends
- Remove APT lists: rm -rf /var/lib/apt/lists/*
- Avoid installing tools you don’t need at runtime
Multi-stage
- Build tools only in builder stage
- Copy only artifacts into runtime stage
Users and permissions
- Create a dedicated user/group
- Use COPY --chown to avoid permission issues
- Ensure runtime writable dirs are explicit and minimal
- Run as non-root
Secrets
- No secrets in ENV, ARG, or files copied into the image
- Use BuildKit secrets for build-time auth
- Inject secrets at runtime via orchestrator
Supply chain
- Lock dependencies (npm ci, lockfiles, pinned versions)
- Verify checksums for downloaded binaries
- Generate SBOMs and consider signing images
Runtime hardening (outside the Dockerfile, but essential)
- Run with --read-only where possible
- Drop Linux capabilities (example below)
- Set memory/CPU limits
- Use seccomp/AppArmor profiles (defaults help)
Capability dropping example:
docker run --rm \
--cap-drop=ALL \
--security-opt no-new-privileges \
-p 3000:3000 \
myapp:hardened
(Some apps need specific caps; add back only what’s required.)
Closing: hardening without breaking builds
The safest hardening changes are the ones that preserve developer velocity:
- Start with multi-stage builds (biggest size/security win).
- Switch to non-root and fix permissions using COPY --chown.
- Clean package manager caches and avoid recommended packages.
- Add verification steps: scan, id, read-only test.
- Gradually adopt stronger supply-chain controls: digests, checksums, SBOM, signing.
If you share your current Dockerfile and target runtime (Docker Compose, Kubernetes, ECS, etc.), you can apply these patterns with minimal disruption and get a concrete, hardened rewrite.