From Docker to Kubernetes: A Practical Migration Path for Existing Compose Projects
Migrating from Docker Compose to Kubernetes is less about “rewriting everything” and more about translating the operational intent of your Compose file into Kubernetes primitives: deployments, services, config, secrets, storage, and ingress. This tutorial walks a pragmatic path you can follow for real-world Compose projects—starting with what you already have, getting something running quickly, then hardening it into a maintainable Kubernetes setup.
You’ll see real commands, concrete translation patterns, and the “why” behind each Kubernetes concept so you can make good decisions rather than copy/paste manifests blindly.
Table of contents
- 1. Prerequisites and mental model
- 2. Understand what Compose is doing today
- 3. Choose a Kubernetes environment
- 4. Quick win: generate Kubernetes manifests from Compose
- 5. The “manual” translation: core building blocks
- 6. A realistic example: migrating a Compose app
- 7. Production hardening checklist
- 8. Migration strategy: incremental steps that reduce risk
- 9. Common pitfalls when moving from Compose
- 10. Where to go next
1. Prerequisites and mental model
Docker Compose is primarily a single-host orchestrator. Even when you run it on a VM in the cloud, the underlying assumption is: one machine, one Docker daemon, one network namespace, and local disks.
Kubernetes is a cluster orchestrator. It assumes:
- workloads can move between nodes
- networking is virtualized and routable across nodes
- storage may be remote/dynamic
- desired state is continuously reconciled
The key mental shift:
In Compose you “start containers”. In Kubernetes you declare the desired state, and controllers keep it true.
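The difference is visible in the smallest possible manifest. Instead of `docker compose up` starting containers, you declare how many replicas should exist and the Deployment controller converges toward that — a minimal sketch with a placeholder image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3          # desired state; the controller recreates pods to keep 3 running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.27   # placeholder image for illustration
```

If a pod dies or a node disappears, the controller notices the drift from desired state and schedules a replacement — no one "restarts" anything manually.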
Tools you’ll use
Install these locally:
kubectl version --client
docker version
Optional but highly recommended:
- helm (package manager for Kubernetes)
- k9s (terminal UI)
- kind or minikube (local cluster)
- kompose (Compose-to-Kubernetes converter)
2. Understand what Compose is doing today
Before migrating, inventory what your Compose project relies on. Run:
docker compose config
This renders the fully-resolved configuration (after env substitution and merges). Save it:
docker compose config > compose.rendered.yml
Now identify:
- Services: containers you run
- Ports: what is exposed to the host
- Dependencies: depends_on (note: Kubernetes doesn't have a direct equivalent)
- Environment: config values and secrets
- Volumes: persistent data and shared files
- Networks: usually a single bridge network in Compose
- Build: images built locally vs pulled from a registry
- Healthchecks: container health endpoints/commands
This inventory drives your Kubernetes design.
3. Choose a Kubernetes environment
You need a cluster to migrate into. Common choices:
Local development clusters
kind (Kubernetes in Docker) is excellent for fast local iteration:
kind create cluster --name compose-migration
kubectl cluster-info
minikube is also popular and has add-ons:
minikube start
kubectl get nodes
Managed clusters (production-like)
- GKE (Google Kubernetes Engine)
- EKS (AWS)
- AKS (Azure)
- DigitalOcean Kubernetes, etc.
For a first migration, it’s often best to:
- Validate manifests on kind
- Deploy to a staging cluster that matches production
- Promote to production
4. Quick win: generate Kubernetes manifests from Compose
If your goal is “get it running” quickly, use Kompose to generate a baseline. It won’t produce perfect production manifests, but it’s a useful accelerator.
Install Kompose (example for Linux):
curl -L https://github.com/kubernetes/kompose/releases/download/v1.35.0/kompose-linux-amd64 -o kompose
chmod +x kompose
sudo mv kompose /usr/local/bin/kompose
kompose version
Convert:
kompose convert -f docker-compose.yml
ls -1
You’ll typically get files like:
- api-deployment.yaml
- api-service.yaml
- db-deployment.yaml (often should become a StatefulSet)
- redis-deployment.yaml
- plus PVCs if volumes exist
Apply to a cluster:
kubectl apply -f .
kubectl get pods
Why you shouldn’t stop here
Kompose can’t infer:
- correct storage class and persistence needs
- readiness vs liveness probes
- security contexts, RBAC, network policies
- proper separation of config and secrets
- ingress and TLS strategy
- production-grade rolling updates
Treat generated manifests as a draft.
5. The “manual” translation: core building blocks
This section explains how Compose concepts map to Kubernetes, and why.
5.1 Deployments vs StatefulSets
A Compose service usually becomes a Deployment:
- stateless app containers
- can scale horizontally
- pods can be replaced at any time
Use a StatefulSet when:
- stable network identity matters (e.g., postgres-0, postgres-1)
- stable persistent volumes per replica are required
- ordered startup/shutdown matters
Rule of thumb:
- web/API workers → Deployment
- databases/queues that store data → StatefulSet (or use a managed service)
5.2 Services: stable networking
In Compose, services discover each other by service name on a shared network.
In Kubernetes:
- Pods have ephemeral IPs
- A Service provides a stable DNS name and virtual IP
- The DNS name is typically service-name.namespace.svc.cluster.local
Example:
- Compose: DB_HOST=db
- Kubernetes: DB_HOST=postgres (if the Service is named postgres)
Service types:
- ClusterIP (default): internal only
- NodePort: exposes on node ports (often for dev)
- LoadBalancer: cloud load balancer (managed clusters)
- Ingress: HTTP routing layer (recommended for web apps)
5.3 ConfigMaps and Secrets
Compose commonly uses:
- environment: entries
- .env files
- bind-mounted config files
In Kubernetes, best practice is:
- non-sensitive configuration → ConfigMap
- sensitive values (passwords, tokens) → Secret
- mount them as env vars or files
Create a Secret from literal values:
kubectl create secret generic app-secrets \
--from-literal=DATABASE_URL='postgres://app:changeme@postgres:5432/appdb' \
--from-literal=JWT_SECRET='replace-me' \
-n demo
Create a ConfigMap from a file:
kubectl create configmap app-config \
--from-file=./config/app.properties \
-n demo
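If you prefer declarative manifests over imperative `kubectl create` commands (useful for GitOps later), the same objects can be written as YAML — a sketch with example values:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: demo
data:
  app.properties: |
    log.level=info
    feature.flags=off
```

Tip: appending `--dry-run=client -o yaml` to the imperative commands above generates these manifests for you.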
Important nuance: Kubernetes Secrets are base64-encoded, not encrypted by default. For production, consider:
- encryption at rest (KMS integration)
- external secret managers (AWS Secrets Manager, GCP Secret Manager, Vault)
- sealed-secrets or external-secrets operator
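To see why base64 is not encryption, a minimal sketch using only standard coreutils (on macOS, `base64 -d` may be `base64 -D` or `--decode`):

```shell
# Secret values are base64-encoded, not encrypted.
encoded=$(printf 'changeme' | base64)
echo "$encoded"                        # Y2hhbmdlbWU=

# Anyone who can read the Secret can decode it just as easily:
printf '%s' "$encoded" | base64 -d     # changeme
```

This is exactly what a user with `kubectl get secret ... -o yaml` access can do, which is why RBAC on Secrets and encryption at rest matter.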
5.4 Storage: volumes, PVCs, and StorageClasses
Compose volumes are often local directories or named volumes. In Kubernetes, storage is abstracted:
- PersistentVolumeClaim (PVC): “I need X GiB with these characteristics”
- PersistentVolume (PV): actual backing storage (often dynamically provisioned)
- StorageClass: defines how dynamic provisioning happens (SSD, HDD, etc.)
In local clusters, default storage is often hostPath-like and not production-grade. In managed clusters, a PVC typically provisions a cloud disk automatically.
Key decision:
- If you can, move stateful components to managed services (managed Postgres, managed Redis).
- If you must run them in-cluster, use StatefulSets + PVCs.
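A standalone PVC looks like this — a sketch; the storageClassName is cluster-specific and shown here as an assumption (omit it to use the cluster default):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data
spec:
  accessModes: ["ReadWriteOnce"]   # mountable read-write by a single node
  storageClassName: standard       # assumption: adjust to your cluster's classes
  resources:
    requests:
      storage: 5Gi
```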
5.5 Health checks: readiness vs liveness
Compose's healthcheck is a single concept. Kubernetes splits it into three probes:
- readinessProbe: “Should this pod receive traffic?”
- livenessProbe: “Is this pod stuck and should be restarted?”
- startupProbe: “Give it extra time to boot before liveness kicks in”
This split matters a lot. A pod can be alive but not ready (e.g., waiting for migrations).
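All three probes on one container might look like this sketch (the /health/* paths are assumptions about your app's endpoints):

```yaml
startupProbe:
  httpGet:
    path: /health/live
    port: 8080
  failureThreshold: 30   # up to 30 * 2s = 60s to boot before liveness applies
  periodSeconds: 2
readinessProbe:
  httpGet:
    path: /health/ready
    port: 8080
  periodSeconds: 5
livenessProbe:
  httpGet:
    path: /health/live
    port: 8080
  periodSeconds: 10
```

Note the asymmetry: a failing readiness probe only removes the pod from Service endpoints; a failing liveness probe restarts the container.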
5.6 Resource requests/limits
Compose rarely enforces resource usage. Kubernetes scheduling depends on it:
- requests: what the scheduler reserves
- limits: hard cap (CPU throttling / OOM kill)
Example:
resources:
  requests:
    cpu: "250m"
    memory: "256Mi"
  limits:
    cpu: "1"
    memory: "512Mi"
Without requests, the scheduler can overpack nodes and you’ll see noisy-neighbor issues.
5.7 Ingress: HTTP routing into the cluster
Compose typically uses ports: "8080:8080" to expose services on the host.
In Kubernetes you usually:
- keep app Services internal (ClusterIP)
- use an Ingress Controller (NGINX, Traefik, HAProxy, cloud-native) to route external HTTP(S)
On kind, you can install NGINX ingress (one common approach):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
kubectl -n ingress-nginx get pods
6. A realistic example: migrating a Compose app
We’ll migrate a small but realistic stack:
- api: a web API container
- postgres: database with persistent storage
- redis: cache (optional persistence)
- worker: background job processor using the same image as the API but a different command
6.1 Example Compose file
Here’s a representative docker-compose.yml:
services:
  api:
    build: ./api
    ports:
      - "8080:8080"
    environment:
      DATABASE_HOST: postgres
      DATABASE_NAME: appdb
      DATABASE_USER: app
      DATABASE_PASSWORD: changeme
      REDIS_HOST: redis
      LOG_LEVEL: info
    depends_on:
      - postgres
      - redis
  worker:
    build: ./api
    command: ["./worker"]
    environment:
      DATABASE_HOST: postgres
      DATABASE_NAME: appdb
      DATABASE_USER: app
      DATABASE_PASSWORD: changeme
      REDIS_HOST: redis
    depends_on:
      - postgres
      - redis
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: appdb
      POSTGRES_USER: app
      POSTGRES_PASSWORD: changeme
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7
    ports:
      - "6379:6379"
volumes:
  pgdata:
What needs translating?
- api and worker: Deployments
- postgres: StatefulSet + PVC + Service
- redis: Deployment + Service (or use managed Redis)
- env vars: ConfigMap + Secret
- ports: Service + Ingress (for the API), internal Services for Redis/Postgres
- depends_on: replaced by readiness probes + retry logic
6.2 Build and publish images
Kubernetes pulls images from a registry. If Compose builds locally, you must either:
- push to a registry (recommended), or
- load images into your local cluster (kind supports this)
Option A: push to a registry
Build and tag:
docker build -t ghcr.io/YOUR_ORG/demo-api:1.0.0 ./api
docker push ghcr.io/YOUR_ORG/demo-api:1.0.0
Option B: kind load (local only)
docker build -t demo-api:dev ./api
kind load docker-image demo-api:dev --name compose-migration
In this tutorial, we’ll assume demo-api:dev for local kind usage.
6.3 Create a namespace
Namespaces isolate resources and make cleanup easy.
kubectl create namespace demo
kubectl config set-context --current --namespace=demo
Verify:
kubectl get ns
kubectl get all
6.4 Create Secrets and ConfigMaps
Split config into:
- non-secret: DB host, DB name, redis host, log level
- secret: DB password (and any tokens)
Create a ConfigMap:
kubectl create configmap app-env \
--from-literal=DATABASE_HOST='postgres' \
--from-literal=DATABASE_NAME='appdb' \
--from-literal=DATABASE_USER='app' \
--from-literal=REDIS_HOST='redis' \
--from-literal=LOG_LEVEL='info'
Create a Secret:
kubectl create secret generic app-secrets \
--from-literal=DATABASE_PASSWORD='changeme'
Inspect:
kubectl get configmap app-env -o yaml
kubectl get secret app-secrets -o yaml
6.5 Deploy Postgres (StatefulSet)
In Kubernetes, Postgres needs:
- a Service for stable DNS (postgres)
- a StatefulSet with volumeClaimTemplates for persistent storage
Create postgres.yaml:
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  ports:
  - name: postgres
    port: 5432
    targetPort: 5432
  selector:
    app: postgres
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:16
        ports:
        - containerPort: 5432
          name: postgres
        env:
        - name: POSTGRES_DB
          valueFrom:
            configMapKeyRef:
              name: app-env
              key: DATABASE_NAME
        - name: POSTGRES_USER
          valueFrom:
            configMapKeyRef:
              name: app-env
              key: DATABASE_USER
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: DATABASE_PASSWORD
        volumeMounts:
        - name: pgdata
          mountPath: /var/lib/postgresql/data
        readinessProbe:
          exec:
            command: ["sh", "-c", "pg_isready -U $POSTGRES_USER -d $POSTGRES_DB"]
          initialDelaySeconds: 5
          periodSeconds: 5
        livenessProbe:
          exec:
            command: ["sh", "-c", "pg_isready -U $POSTGRES_USER -d $POSTGRES_DB"]
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: "100m"
            memory: "256Mi"
          limits:
            cpu: "1"
            memory: "1Gi"
  volumeClaimTemplates:
  - metadata:
      name: pgdata
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
Apply:
kubectl apply -f postgres.yaml
kubectl get pods -w
Why this design:
- The Service gives a stable DNS name, postgres
- The StatefulSet ensures stable identity and binds the PVC to the pod
- The probes let other services wait until the database is actually accepting connections
6.6 Deploy the API (Deployment)
We’ll create:
- a Service for internal routing and for Ingress to target
- a Deployment with readiness/liveness probes
Create api.yaml:
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
  - name: http
    port: 8080
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: demo-api:dev
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          name: http
        envFrom:
        - configMapRef:
            name: app-env
        - secretRef:
            name: app-secrets
        readinessProbe:
          httpGet:
            path: /health/ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
          timeoutSeconds: 2
          failureThreshold: 6
        livenessProbe:
          httpGet:
            path: /health/live
            port: 8080
          initialDelaySeconds: 20
          periodSeconds: 10
          timeoutSeconds: 2
        resources:
          requests:
            cpu: "200m"
            memory: "256Mi"
          limits:
            cpu: "1"
            memory: "512Mi"
Apply:
kubectl apply -f api.yaml
kubectl get deploy,po,svc
What about depends_on?
Kubernetes won’t start api “after postgres” in a strict sense. Instead:
- Your app should retry DB connections on startup (best practice anyway)
- Your readiness probe should fail until dependencies are reachable
- That prevents traffic from reaching the pod until it’s truly ready
If your API must run DB migrations, consider:
- a separate Job for migrations
- or an initContainer that waits for DB and runs migrations carefully
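A migration Job might look like this sketch (`./migrate` is a hypothetical entrypoint assumed to exist in the API image):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  backoffLimit: 3              # retry failed migration runs a few times
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: demo-api:dev
        command: ["./migrate"]   # hypothetical migration command
        envFrom:
        - configMapRef:
            name: app-env
        - secretRef:
            name: app-secrets
```

In CI you can gate the rollout on it with `kubectl wait --for=condition=complete job/db-migrate`.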
6.7 Expose traffic (Service + Ingress)
For local kind, you can test quickly using port-forward:
kubectl port-forward svc/api 8080:8080
curl -i http://localhost:8080/health/live
For a more Kubernetes-like approach, use an Ingress. First ensure you have an ingress controller installed (see earlier NGINX install for kind).
Create ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
spec:
  ingressClassName: nginx   # matches the NGINX controller installed earlier
  rules:
  - host: api.localtest.me
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api
            port:
              number: 8080
Apply:
kubectl apply -f ingress.yaml
kubectl get ingress
Then test (for many local setups, localtest.me resolves to 127.0.0.1):
curl -i http://api.localtest.me/health/live
In a managed cluster, you’d typically add TLS and maybe use cert-manager. The important concept is: Ingress routes HTTP(S) to Services, not directly to pods.
6.8 Validate and debug
Kubernetes debugging is a skill. These commands cover most early migration issues.
Watch pods and events
kubectl get pods -w
kubectl get events --sort-by=.metadata.creationTimestamp
Inspect logs
kubectl logs deploy/api
kubectl logs -f deploy/api
kubectl logs statefulset/postgres
If multiple replicas:
kubectl logs -l app=api --tail=100
Describe resources (often reveals the real problem)
kubectl describe pod <pod-name>
kubectl describe deploy api
kubectl describe statefulset postgres
Exec into a container
kubectl exec -it deploy/api -- sh
Test DNS and connectivity from inside the cluster:
# inside the api pod
nslookup postgres
nc -vz postgres 5432
Common early failures
- ImagePullBackOff: image not found / registry auth missing
- CrashLoopBackOff: app exits; check logs
- Pending: PVC not bound (no storage class), or insufficient resources
- readiness probe failing: endpoint path wrong or app not ready
7. Production hardening checklist
Once it runs, make it reliable and secure. Here’s what typically distinguishes “it works” from “it’s production-ready”.
Reliability and rollout safety
- Add PodDisruptionBudgets for critical services
- Use RollingUpdate strategy (default for Deployments) and verify maxUnavailable/maxSurge
- Add startupProbe for slow-starting apps
- Ensure readiness probes reflect real readiness (DB connected, migrations done, caches warmed if required)
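For example, a PodDisruptionBudget that keeps at least one api pod running through voluntary disruptions (node drains, cluster upgrades) — a minimal sketch:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: api
spec:
  minAvailable: 1          # never voluntarily evict below 1 ready pod
  selector:
    matchLabels:
      app: api
```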
Security
- Run as non-root (securityContext)
- Drop Linux capabilities you don’t need
- Use read-only root filesystem when possible
- Use NetworkPolicies (if your cluster supports them)
- Avoid putting secrets in ConfigMaps or images
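A container-level securityContext covering the first three points might look like this sketch (the UID is an arbitrary example and must match a non-root user in your image):

```yaml
securityContext:
  runAsNonRoot: true
  runAsUser: 10001             # example non-root UID baked into the image
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true # mount an emptyDir for any paths the app writes
  capabilities:
    drop: ["ALL"]
```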
Observability
- Structured logs to stdout/stderr
- Metrics endpoint (Prometheus format if possible)
- Tracing (OpenTelemetry)
- Central log aggregation (Loki/ELK/cloud logging)
Scaling
- Add HorizontalPodAutoscaler based on CPU or custom metrics
- Ensure requests/limits are set so autoscaling works meaningfully
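A CPU-based HorizontalPodAutoscaler for the api Deployment — a sketch with example thresholds (requires metrics-server in the cluster):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale when average CPU exceeds 70% of requests
```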
Data
- Prefer managed databases for production
- If self-hosting Postgres:
- backups (CronJob + off-cluster storage)
- upgrades strategy
- anti-affinity / node selectors
- persistent volume performance and IOPS sizing
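A nightly pg_dump CronJob, sketched under the assumption that a pre-created backup PVC named pg-backups exists; dumps should then be shipped off-cluster, since a PVC alone is not a real backup:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: pg-backup
spec:
  schedule: "0 3 * * *"          # every night at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: postgres:16
            command:
            - sh
            - -c
            - pg_dump -h postgres -U app appdb | gzip > /backup/appdb-$(date +%F).sql.gz
            env:
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: DATABASE_PASSWORD
            volumeMounts:
            - name: backup
              mountPath: /backup
          volumes:
          - name: backup
            persistentVolumeClaim:
              claimName: pg-backups   # assumption: pre-created PVC
```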
8. Migration strategy: incremental steps that reduce risk
A practical migration is usually staged:
- Containerize cleanly
- Ensure each service has a reliable Dockerfile
- Ensure startup is deterministic and supports retries
- Run on Kubernetes locally
- Use kind/minikube
- Fix image, config, and networking assumptions
- Externalize state
- Move DB/redis to managed services if possible
- Or at least validate PVC behavior and backup approach
- Introduce Ingress and TLS
- Route traffic through Ingress controller
- Add TLS termination and certificates
- Add CI/CD
- Build and push images
- Apply manifests (or Helm/Kustomize)
- Add autoscaling and policies
- HPA, PDB, NetworkPolicies, resource quotas
- Cut over gradually
- Blue/green or canary
- DNS switch or load balancer weighting
- Rollback plan tested
The goal is to avoid a “big bang” rewrite.
9. Common pitfalls when moving from Compose
Pitfall: assuming container start order is guaranteed
Compose’s depends_on often creates a false sense of safety. Even in Compose, it doesn’t guarantee the dependency is ready, only started. In Kubernetes, you must:
- implement retries
- use readiness probes
- optionally use init containers for “wait-for” behavior (sparingly)
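A wait-for init container is a short sketch like this (it delays pod startup, not traffic, so app-level retries are still needed):

```yaml
initContainers:
- name: wait-for-postgres
  image: busybox:1.36
  command:
  - sh
  - -c
  - until nc -z postgres 5432; do echo waiting for postgres; sleep 2; done
```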
Pitfall: using local bind mounts for everything
Bind mounts are convenient locally, but in Kubernetes:
- pods can move nodes
- local paths may not exist
- permissions differ
Use:
- ConfigMaps for config files
- PVCs for data
- object storage for shared artifacts
Pitfall: exposing every service with NodePort/LoadBalancer
In Compose, it’s common to map many ports to localhost. In Kubernetes:
- keep internal services internal (ClusterIP)
- expose only what must be public via Ingress (HTTP) or a controlled LoadBalancer (TCP)
Pitfall: skipping resource requests
Without requests, scheduling and autoscaling become unpredictable. Set them early, even if rough.
Pitfall: putting secrets in env files committed to git
Kubernetes makes it easy to separate secrets, but you still need a secure workflow. Don’t commit real secrets; use secret managers and CI injection.
10. Where to go next
Once you have a basic migration working, consider adopting one of these packaging approaches:
- Kustomize (built into kubectl) for overlays (dev/staging/prod). Example:
kubectl apply -k ./k8s/overlays/dev
- Helm for templating and versioned releases. Example:
helm install demo ./chart -n demo
- GitOps (Argo CD / Flux) for continuous reconciliation from git
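A minimal Kustomize overlay, sketched with a hypothetical directory layout where k8s/base holds the manifests from this tutorial:

```yaml
# k8s/overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: demo
resources:
- ../../base
patches:
- path: replica-count.yaml   # hypothetical patch lowering replicas for dev
```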
Also consider replacing in-cluster stateful services with managed equivalents where possible. Kubernetes excels at stateless workloads; it can run databases, but you take on operational responsibility.
Appendix: Minimal command recap
Create namespace and config:
kubectl create namespace demo
kubectl config set-context --current --namespace=demo
kubectl create configmap app-env \
--from-literal=DATABASE_HOST='postgres' \
--from-literal=DATABASE_NAME='appdb' \
--from-literal=DATABASE_USER='app' \
--from-literal=REDIS_HOST='redis' \
--from-literal=LOG_LEVEL='info'
kubectl create secret generic app-secrets \
--from-literal=DATABASE_PASSWORD='changeme'
Apply manifests:
kubectl apply -f postgres.yaml
kubectl apply -f api.yaml
kubectl apply -f ingress.yaml
Observe and debug:
kubectl get pods -w
kubectl logs -f deploy/api
kubectl describe pod <pod>
kubectl port-forward svc/api 8080:8080