
From Docker to Kubernetes: A Practical Migration Path for Existing Compose Projects

Tags: docker compose, kubernetes, devops, migration, kompose, helm, kustomize, ci/cd, ingress, secrets


Migrating from Docker Compose to Kubernetes is less about “rewriting everything” and more about translating the operational intent of your Compose file into Kubernetes primitives: deployments, services, config, secrets, storage, and ingress. This tutorial walks a pragmatic path you can follow for real-world Compose projects—starting with what you already have, getting something running quickly, then hardening it into a maintainable Kubernetes setup.

You’ll see real commands, concrete translation patterns, and the “why” behind each Kubernetes concept so you can make good decisions rather than copy/paste manifests blindly.


Table of contents

  1. Prerequisites and mental model
  2. Understand what Compose is doing today
  3. Choose a Kubernetes environment
  4. Quick win: generate Kubernetes manifests from Compose
  5. The “manual” translation: core building blocks
  6. A realistic example: migrating a Compose app
  7. Production hardening checklist
  8. Migration strategy: incremental steps that reduce risk
  9. Common pitfalls when moving from Compose
  10. Where to go next


1. Prerequisites and mental model

Docker Compose is primarily a single-host orchestrator. Even when you run it on a VM in the cloud, the underlying assumption is: one machine, one Docker daemon, one network namespace, and local disks.

Kubernetes is a cluster orchestrator. It assumes:

  • Multiple nodes, any of which may run your workloads
  • Pods that are ephemeral and can be rescheduled at any time
  • Cluster-wide networking, with stable DNS names provided by Services
  • Storage requested through an abstraction layer rather than tied to local disks

The key mental shift:

In Compose you “start containers”. In Kubernetes you declare the desired state, and controllers keep it true.

Tools you’ll use

Install these locally:

kubectl version --client
docker version

Optional but highly recommended:

  • kind or minikube for a local cluster
  • kompose for generating draft manifests (section 4)
  • helm and/or kustomize for packaging and per-environment config (section 10)


2. Understand what Compose is doing today

Before migrating, inventory what your Compose project relies on. Run:

docker compose config

This renders the fully-resolved configuration (after env substitution and merges). Save it:

docker compose config > compose.rendered.yml

Now identify:

  • Which services are stateless (APIs, workers) and which are stateful (databases, caches)
  • Ports published to the host vs ports used only between services
  • Volumes: bind mounts, named volumes, and which data must survive restarts
  • Environment variables, env_file entries, and which values are actually secrets
  • healthcheck definitions and depends_on ordering assumptions
  • Anything that relies on the local Docker daemon (docker.sock mounts, local build contexts)

This inventory drives your Kubernetes design.


3. Choose a Kubernetes environment

You need a cluster to migrate into. Common choices:

Local development clusters

kind (Kubernetes in Docker) is excellent for fast local iteration:

kind create cluster --name compose-migration
kubectl cluster-info

minikube is also popular and has add-ons:

minikube start
kubectl get nodes

Managed clusters (production-like)

Managed offerings such as EKS (AWS), GKE (Google Cloud), and AKS (Azure) run the control plane for you and integrate with cloud disks and load balancers.

For a first migration, it’s often best to:

  1. Validate manifests on kind
  2. Deploy to a staging cluster that matches production
  3. Promote to production

4. Quick win: generate Kubernetes manifests from Compose

If your goal is “get it running” quickly, use Kompose to generate a baseline. It won’t produce perfect production manifests, but it’s a useful accelerator.

Install Kompose (example for Linux):

curl -L https://github.com/kubernetes/kompose/releases/download/v1.35.0/kompose-linux-amd64 -o kompose
chmod +x kompose
sudo mv kompose /usr/local/bin/kompose
kompose version

Convert:

kompose convert -f docker-compose.yml
ls -1

You’ll typically get files like:

  • api-deployment.yaml and api-service.yaml
  • postgres-deployment.yaml and postgres-service.yaml
  • A persistentvolumeclaim manifest for each named volume (e.g. pgdata-persistentvolumeclaim.yaml)

Apply to a cluster:

kubectl apply -f .
kubectl get pods

Why you shouldn’t stop here

Kompose can’t infer:

  • Readiness and liveness probes
  • Resource requests and limits
  • Which environment variables are secrets vs plain config
  • Whether a service should be a Deployment or a StatefulSet
  • Ingress rules, TLS, and other routing intent

Treat generated manifests as a draft.


5. The “manual” translation: core building blocks

This section explains how Compose concepts map to Kubernetes, and why.

5.1 Deployments vs StatefulSets

A Compose service usually becomes a Deployment:

  • Pods are interchangeable replicas with no stable identity
  • Rolling updates and rollbacks come for free
  • Scaling is just changing replicas

Use a StatefulSet when:

  • Each replica needs its own persistent volume (via volumeClaimTemplates)
  • Replicas need stable network identities (postgres-0, postgres-1, …)
  • Startup and shutdown ordering matters

Rule of thumb: stateless application code becomes a Deployment; databases and anything that owns a durable disk become a StatefulSet.

5.2 Services: stable networking

In Compose, services discover each other by service name on a shared network.

In Kubernetes:

  • A Service gives a stable virtual IP and DNS name to a set of pods
  • Pods are matched by labels, not by container or service name
  • Cluster DNS resolves the Service name: postgres from the same namespace, postgres.demo.svc.cluster.local from anywhere in the cluster
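As a sketch using the demo stack from later in this tutorial, a minimal ClusterIP Service that gives Postgres a stable name:

```yaml
# A ClusterIP Service selects pods by label and gives them a stable DNS
# name. From the same namespace, clients use "postgres:5432"; from other
# namespaces, "postgres.demo.svc.cluster.local:5432".
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: demo
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
```

This is why a setting like DATABASE_HOST: postgres from a Compose file can keep working unchanged.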

Service types:

  • ClusterIP (default): reachable only inside the cluster; right for postgres and redis
  • NodePort: opens a port on every node; mostly for local testing
  • LoadBalancer: provisions an external load balancer in cloud environments
  • Headless (clusterIP: None): per-pod DNS records, used with StatefulSets

5.3 ConfigMaps and Secrets

Compose commonly uses:

  • Inline environment: maps
  • env_file entries
  • .env interpolation at compose time

In Kubernetes, best practice is:

  • ConfigMap for non-sensitive configuration
  • Secret for passwords, tokens, and keys
  • Injecting either as env vars (envFrom / valueFrom) or as mounted files

Create a Secret from literal values:

kubectl create secret generic app-secrets \
  --from-literal=DATABASE_URL='postgres://app:changeme@postgres:5432/appdb' \
  --from-literal=JWT_SECRET='replace-me' \
  -n demo

Create a ConfigMap from a file:

kubectl create configmap app-config \
  --from-file=./config/app.properties \
  -n demo

Important nuance: Kubernetes Secrets are base64-encoded, not encrypted by default. For production, consider:

  • Enabling encryption at rest for Secrets on the API server
  • An external secret manager (Vault, a cloud secrets service) integrated via something like the External Secrets Operator
  • Sealed Secrets or SOPS if you want encrypted secrets in git
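To see why base64 alone is not protection, round-trip a value locally:

```shell
# base64 is an encoding, not encryption: anyone who can read the Secret
# can recover the plaintext in one command.
echo -n 'changeme' | base64
# Y2hhbmdlbWU=
echo -n 'Y2hhbmdlbWU=' | base64 -d
# changeme
```

The in-cluster equivalent is kubectl get secret app-secrets -o jsonpath='{.data.DATABASE_PASSWORD}' | base64 -d.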

5.4 Storage: volumes, PVCs, and StorageClasses

Compose volumes are often local directories or named volumes. In Kubernetes, storage is abstracted:

  • PersistentVolume (PV): a piece of storage available to the cluster
  • PersistentVolumeClaim (PVC): a workload’s request for storage
  • StorageClass: how PVCs get dynamically provisioned, and by which backend

In local clusters, default storage is often hostPath-like and not production-grade. In managed clusters, a PVC typically provisions a cloud disk automatically.

Key decision: do you run stateful services (Postgres, Redis persistence) inside the cluster with PVCs and a backup strategy, or move them to managed services and keep the cluster stateless?
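A standalone PVC looks like this (a sketch; storage class names are cluster-specific, and omitting storageClassName uses the cluster default):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgdata
spec:
  accessModes:
    - ReadWriteOnce             # mountable by one node at a time; typical for a database disk
  resources:
    requests:
      storage: 10Gi
  # storageClassName: standard  # uncomment to pin a specific StorageClass
```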

5.5 Health checks: readiness vs liveness

Compose healthcheck is a single concept. Kubernetes splits it:

  • readinessProbe: “can this pod receive traffic right now?” Failure removes the pod from Service endpoints without restarting it.
  • livenessProbe: “is this process stuck beyond recovery?” Failure restarts the container.

This split matters a lot. A pod can be alive but not ready (e.g., waiting for migrations).

5.6 Resource requests/limits

Compose rarely enforces resource usage. Kubernetes scheduling depends on it:

  • requests: what the scheduler reserves for the pod when picking a node
  • limits: the runtime ceiling; CPU is throttled, and memory overuse gets the container OOM-killed

Example:

resources:
  requests:
    cpu: "250m"
    memory: "256Mi"
  limits:
    cpu: "1"
    memory: "512Mi"

Without requests, the scheduler can overpack nodes and you’ll see noisy-neighbor issues.

5.7 Ingress: HTTP routing into the cluster

Compose typically uses ports: "8080:8080" to expose services on the host.

In Kubernetes you usually:

  1. Keep application Services as plain ClusterIP
  2. Run one ingress controller (NGINX, Traefik, or a cloud-managed one)
  3. Declare Ingress resources that route hostnames and paths to Services

On kind, you can install NGINX ingress (one common approach):

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
kubectl -n ingress-nginx get pods

6. A realistic example: migrating a Compose app

We’ll migrate a small but realistic stack:

  • api: an HTTP service built from ./api, listening on 8080
  • worker: a background processor built from the same image
  • postgres: a database with a named volume for its data
  • redis: a cache/queue used by the api and worker

6.1 Example Compose file

Here’s a representative docker-compose.yml:

services:
  api:
    build: ./api
    ports:
      - "8080:8080"
    environment:
      DATABASE_HOST: postgres
      DATABASE_NAME: appdb
      DATABASE_USER: app
      DATABASE_PASSWORD: changeme
      REDIS_HOST: redis
      LOG_LEVEL: info
    depends_on:
      - postgres
      - redis

  worker:
    build: ./api
    command: ["./worker"]
    environment:
      DATABASE_HOST: postgres
      DATABASE_NAME: appdb
      DATABASE_USER: app
      DATABASE_PASSWORD: changeme
      REDIS_HOST: redis
    depends_on:
      - postgres
      - redis

  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: appdb
      POSTGRES_USER: app
      POSTGRES_PASSWORD: changeme
    volumes:
      - pgdata:/var/lib/postgresql/data

  redis:
    image: redis:7
    ports:
      - "6379:6379"

volumes:
  pgdata:

What needs translating?

  • Locally built images into images the cluster can pull (6.2)
  • The environment blocks into a ConfigMap plus a Secret (6.4)
  • depends_on into readiness probes and client-side retries (6.6)
  • The pgdata named volume into a PersistentVolumeClaim (6.5)
  • ports into ClusterIP Services, with an Ingress for the API (6.7)


6.2 Build and publish images

Kubernetes pulls images from a registry. If Compose builds locally, you must either:

Option A: push to a registry

Build and tag:

docker build -t ghcr.io/YOUR_ORG/demo-api:1.0.0 ./api
docker push ghcr.io/YOUR_ORG/demo-api:1.0.0

Option B: kind load (local only)

docker build -t demo-api:dev ./api
kind load docker-image demo-api:dev --name compose-migration

In this tutorial, we’ll assume demo-api:dev for local kind usage.


6.3 Create a namespace

Namespaces isolate resources and make cleanup easy.

kubectl create namespace demo
kubectl config set-context --current --namespace=demo

Verify:

kubectl get ns
kubectl get all

6.4 Create Secrets and ConfigMaps

Split config into:

  • app-env (ConfigMap): hostnames, database name, log level
  • app-secrets (Secret): the database password

Create a ConfigMap:

kubectl create configmap app-env \
  --from-literal=DATABASE_HOST='postgres' \
  --from-literal=DATABASE_NAME='appdb' \
  --from-literal=DATABASE_USER='app' \
  --from-literal=REDIS_HOST='redis' \
  --from-literal=LOG_LEVEL='info'

Create a Secret:

kubectl create secret generic app-secrets \
  --from-literal=DATABASE_PASSWORD='changeme'

Inspect:

kubectl get configmap app-env -o yaml
kubectl get secret app-secrets -o yaml

6.5 Deploy Postgres (StatefulSet)

In Kubernetes, Postgres needs:

  • A Service named postgres, so the DATABASE_HOST: postgres setting keeps working unchanged
  • A StatefulSet with one replica instead of a Deployment
  • A volumeClaimTemplate standing in for the pgdata named volume
  • pg_isready-based readiness and liveness probes

Create postgres.yaml:

apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  ports:
    - name: postgres
      port: 5432
      targetPort: 5432
  selector:
    app: postgres
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          ports:
            - containerPort: 5432
              name: postgres
          env:
            - name: POSTGRES_DB
              valueFrom:
                configMapKeyRef:
                  name: app-env
                  key: DATABASE_NAME
            - name: POSTGRES_USER
              valueFrom:
                configMapKeyRef:
                  name: app-env
                  key: DATABASE_USER
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: DATABASE_PASSWORD
          volumeMounts:
            - name: pgdata
              mountPath: /var/lib/postgresql/data
          readinessProbe:
            exec:
              command: ["sh", "-c", "pg_isready -U $POSTGRES_USER -d $POSTGRES_DB"]
            initialDelaySeconds: 5
            periodSeconds: 5
          livenessProbe:
            exec:
              command: ["sh", "-c", "pg_isready -U $POSTGRES_USER -d $POSTGRES_DB"]
            initialDelaySeconds: 20
            periodSeconds: 10
          resources:
            requests:
              cpu: "100m"
              memory: "256Mi"
            limits:
              cpu: "1"
              memory: "1Gi"
  volumeClaimTemplates:
    - metadata:
        name: pgdata
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi

Apply:

kubectl apply -f postgres.yaml
kubectl get pods -w

Why this design:

  • StatefulSet + volumeClaimTemplates gives Postgres a durable PVC that survives pod restarts and reschedules
  • The Service name postgres preserves the hostname your app already uses under Compose
  • pg_isready probes keep clients away until the database actually accepts connections
  • Requests and limits stop the database from starving, or being starved by, its neighbors
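One practical wrinkle, in case Postgres crash-loops on first start: some storage provisioners create a lost+found directory at the volume root, and initdb refuses to initialize a non-empty directory. A common workaround is to point PGDATA one level below the mount; this is an extra env entry for the postgres container, shown here in isolation:

```yaml
# Extra env entry for the postgres container: initdb then writes into an
# empty subdirectory instead of the volume root.
- name: PGDATA
  value: /var/lib/postgresql/data/pgdata
```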


6.6 Deploy the API (Deployment)

We’ll create:

  • A ClusterIP Service api exposing port 8080
  • A Deployment with 2 replicas, env injected via envFrom, and HTTP probes

Create api.yaml:

apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
    - name: http
      port: 8080
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: demo-api:dev
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
              name: http
          envFrom:
            - configMapRef:
                name: app-env
            - secretRef:
                name: app-secrets
          readinessProbe:
            httpGet:
              path: /health/ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
            timeoutSeconds: 2
            failureThreshold: 6
          livenessProbe:
            httpGet:
              path: /health/live
              port: 8080
            initialDelaySeconds: 20
            periodSeconds: 10
            timeoutSeconds: 2
          resources:
            requests:
              cpu: "200m"
              memory: "256Mi"
            limits:
              cpu: "1"
              memory: "512Mi"

Apply:

kubectl apply -f api.yaml
kubectl get deploy,po,svc
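The Compose stack also has redis and the worker. The worker is simply another Deployment built from the same image (command ./worker, same envFrom, no Service, since nothing connects to it). Redis, which keeps no persistent data in our Compose file, can be sketched as a Deployment plus ClusterIP Service; note there is no host port mapping, because in-cluster clients only need the Service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis
  ports:
    - name: redis
      port: 6379
      targetPort: 6379
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7
          ports:
            - containerPort: 6379
              name: redis
```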

What about depends_on?

Kubernetes won’t start api “after postgres” in a strict sense. Instead:

  • api pods start whenever they’re scheduled; readiness probes keep them out of the Service until they’re actually healthy
  • Your application should retry its DB and Redis connections on startup instead of exiting permanently

If your API must run DB migrations, consider:

  • An initContainer that waits for Postgres (and optionally runs migrations) before the app container starts
  • A dedicated Job that runs migrations once per release, gating the rollout
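One common pattern is an init container that blocks until Postgres answers; a sketch to add to the api pod spec:

```yaml
# Runs to completion before the app containers start; pg_isready polls
# the postgres Service until it accepts connections.
initContainers:
  - name: wait-for-postgres
    image: postgres:16
    command:
      - sh
      - -c
      - until pg_isready -h postgres -p 5432; do sleep 2; done
```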


6.7 Expose traffic (Service + Ingress)

For local kind, you can test quickly using port-forward:

kubectl port-forward svc/api 8080:8080
curl -i http://localhost:8080/health/live

For a more Kubernetes-like approach, use an Ingress. First ensure you have an ingress controller installed (see earlier NGINX install for kind).

Create ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
spec:
  rules:
    - host: api.localtest.me
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 8080

Apply:

kubectl apply -f ingress.yaml
kubectl get ingress

Then test (for many local setups, localtest.me resolves to 127.0.0.1):

curl -i http://api.localtest.me/health/live

In a managed cluster, you’d typically add TLS and maybe use cert-manager. The important concept is: Ingress routes HTTP(S) to Services, not directly to pods.
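With cert-manager installed, TLS is mostly two additions to the Ingress. A sketch, where the issuer name letsencrypt-prod and the hostname api.example.com are placeholders for your own setup:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # hypothetical issuer name
spec:
  tls:
    - hosts:
        - api.example.com
      secretName: api-tls     # cert-manager creates and renews this Secret
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 8080
```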


6.8 Validate and debug

Kubernetes debugging is a skill. These commands cover most early migration issues.

Watch pods and events

kubectl get pods -w
kubectl get events --sort-by=.metadata.creationTimestamp

Inspect logs

kubectl logs deploy/api
kubectl logs -f deploy/api
kubectl logs statefulset/postgres

If multiple replicas:

kubectl logs -l app=api --tail=100

Describe resources (often reveals the real problem)

kubectl describe pod <pod-name>
kubectl describe deploy api
kubectl describe statefulset postgres

Exec into a container

kubectl exec -it deploy/api -- sh

Test DNS and connectivity from inside the cluster:

# inside the api pod
nslookup postgres
nc -vz postgres 5432
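If your application image is too minimal to carry these tools, a throwaway pod works just as well:

```shell
# Starts an interactive busybox pod and deletes it when the shell exits.
kubectl run -it --rm debug --image=busybox:1.36 --restart=Never -- sh
```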

Common early failures

  • ImagePullBackOff: the cluster can’t pull the image (wrong tag, forgotten kind load, or missing registry credentials)
  • CrashLoopBackOff: the container keeps exiting; check kubectl logs --previous for the crash output
  • Pending pods: no node satisfies the resource requests, or a PVC can’t be bound
  • Failing readiness: the probe path or port doesn’t match what the app actually serves


7. Production hardening checklist

Once it runs, make it reliable and secure. Here’s what typically distinguishes “it works” from “it’s production-ready”.

Reliability and rollout safety

  • Run at least 2 replicas of stateless services and define a PodDisruptionBudget
  • Tune the rolling update strategy (maxUnavailable, maxSurge)
  • Pin image tags (never latest) and rehearse kubectl rollout undo

Security

  • Run containers as non-root with a restrictive securityContext
  • Add NetworkPolicies so only the API and worker can reach Postgres
  • Move secrets to a manager and enable encryption at rest
  • Scope RBAC tightly for CI/CD service accounts

Observability

  • Structured logs to stdout/stderr, collected cluster-wide
  • Metrics in Prometheus format, with dashboards
  • Alerts on probe failures, restarts, and resource saturation

Scaling

  • HorizontalPodAutoscaler for the API and worker
  • Resource requests tuned from observed usage, not guesses

Data

  • Automated backups for Postgres volumes (or move to a managed database)
  • Tested restores, not just backups
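As one concrete rollout-safety item, a PodDisruptionBudget that keeps at least one api replica running through voluntary disruptions such as node drains:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: api
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: api
```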


8. Migration strategy: incremental steps that reduce risk

A practical migration is usually staged:

  1. Containerize cleanly
    • Ensure each service has a reliable Dockerfile
    • Ensure startup is deterministic and supports retries
  2. Run on Kubernetes locally
    • Use kind/minikube
    • Fix image, config, and networking assumptions
  3. Externalize state
    • Move DB/redis to managed services if possible
    • Or at least validate PVC behavior and backup approach
  4. Introduce Ingress and TLS
    • Route traffic through Ingress controller
    • Add TLS termination and certificates
  5. Add CI/CD
    • Build and push images
    • Apply manifests (or Helm/Kustomize)
  6. Add autoscaling and policies
    • HPA, PDB, NetworkPolicies, resource quotas
  7. Cut over gradually
    • Blue/green or canary
    • DNS switch or load balancer weighting
    • Rollback plan tested

The goal is to avoid a “big bang” rewrite.


9. Common pitfalls when moving from Compose

Pitfall: assuming container start order is guaranteed

Compose’s depends_on often creates a false sense of safety. Even in Compose, it doesn’t guarantee the dependency is ready, only started. In Kubernetes, you must:

  • Make services retry their dependencies at startup
  • Gate traffic with readiness probes
  • Use init containers or Jobs for hard ordering requirements such as migrations

Pitfall: using local bind mounts for everything

Bind mounts are convenient locally, but in Kubernetes:

  • hostPath volumes pin pods to specific nodes and break rescheduling
  • Node-local data disappears when a pod lands elsewhere

Use:

  • ConfigMaps and Secrets for configuration files
  • PVCs for data that must persist
  • Object storage for shared files where possible

Pitfall: exposing every service with NodePort/LoadBalancer

In Compose, it’s common to map many ports to localhost. In Kubernetes:

  • Keep internal services (postgres, redis) ClusterIP-only
  • Route HTTP through a single Ingress controller
  • Reserve LoadBalancer Services for non-HTTP entry points that truly need them

Pitfall: skipping resource requests

Without requests, scheduling and autoscaling become unpredictable. Set them early, even if rough.

Pitfall: putting secrets in env files committed to git

Kubernetes makes it easy to separate secrets, but you still need a secure workflow. Don’t commit real secrets; use secret managers and CI injection.


10. Where to go next

Once you have a basic migration working, consider adopting one of these packaging approaches:

  • Kustomize: keep plain YAML bases and patch them per environment (dev/staging/prod) without templating
  • Helm: template the manifests into a versioned, configurable chart, useful when others will install your app
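As a sketch of how the manifests from this tutorial could be organized under Kustomize (the directory layout is a suggestion, not a requirement):

```yaml
# kustomize/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: demo
resources:
  - postgres.yaml
  - api.yaml
  - ingress.yaml
```

Applied with kubectl apply -k kustomize/base; overlays then patch replicas, hostnames, and image tags per environment.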

Also consider replacing in-cluster stateful services with managed equivalents where possible. Kubernetes excels at stateless workloads; it can run databases, but you take on operational responsibility.


Appendix: Minimal command recap

Create namespace and config:

kubectl create namespace demo
kubectl config set-context --current --namespace=demo

kubectl create configmap app-env \
  --from-literal=DATABASE_HOST='postgres' \
  --from-literal=DATABASE_NAME='appdb' \
  --from-literal=DATABASE_USER='app' \
  --from-literal=REDIS_HOST='redis' \
  --from-literal=LOG_LEVEL='info'

kubectl create secret generic app-secrets \
  --from-literal=DATABASE_PASSWORD='changeme'

Apply manifests:

kubectl apply -f postgres.yaml
kubectl apply -f api.yaml
kubectl apply -f ingress.yaml

Observe and debug:

kubectl get pods -w
kubectl logs -f deploy/api
kubectl describe pod <pod>
kubectl port-forward svc/api 8080:8080

From here, take your actual docker-compose.yml and repeat the mapping service by service: choose Deployment vs StatefulSet for each, define probes, and settle on a clean directory structure for Kustomize or Helm, keeping the migration incremental throughout.