
Docker Compose 101: Orchestrating Multi-Container Apps for Local Development



Docker Compose is the most practical way to run multiple containers together on your laptop as a single “application”: a web server plus a database, a cache, a queue, a worker, an admin UI, and so on. Instead of remembering long docker run ... commands and wiring networks/volumes manually, you describe the whole stack in one file and manage it with a handful of commands.

This tutorial is a hands-on, end-to-end guide focused on local development. You’ll build a small multi-container app, learn how networking works, persist data correctly, manage environment variables, run one-off tasks (migrations, shells), understand profiles, healthchecks, logs, and common pitfalls.


Prerequisites

You'll need Docker installed: Docker Desktop on macOS or Windows, or Docker Engine plus the Compose plugin on Linux. Basic command-line familiarity is assumed.

Verify your installation:

docker version
docker compose version

Note: Modern Docker uses docker compose (a plugin). Older installs used docker-compose. This tutorial uses docker compose.


Why Compose for local development?

A typical local stack might require:

- A web/API server
- A relational database (e.g., Postgres)
- A cache or queue (e.g., Redis)
- Supporting tools such as an admin UI

Without Compose, you'd have to:

- Run several long docker run commands in the right order
- Create networks by hand and attach each container to them
- Create and reference volumes manually
- Remember every port mapping and environment variable

Compose solves this by treating your app as a project with named services, shared networks, and declarative configuration.
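To make the idea concrete, here is a minimal illustrative sketch (service names and images are placeholders, not part of this tutorial's stack): two named services declared once, sharing one project network.

```yaml
# Minimal illustrative compose.yaml: two named services on the project's default network
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"   # host:container
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: example
```

A single docker compose up -d starts both containers, wired together; docker compose down tears everything back down.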


Core concepts (mental model)

1) Services

A service is a container definition (or a group of identical containers) in compose.yaml. Example services: web, db, redis.

2) Networks

Compose creates a default network for the project. Services can reach each other by service name as a DNS hostname.
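You can also declare named networks to segment services explicitly. A sketch (the network name backend and the api image are illustrative):

```yaml
# Sketch: attach services to an explicitly named network instead of the default
services:
  api:
    image: example/api   # illustrative image name
    networks: [backend]
  db:
    image: postgres:16-alpine
    networks: [backend]
networks:
  backend:
```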

3) Volumes

Volumes persist data beyond the container lifecycle. For databases, volumes are essential.

4) Project name

Compose groups resources (containers, networks, volumes) under a project name derived from the directory name, or set via --project-name / COMPOSE_PROJECT_NAME.
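If you don't want the project name tied to the directory, newer Compose releases also let you pin it inside the file with the top-level name element. A sketch:

```yaml
# Sketch: pin the project name in compose.yaml itself
name: compose-101
services:
  db:
    image: postgres:16-alpine
```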


Project structure

Create a new folder:

mkdir compose-101
cd compose-101

We’ll build:

- api: a small Node.js HTTP server, built from ./api
- db: Postgres 16 with a persistent volume
- adminer: a lightweight web UI for browsing the database

Directory layout:

compose-101/
  compose.yaml
  api/
    Dockerfile
    package.json
    server.js
  .env

Step 1: Create the API service (Node.js)

Create api/package.json:

{
  "name": "compose-101-api",
  "version": "1.0.0",
  "main": "server.js",
  "type": "commonjs",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "pg": "^8.11.5"
  }
}

Create api/server.js:

const http = require("http");
const { Client } = require("pg");

const PORT = process.env.PORT || 3000;

function makePgClient() {
  return new Client({
    host: process.env.PGHOST || "db",
    port: Number(process.env.PGPORT || 5432),
    user: process.env.PGUSER || "app",
    password: process.env.PGPASSWORD || "app",
    database: process.env.PGDATABASE || "appdb",
  });
}

async function ensureSchema() {
  const client = makePgClient();
  await client.connect();
  await client.query(`
    CREATE TABLE IF NOT EXISTS visits (
      id SERIAL PRIMARY KEY,
      visited_at TIMESTAMPTZ NOT NULL DEFAULT now()
    );
  `);
  await client.end();
}

async function recordVisit() {
  const client = makePgClient();
  await client.connect();
  await client.query(`INSERT INTO visits DEFAULT VALUES;`);
  const { rows } = await client.query(`SELECT count(*)::int AS count FROM visits;`);
  await client.end();
  return rows[0].count;
}

const server = http.createServer(async (req, res) => {
  if (req.url === "/health") {
    res.writeHead(200, { "content-type": "application/json" });
    res.end(JSON.stringify({ ok: true }));
    return;
  }

  try {
    await ensureSchema();
    const count = await recordVisit();
    res.writeHead(200, { "content-type": "application/json" });
    res.end(JSON.stringify({ message: "Hello from Compose!", visits: count }));
  } catch (err) {
    res.writeHead(500, { "content-type": "application/json" });
    res.end(JSON.stringify({ error: err.message }));
  }
});

server.listen(PORT, () => {
  console.log(`API listening on port ${PORT}`);
});

Create api/Dockerfile:

FROM node:20-alpine

WORKDIR /app

# Install dependencies first for better layer caching
COPY package.json package-lock.json* ./
RUN npm install --omit=dev

COPY server.js ./

EXPOSE 3000
CMD ["npm", "start"]
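One optional refinement, not required for this tutorial: an api/.dockerignore keeps local artifacts like node_modules out of the build context, which speeds up builds and avoids copying host-installed modules into the image.

```
node_modules
npm-debug.log
```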

Step 2: Add environment variables

Create .env in the project root:

cat > .env <<'EOF'
POSTGRES_USER=app
POSTGRES_PASSWORD=app
POSTGRES_DB=appdb

# Expose API on localhost:8080
API_PORT=8080
EOF

Compose automatically reads a .env file in the same directory as compose.yaml and uses it for variable substitution in the YAML. Note that this is different from the per-service env_file option, which injects variables into a container's runtime environment. For local dev, .env is common; for production you’d use a secrets manager or CI/CD env vars.
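Compose's substitution syntax also supports inline defaults with ${VAR:-default}, so the file still works when a variable is missing from both .env and the environment. A sketch:

```yaml
# Sketch: fall back to 8080 when API_PORT is not set in the environment or .env
services:
  api:
    ports:
      - "${API_PORT:-8080}:3000"
```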


Step 3: Write compose.yaml

Create compose.yaml:

services:
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - pgdata:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 5s
      timeout: 3s
      retries: 20

  adminer:
    image: adminer:4
    ports:
      - "8081:8080"
    depends_on:
      db:
        condition: service_healthy

  api:
    build:
      context: ./api
    environment:
      PORT: 3000
      PGHOST: db
      PGPORT: 5432
      PGUSER: ${POSTGRES_USER}
      PGPASSWORD: ${POSTGRES_PASSWORD}
      PGDATABASE: ${POSTGRES_DB}
    ports:
      - "${API_PORT}:3000"
    depends_on:
      db:
        condition: service_healthy

volumes:
  pgdata:

What this file is doing (deep explanation)

- db runs Postgres 16, with credentials injected from .env via variable substitution. Data lives in the named volume pgdata, and port 5432 is published so host tools can connect. The healthcheck runs pg_isready until Postgres actually accepts connections, not merely until the container starts.
- adminer is a web UI published on localhost:8081. Its depends_on waits for db to report healthy before starting.
- api is built from ./api rather than pulled from a registry. It reaches Postgres at the hostname db (the service name) over the project network, and is published on the host at ${API_PORT} (8080 in our .env). condition: service_healthy means the API container only starts once the database is genuinely ready.
- The top-level volumes: block declares pgdata as a named volume managed by Docker.

Step 4: Start the stack

From the project root:

docker compose up -d --build

Check status:

docker compose ps

Tail logs:

docker compose logs -f api

Test the API:

curl -s http://localhost:8080 | jq .

If you don’t have jq, just run:

curl -s http://localhost:8080

You should see JSON with an increasing visits count each time you refresh.


Step 5: Use service-to-service networking correctly

Inside Compose, containers communicate over the project network. The key rule: connect to other services by their service name (e.g., db:5432), never by localhost.

If your API container tried to connect to localhost:5432, it would be connecting to itself, not the database.

You can verify DNS resolution from inside the API container:

docker compose exec api sh -lc "getent hosts db && nc -zv db 5432"

Step 6: Run one-off commands (migrations, shells, debugging)

Compose makes it easy to run ad-hoc commands in the context of a service.

Open a shell in the API container

docker compose exec api sh

Run a one-off container (not attached to the running one)

This is useful for tasks like database migrations, scripts, or tests:

docker compose run --rm api node -e "console.log('one-off task')"

Connect to Postgres via psql from inside the DB container

docker compose exec db sh -c 'psql -U "$POSTGRES_USER" -d "$POSTGRES_DB"'

The single quotes make the variables expand inside the container, where the Postgres image has already set them. You can also pass explicit values:

docker compose exec db psql -U app -d appdb

Then in psql:

SELECT now();
SELECT count(*) FROM visits;

Exit with \q.


Step 7: Understand volumes and persistence

Stop the stack:

docker compose down

Bring it back:

docker compose up -d

Your visits count should continue increasing from the previous value because the pgdata volume persisted.

Inspect volumes

List volumes:

docker volume ls

See which volume belongs to this project (it will be prefixed by the project name). Inspect it:

docker volume inspect compose-101_pgdata

Reset the database (dangerous)

If you want a clean slate:

docker compose down -v

The -v removes named volumes declared in the Compose file, including pgdata.


Step 8: Rebuilds, restarts, and “why didn’t my change apply?”

Rebuild the API image

If you change server.js in this simple setup, you must rebuild:

docker compose build api
docker compose up -d

Or in one command:

docker compose up -d --build

Restart a single service

docker compose restart api

Recreate containers even if config didn’t change

docker compose up -d --force-recreate

Step 9: Bind mounts for live code editing (dev mode)

For local development, you often want to edit code on your host and have the container use it immediately. That’s done with a bind mount.

Update the api service in compose.yaml (replace the api section with this dev-friendly version):

  api:
    build:
      context: ./api
    environment:
      PORT: 3000
      PGHOST: db
      PGPORT: 5432
      PGUSER: ${POSTGRES_USER}
      PGPASSWORD: ${POSTGRES_PASSWORD}
      PGDATABASE: ${POSTGRES_DB}
    ports:
      - "${API_PORT}:3000"
    volumes:
      - ./api:/app
    command: sh -lc "npm install && npm start"
    depends_on:
      db:
        condition: service_healthy

Now changes to api/server.js are reflected immediately (though Node itself won’t auto-restart unless you add a watcher like nodemon). This pattern is common: bind-mount the source directory for live editing, install dependencies at container start, and keep the built image as the fallback for CI or production-like runs.

Apply changes:

docker compose up -d --build
docker compose logs -f api

Tip: For a more realistic dev setup, add nodemon and run it as the command. For brevity, we’re keeping it simple.
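As a sketch of that more realistic setup (fetching nodemon via npx at container start is an assumption here; in a real project you'd add it to package.json devDependencies):

```yaml
# Sketch: dev-mode api service that restarts Node on file changes via nodemon
  api:
    build:
      context: ./api
    volumes:
      - ./api:/app
    command: sh -lc "npm install && npx --yes nodemon server.js"
```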


Step 10: Logs and observability

Follow logs for all services

docker compose logs -f

Tail only the last N lines

docker compose logs --tail=100 -f db

View resource usage

docker stats

Inspect a container

docker compose ps
docker inspect <container_id>

Step 11: Healthchecks and readiness (why it matters)

A frequent local-dev problem: the API starts, tries to connect to Postgres, fails, and exits (or keeps retrying). Containers are not “ready” just because they started.

You can see health status in:

docker compose ps

If Postgres healthcheck fails, view DB logs:

docker compose logs db

Step 12: Profiles (optional services)

Profiles let you enable/disable services depending on what you’re doing. For example, you might only run adminer when you need it.

Modify adminer:

  adminer:
    image: adminer:4
    ports:
      - "8081:8080"
    profiles: ["tools"]
    depends_on:
      db:
        condition: service_healthy

Now:

docker compose up -d
docker compose --profile tools up -d

This is a clean way to keep your default stack minimal.
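Profiles also pair well with one-off task services that should never start with the default stack. A sketch (the migrate service and its command are hypothetical placeholders):

```yaml
# Sketch: a task-only service gated behind a "tasks" profile
  migrate:
    build:
      context: ./api
    profiles: ["tasks"]
    command: node -e "console.log('run migrations here')"
    depends_on:
      db:
        condition: service_healthy
```

You'd run it on demand with docker compose --profile tasks run --rm migrate.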


Step 13: Common Compose commands (cheat sheet)

Start / stop

docker compose up -d
docker compose down

Rebuild

docker compose up -d --build
docker compose build

List services/containers

docker compose ps
docker compose config

docker compose config is extremely useful: it prints the fully-resolved configuration after variable substitution and merges.

Execute commands in running containers

docker compose exec api sh
docker compose exec db psql -U app -d appdb

One-off tasks

docker compose run --rm api node -e "console.log('hello')"

Remove everything (including volumes)

docker compose down -v --remove-orphans

Step 14: Best practices for local development

1) Keep secrets out of the Compose file

Use .env for local-only defaults, and avoid committing real credentials.

Add .env to .gitignore if it contains sensitive values:

echo ".env" >> .gitignore

2) Prefer named volumes for databases

Bind mounting database directories can lead to permission/performance issues across OSes.

3) Don’t publish every port

Only publish what you need on localhost. Internal services can stay unexposed and still be reachable by other containers.

For example, you could remove ports from db and connect to it only from api and adminer. If you still want host access for tools, keep the port mapping.
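Concretely, that variant of the db service is just the baseline definition with the ports block removed:

```yaml
# Sketch: Postgres with no host port mapping — api and adminer still reach it at db:5432
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - pgdata:/var/lib/postgresql/data
```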

4) Use docker compose config to debug

If variables aren’t being substituted as expected:

docker compose config

5) Be explicit about readiness

Use healthchecks for databases, queues, and anything that takes time to initialize.
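If you later add, say, a Redis cache, the same pattern applies. A sketch (the service name and timing values are illustrative):

```yaml
# Sketch: readiness check for a hypothetical redis service
  redis:
    image: redis:7-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 10
```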


Troubleshooting (real issues you will hit)

“Port is already allocated”

You mapped a host port that’s in use.

Check what’s listening:

lsof -iTCP -sTCP:LISTEN -n -P | grep 8080

Fix by changing API_PORT in .env or updating ports.

“Connection refused” from API to DB

Common causes:

- The database isn’t ready yet (no healthcheck, or depends_on without condition: service_healthy)
- The API is connecting to localhost instead of the service name db
- Wrong credentials or database name in the environment variables

Verify from inside the API container:

docker compose exec api sh -lc "nc -zv db 5432"

“My changes aren’t reflected”

If you didn’t set up a bind mount, you must rebuild the image:

docker compose up -d --build

If you did set up a bind mount but the process doesn’t reload, add a watcher (e.g., nodemon) or restart the service:

docker compose restart api

“I changed POSTGRES_* but it didn’t apply”

Postgres initialization variables only apply on first initialization. If the volume already exists, Postgres keeps the existing database.

Reset volume:

docker compose down -v
docker compose up -d
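Relatedly, if you want seed data applied automatically on first initialization, the official Postgres image runs scripts from /docker-entrypoint-initdb.d. A sketch assuming a hypothetical ./initdb directory of .sql files:

```yaml
# Sketch: SQL files in ./initdb run once, when the data volume is first created
  db:
    image: postgres:16-alpine
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./initdb:/docker-entrypoint-initdb.d:ro
```

Like the POSTGRES_* variables, these scripts only run against a fresh volume; reset with docker compose down -v to re-trigger them.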

Where to go next

You now have a working multi-container local environment with:

- An API, a database, and an admin UI defined in a single compose.yaml
- Service-to-service networking by service name
- Persistent data in a named volume
- Healthchecks, profiles, bind mounts, and one-off task patterns

Next steps you can explore:

- Multiple Compose files and overrides (e.g., a compose.override.yaml for dev-only tweaks)
- docker compose watch for automatic syncs and rebuilds on file changes
- Compose secrets and configs for more realistic credential handling
- Adding more services (a cache, a queue, a worker) to the same project


Full final compose.yaml (baseline version)

If you want a clean copy (without the bind-mount dev tweak), here is the baseline again:

services:
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - pgdata:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 5s
      timeout: 3s
      retries: 20

  adminer:
    image: adminer:4
    ports:
      - "8081:8080"
    depends_on:
      db:
        condition: service_healthy

  api:
    build:
      context: ./api
    environment:
      PORT: 3000
      PGHOST: db
      PGPORT: 5432
      PGUSER: ${POSTGRES_USER}
      PGPASSWORD: ${POSTGRES_PASSWORD}
      PGDATABASE: ${POSTGRES_DB}
    ports:
      - "${API_PORT}:3000"
    depends_on:
      db:
        condition: service_healthy

volumes:
  pgdata:

With this, you can reliably run:

docker compose up -d --build
curl http://localhost:8080
docker compose down