Getting Started with Docker: A Beginner’s Guide
Docker is a platform for building, shipping, and running applications in containers. Containers package your application code together with its dependencies (runtime, libraries, tools, and basic filesystem content) so it runs consistently across different environments: your laptop, a CI server, or the cloud.
This tutorial is a practical, command-heavy introduction designed for beginners. You will learn what Docker is, how it works, and how to use it to run and build containerized apps.
Table of Contents
- 1. What Docker Is (and Why It Exists)
- 2. Key Concepts: Images, Containers, Registries
- 3. Installing Docker
- 4. Your First Container
- 5. Understanding docker run
- 6. Managing Containers
- 7. Working with Images
- 8. Building Images with a Dockerfile
- 9. Volumes: Persisting Data
- 10. Bind Mounts: Live-Editing Files from Your Host
- 11. Networking Basics
- 12. Environment Variables and Configuration
- 13. Docker Compose (Multi-Container Apps)
- 14. Debugging and Troubleshooting
- 15. Best Practices and Next Steps
1. What Docker Is (and Why It Exists)
Before Docker, a common problem was:
- “It works on my machine.”
- “The server has a different version of Python/Node/Java.”
- “We forgot to install a system dependency.”
- “The deployment steps are complicated and inconsistent.”
Docker solves this by using containers, which are lightweight, isolated environments that run on the same host OS kernel but have their own:
- filesystem view
- process tree
- network interfaces (virtualized)
- environment variables
Containers vs Virtual Machines (VMs)
VMs virtualize hardware. Each VM includes a full guest OS, which makes them heavier (more disk, more RAM, slower boot).
Containers virtualize at the OS level. They share the host kernel and isolate processes using kernel features (namespaces, cgroups). This makes containers:
- fast to start (often milliseconds)
- small (images can be tens of MB)
- efficient (many containers can run on one machine)
Docker is not the only container technology, but it is the most common entry point and ecosystem.
2. Key Concepts: Images, Containers, Registries
Image
An image is an immutable template used to create containers. Think of it like a “snapshot” of a filesystem plus metadata (default command, environment variables, exposed ports).
Images are built in layers. Each layer represents a change (like installing packages or copying files). Layering makes builds faster and enables caching.
Container
A container is a running (or stopped) instance of an image. When you start a container, Docker adds a thin writable layer on top of the image layers. Changes you make inside a container (creating files, installing packages) live in that writable layer—unless you use volumes.
Registry
A registry stores and distributes images. The default public registry is Docker Hub, but companies often use private registries.
Common image name formats:
- nginx (defaults to Docker Hub library images)
- nginx:1.25 (tagged version)
- ghcr.io/org/app:1.0.0 (GitHub Container Registry)
- registry.example.com/team/app:prod
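To make the naming pattern concrete, here is a simplified Python sketch of how such a reference splits into registry, repository, and tag. This is illustrative only; real reference parsing also handles digests and other edge cases.

```python
def parse_image_ref(ref: str):
    """Split an image reference into (registry, repository, tag).

    Simplified sketch: ignores digests and assumes Docker Hub defaults.
    """
    registry = "docker.io"
    rest = ref
    # A registry prefix is present if the first path component
    # contains a '.' or ':' (e.g. "ghcr.io", "localhost:5000")
    first, _, remainder = ref.partition("/")
    if remainder and ("." in first or ":" in first):
        registry, rest = first, remainder
    # The tag, if any, is a ':' in the last path component
    if ":" in rest.rsplit("/", 1)[-1]:
        repo, _, tag = rest.rpartition(":")
    else:
        repo, tag = rest, "latest"
    # Docker Hub "official" images live under an implicit library/ namespace
    if registry == "docker.io" and "/" not in repo:
        repo = "library/" + repo
    return registry, repo, tag

print(parse_image_ref("nginx"))            # ('docker.io', 'library/nginx', 'latest')
print(parse_image_ref("ghcr.io/org/app:1.0.0"))  # ('ghcr.io', 'org/app', '1.0.0')
```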
3. Installing Docker
Docker Desktop (macOS/Windows)
Install Docker Desktop from Docker’s official site. It includes:
- Docker Engine
- Docker CLI
- Docker Compose
- a lightweight VM to run Linux containers
After installation, verify:
docker version
docker info
Linux
On Linux, you typically install Docker Engine via your package manager (steps vary by distro). After installing, verify:
docker version
docker info
If you get permission errors running Docker commands, you may need to run as root or add your user to the docker group (varies by distro). If you do add yourself to the group, log out and back in for it to take effect.
4. Your First Container
Let’s run a container based on the hello-world image:
docker run hello-world
What happens the first time you run this?
- Docker checks if the image exists locally.
- If not, Docker pulls it from the registry.
- Docker creates a container from the image.
- Docker runs the container’s default command.
- The container prints a message and exits.
Now run an interactive shell inside a small Linux image:
docker run -it --rm alpine:3.19 sh
- -it gives you an interactive terminal.
- --rm removes the container when it exits (keeps your system clean).
- alpine:3.19 is a minimal Linux distribution.
- sh is the shell command to run.
Inside the container, try:
uname -a
cat /etc/os-release
ls -la
Exit:
exit
5. Understanding docker run
docker run is one of the most important commands. It combines multiple operations:
- pull (if needed)
- create
- start
- attach
Common docker run options
Name your container
docker run --name my-nginx nginx:1.25
Run in the background (detached)
docker run -d --name web nginx:1.25
Map ports
Containers have their own network namespace. To access a container service from your host, you map ports:
docker run -d --name web -p 8080:80 nginx:1.25
-p 8080:80 means: host port 8080 → container port 80
Now open:
http://localhost:8080
Set environment variables
docker run -d --name demo -e MY_VAR=hello alpine:3.19 sleep 3600
Check the environment inside:
docker exec -it demo sh -lc 'echo $MY_VAR'
Limit resources (basic examples)
docker run -d --name limited --memory 256m --cpus 0.5 nginx:1.25
Resource limits are important for production stability.
6. Managing Containers
List running containers
docker ps
List all containers (including stopped)
docker ps -a
View logs
docker logs web
Follow logs in real time:
docker logs -f web
Execute a command in a running container
docker exec -it web sh
Note: Minimal images (such as nginx:alpine or slim variants) may not include bash. Use sh unless you know bash exists.
Stop and start
docker stop web
docker start web
Remove a container
You must stop it first (unless you force):
docker rm web
Force remove:
docker rm -f web
Inspect container metadata
docker inspect web
This outputs JSON with networking info, mounts, environment variables, and more. To extract a specific field:
docker inspect -f '{{.NetworkSettings.IPAddress}}' web
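If Go templates feel opaque, the same JSON can be post-processed in a script instead. A minimal Python sketch (the sample inspect output below is trimmed and hypothetical):

```python
import json

def container_ip(inspect_json: str) -> str:
    """Pull the bridge-network IP out of `docker inspect` output.

    docker inspect prints a JSON array with one object per container,
    so we take the first element before drilling in.
    """
    info = json.loads(inspect_json)[0]
    return info["NetworkSettings"]["IPAddress"]

# Trimmed, hypothetical sample of `docker inspect web` output:
sample = '[{"NetworkSettings": {"IPAddress": "172.17.0.2"}}]'
print(container_ip(sample))  # 172.17.0.2
```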
7. Working with Images
List images
docker images
Pull an image
docker pull nginx:1.25
Remove an image
docker rmi nginx:1.25
If an image is used by a container, you must remove the container first.
Tag an image
Tagging is how you name an image for pushing to a registry:
docker tag myapp:1.0 myregistry.example.com/team/myapp:1.0
Push an image
You must be logged in and have permission:
docker login myregistry.example.com
docker push myregistry.example.com/team/myapp:1.0
8. Building Images with a Dockerfile
A Dockerfile is a text file that describes how to build an image.
Example: A simple Python web app
Create a folder:
mkdir docker-python-demo
cd docker-python-demo
Create app.py:
from flask import Flask

app = Flask(__name__)

@app.get("/")
def home():
    return "Hello from Docker!\n"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
Create requirements.txt:
flask==3.0.0
Now create a Dockerfile:
# Use a small Python base image
FROM python:3.12-slim
# Create and set the working directory
WORKDIR /app
# Copy dependency list first to leverage Docker layer caching
COPY requirements.txt .
# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY app.py .
# Document the port (does not publish it by itself)
EXPOSE 5000
# Default command
CMD ["python", "app.py"]
Build the image
docker build -t python-demo:1.0 .
Key points:
- -t python-demo:1.0 tags the image with a name and version.
- . is the build context (files available to COPY).
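Everything in the build context gets sent to the Docker daemon, so it is worth adding a .dockerignore file next to the Dockerfile. A typical sketch for a Python project (entries are illustrative):

```
# .dockerignore: exclude files from the build context
.git
__pycache__/
*.pyc
.venv/
```

Smaller contexts upload faster and avoid accidentally copying local artifacts into the image.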
Run the container
docker run --rm -p 5000:5000 python-demo:1.0
Visit:
http://localhost:5000
Stop with Ctrl+C.
Why order matters in a Dockerfile
Docker caches layers. If requirements.txt doesn’t change, Docker can reuse the dependency installation layer and rebuild much faster. That’s why you often:
- copy dependency files
- install dependencies
- copy the rest of the source code
View build history (layers)
docker history python-demo:1.0
9. Volumes: Persisting Data
Containers are designed to be disposable. If you remove a container, its writable layer is gone. For data you want to keep (databases, uploads, caches), use volumes.
Create a volume
docker volume create mydata
docker volume ls
Use a volume with a container
Example using PostgreSQL:
docker run -d \
--name pg \
-e POSTGRES_PASSWORD=secret \
-v mydata:/var/lib/postgresql/data \
-p 5432:5432 \
postgres:16
- -v mydata:/var/lib/postgresql/data mounts the named volume into the container.
- If you remove the container, the volume remains.
Remove the container:
docker rm -f pg
Recreate it with the same volume, and your database files persist.
Inspect a volume
docker volume inspect mydata
Remove a volume
Only remove volumes you no longer need:
docker volume rm mydata
To remove unused volumes:
docker volume prune
10. Bind Mounts: Live-Editing Files from Your Host
A bind mount maps a host directory directly into the container. This is common for development because you can edit code locally and see changes immediately inside the container.
Example: Run Nginx serving local files
Create a folder:
mkdir -p site
echo "Hello from my host directory" > site/index.html
Run Nginx with a bind mount:
docker run --rm -p 8080:80 \
-v "$(pwd)/site:/usr/share/nginx/html:ro" \
nginx:1.25
- :ro makes it read-only inside the container (safer).
- Edit site/index.html on your host and refresh the browser.
Volume vs bind mount
- Volume: managed by Docker, good for persistent app data.
- Bind mount: managed by you (host filesystem), good for development workflows.
11. Networking Basics
Docker networking can be simple or advanced. For beginners, focus on:
- port publishing (-p host:container)
- container-to-container communication on a user-defined network
Default bridge network
If you run containers without specifying a network, they go on Docker’s default bridge network. Containers can reach the internet, but name-based discovery is limited.
Create a user-defined bridge network
docker network create app-net
docker network ls
Run two containers on the same network:
docker run -d --name backend --network app-net alpine:3.19 sleep 3600
docker run -it --rm --name client --network app-net alpine:3.19 sh
Inside client, you can resolve backend by name (DNS provided by Docker):
ping -c 1 backend
Exit the client:
exit
Clean up:
docker rm -f backend
docker network rm app-net
Why this matters
In real applications, you often have multiple services (web app, database, cache). A user-defined network allows them to communicate using stable service names.
12. Environment Variables and Configuration
Containers should be configurable without rebuilding images. Environment variables are a common method.
Pass environment variables at runtime
docker run --rm -e APP_MODE=production alpine:3.19 sh -lc 'echo $APP_MODE'
Use an env file
Create .env:
cat > .env <<'EOF'
APP_MODE=development
APP_DEBUG=true
EOF
Run:
docker run --rm --env-file .env alpine:3.19 sh -lc 'env | grep APP_'
Security note
Environment variables can leak through logs, process listings, or misconfiguration. For secrets in production, consider secret managers or Docker secrets (especially in orchestrators). For local learning, environment variables are fine.
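Inside your application, a common pattern is to read these variables once with safe defaults, so the same image works in every environment. A minimal Python sketch (the APP_MODE/APP_DEBUG names match the examples above):

```python
import os

def load_config(env=None):
    """Build app settings from environment variables, with defaults."""
    env = os.environ if env is None else env
    return {
        "mode": env.get("APP_MODE", "development"),
        # Environment variables are strings, so parse booleans explicitly
        "debug": env.get("APP_DEBUG", "false").lower() == "true",
    }

print(load_config({"APP_MODE": "production", "APP_DEBUG": "true"}))
# {'mode': 'production', 'debug': True}
```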
13. Docker Compose (Multi-Container Apps)
Docker Compose lets you define and run multi-container applications with a single command.
Why Compose?
Instead of running long docker run ... commands for each service, you define:
- services (containers)
- networks
- volumes
- environment variables
- ports
Then you run:
docker compose up
Example: Flask app + Redis
Create a folder:
mkdir compose-demo
cd compose-demo
Create app.py:
import os
from flask import Flask
import redis
app = Flask(__name__)
r = redis.Redis(host=os.environ.get("REDIS_HOST", "redis"), port=6379, decode_responses=True)
@app.get("/")
def home():
    count = r.incr("hits")
    return f"Hello! This page has been visited {count} times.\n"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
Create requirements.txt:
flask==3.0.0
redis==5.0.1
Create Dockerfile:
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
EXPOSE 5000
CMD ["python", "app.py"]
Create compose.yaml (Compose configuration is plain YAML):
services:
  web:
    build: .
    ports:
      - "5000:5000"
    environment:
      - REDIS_HOST=redis
    depends_on:
      - redis
  redis:
    image: redis:7-alpine
Run the stack:
docker compose up --build
Visit:
http://localhost:5000
Stop with Ctrl+C, then remove containers:
docker compose down
What Compose is doing
- Builds your web image from the Dockerfile.
- Creates a default network so web can reach redis by the hostname redis.
- Starts services in dependency order (note: depends_on does not guarantee the service is "ready," only started).
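If you need real readiness rather than just start order, give the dependency a healthcheck and use the long form of depends_on. A sketch in compose.yaml (the intervals and retry counts are illustrative):

```yaml
services:
  web:
    build: .
    ports:
      - "5000:5000"
    depends_on:
      redis:
        condition: service_healthy
  redis:
    image: redis:7-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
```

With condition: service_healthy, Compose waits until the healthcheck passes before starting web.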
14. Debugging and Troubleshooting
1) “Port already in use”
If you see an error like “bind: address already in use,” something on your host is using that port.
- Change the host port: -p 8081:80
- Or stop the conflicting service.
2) Container exits immediately
Check logs:
docker logs <container-name>
Inspect exit code:
docker ps -a
Often the main process crashed, or the container ran a short-lived command and finished.
3) “Command not found” inside container
Minimal images (Alpine, slim variants) may not include tools like bash, curl, or ping. You can:
- use sh instead of bash
- install tools temporarily (for debugging)
- use a separate “debug container” on the same network
Example debug container on the same network:
docker run -it --rm --network app-net alpine:3.19 sh
4) Inspect networking and ports
See port mappings:
docker port web
Inspect container IP and networks:
docker inspect web
5) Clean up unused resources
Remove stopped containers:
docker container prune
Remove unused images:
docker image prune
Remove everything unused (be careful):
docker system prune
15. Best Practices and Next Steps
Use small base images (but don’t sacrifice clarity)
- python:3.12-slim is often a good balance.
- alpine is smaller but can introduce compatibility issues for some libraries.
Pin versions
Prefer:
- python:3.12-slim over python:latest
- nginx:1.25 over nginx
Pinning makes builds reproducible.
Keep images lean
- Use --no-cache-dir for pip
- Remove package manager caches where relevant
- Avoid installing unnecessary tools in production images
One main process per container (usually)
A common pattern is one service per container. If you need multiple processes, consider a process manager, but understand why.
Learn the core commands well
Useful commands to practice:
docker run
docker ps
docker logs
docker exec
docker build
docker images
docker pull
docker push
docker volume ls
docker network ls
docker compose up
docker compose down
Where to go next
Once you are comfortable with basics, explore:
- multi-stage builds (smaller production images)
- healthchecks
- .dockerignore to speed up builds
- security scanning and least-privilege containers
- orchestration (Kubernetes, Docker Swarm, or managed container platforms)
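As a preview of multi-stage builds, here is a sketch that builds Python wheels in one stage and copies only the results into the runtime image (stage names and paths are illustrative):

```dockerfile
# Stage 1: build wheels (and any compilation) in a throwaway stage
FROM python:3.12-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip wheel --no-cache-dir --wheel-dir /wheels -r requirements.txt

# Stage 2: runtime image keeps only installed packages and app code
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /wheels /wheels
RUN pip install --no-cache-dir /wheels/* && rm -rf /wheels
COPY app.py .
CMD ["python", "app.py"]
```

Build tooling stays in the builder stage, so the final image ships without it.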
Quick Reference: Common Commands
# Run a container
docker run --rm -it alpine:3.19 sh
# Run in background with port mapping
docker run -d --name web -p 8080:80 nginx:1.25
# View logs
docker logs -f web
# Execute a command in a running container
docker exec -it web sh
# Build an image
docker build -t myapp:1.0 .
# List containers/images
docker ps
docker images
# Clean up
docker system prune