Intermediate Guide: Practical Tips, Best Practices, and Next Steps
This tutorial is written for people who already know the basics of working in a terminal, using Git, and shipping small projects—but want to level up into repeatable, professional workflows. It focuses on practical improvements: structuring projects, automating checks, improving reliability, and preparing for deployment and collaboration.
Table of Contents
- Mindset: From “It Works” to “It Keeps Working”
- Project Structure That Scales
- Environment Management and Reproducibility
- Git Workflows Beyond the Basics
- Testing: Strategy, Coverage, and Fast Feedback
- Linting, Formatting, and Static Analysis
- Logging, Configuration, and Secrets
- Documentation That Actually Helps
- CI/CD: Automate Quality and Delivery
- Containers and Local Parity
- Performance and Profiling
- Security Basics for Everyday Work
- Release Management and Versioning
- Next Steps: A Practical Growth Plan
Mindset: From “It Works” to “It Keeps Working”
Intermediate work is less about clever code and more about systems:
- Reproducibility: Can someone else run your project tomorrow and get the same result?
- Observability: When something fails, can you quickly find out why?
- Automation: Are checks enforced automatically (CI), not by memory?
- Maintainability: Can you change one part without breaking everything else?
A useful mental model is to treat your project like a product, even if it’s “just a script.” The goal is to reduce the number of things you must remember.
Project Structure That Scales
A common intermediate pain: a project starts as a single file, then grows into a mess. A scalable structure:
- separates source code from tests
- isolates configuration
- keeps scripts and tooling organized
- supports packaging and deployment
Example: A Python service layout (adaptable to other languages)
myapp/
README.md
pyproject.toml
src/
myapp/
__init__.py
main.py
config.py
routes.py
tests/
test_routes.py
scripts/
dev.sh
lint.sh
.gitignore
.env.example
Why this helps:
- src/ layout prevents accidental imports from the working directory.
- tests/ is clearly separate and easy for CI to discover.
- scripts/ keeps repeatable commands in version control.
Enforce structure with simple checks
You can add a minimal “sanity check” script:
#!/usr/bin/env bash
set -euo pipefail
test -f pyproject.toml
test -d src
test -d tests
echo "Project structure OK"
Run it in CI so structure doesn’t regress.
Environment Management and Reproducibility
If your project works only on your machine, it’s not done. Reproducibility means:
- pinned dependencies
- consistent runtime versions
- documented setup steps
- isolated environments
Pin dependencies (Python example)
Use pip-tools or lock files. With uv (modern, fast), you can do:
uv init
uv add requests
uv lock
Or with Poetry:
poetry init
poetry add requests
poetry lock
Best practice: commit the lock file (uv.lock or poetry.lock) so CI and teammates install the same versions.
Record runtime versions
Create a .tool-versions (asdf) or document versions in README.md.
Example .tool-versions:
python 3.12.2
nodejs 20.11.1
Use .env.example and never commit real secrets
Create:
cp .env.example .env
.env.example might contain:
APP_ENV=development
DATABASE_URL=postgresql://user:pass@localhost:5432/mydb
LOG_LEVEL=INFO
Then add .env to .gitignore.
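For illustration, here is a minimal sketch of what .env loading does under the hood. The parsing rules are deliberately simplified; real projects typically use a library such as python-dotenv, which handles quoting, multiline values, and other edge cases:

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Minimal .env loader: KEY=VALUE lines, skipping comments and blanks."""
    try:
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                # Don't overwrite variables already set in the real environment:
                # the environment should win over the file.
                os.environ.setdefault(key.strip(), value.strip())
    except FileNotFoundError:
        pass  # No .env file is fine; rely on the real environment.
```

The setdefault call encodes a useful rule: explicit environment variables always beat file contents.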
Git Workflows Beyond the Basics
Intermediate Git is about clean history, safe collaboration, and recoverability.
Branch naming and small PRs
Use consistent branch names:
feature/add-login-rate-limit
fix/null-pointer-on-startup
chore/update-deps
Keep branches focused; small PRs are easier to review and less risky.
Commit messages that age well
A useful format:
type(scope): summary
Why:
- reason
What:
- key changes
Example:
fix(auth): prevent token reuse after logout
Why:
- tokens remained valid after logout in some flows
What:
- invalidate refresh tokens in DB
- add regression test
Rebase vs merge (practical guidance)
- Rebase local work to keep a linear history before opening a PR.
- Merge into main via PR to preserve review context.
Rebase your branch onto main:
git fetch origin
git rebase origin/main
Resolve conflicts, then:
git push --force-with-lease
Use --force-with-lease instead of --force to avoid overwriting others’ work.
Useful Git commands you should be comfortable with
Inspect changes:
git diff
git diff --staged
Interactive staging (great for clean commits):
git add -p
Find the commit that introduced a bug:
git bisect start
git bisect bad
git bisect good <known_good_commit>
# test each step, then mark:
git bisect good
git bisect bad
git bisect reset
Undo safely:
git revert <commit_sha>
Use revert for shared branches; it creates a new commit that undoes changes.
Testing: Strategy, Coverage, and Fast Feedback
Testing at an intermediate level is about selecting the right kinds of tests and making them fast enough to run constantly.
Test pyramid (practical version)
- Unit tests: fast, isolated, many
- Integration tests: fewer, hit real components (DB, filesystem)
- End-to-end tests: minimal, cover critical user flows
If your test suite is slow, people stop running it. Optimize for speed and reliability.
Example commands (Python + pytest)
Install:
python -m pip install -U pytest pytest-cov
Run tests:
pytest
Run with coverage:
pytest --cov=src --cov-report=term-missing
Fail if coverage drops below a threshold:
pytest --cov=src --cov-fail-under=85
Make tests deterministic
Common sources of flaky tests:
- time (datetime.now())
- randomness
- network calls
- concurrency
Fixes:
- inject clocks (pass now_fn into functions)
- seed randomness
- mock external APIs
- avoid real sleeps; use timeouts and polling
Example: seeding randomness in Python:
import random
random.seed(0)
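The clock-injection fix can be sketched like this. token_expired, issued_at, and now_fn are illustrative names, not from any specific library:

```python
from datetime import datetime, timedelta, timezone

def token_expired(
    issued_at: datetime,
    ttl: timedelta,
    now_fn=lambda: datetime.now(timezone.utc),
) -> bool:
    """Accepts a clock function so tests can pin 'now' to a fixed instant."""
    return now_fn() - issued_at >= ttl

# In a test, pass a frozen clock instead of relying on real time:
fixed_now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
issued = fixed_now - timedelta(minutes=30)
assert token_expired(issued, timedelta(minutes=15), now_fn=lambda: fixed_now)
assert not token_expired(issued, timedelta(hours=1), now_fn=lambda: fixed_now)
```

Because the test controls the clock, it passes identically at any time of day, on any machine.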
Use test selection for speed
Run only tests matching a keyword:
pytest -k "auth and not slow"
Mark slow tests:
import pytest
@pytest.mark.slow
def test_big_import():
...
Then run fast tests by default:
pytest -m "not slow"
Linting, Formatting, and Static Analysis
Formatting and linting reduce cognitive load and prevent entire categories of bugs.
Why formatting matters
When code style is automatic:
- diffs are smaller
- reviews focus on logic
- fewer “style” debates
Example: Python (ruff + black)
Install:
python -m pip install -U ruff black
Format:
black src tests
Lint:
ruff check src tests
Auto-fix what’s safe:
ruff check --fix src tests
Example: JavaScript/TypeScript (eslint + prettier)
Install:
npm install --save-dev eslint prettier
Run:
npx eslint .
npx prettier -w .
Pre-commit hooks (enforce quality before pushing)
Install pre-commit:
python -m pip install -U pre-commit
pre-commit install
A minimal .pre-commit-config.yaml:
repos:
- repo: https://github.com/psf/black
rev: 24.4.2
hooks:
- id: black
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.6.9
hooks:
- id: ruff
args: ["--fix"]
Run on all files:
pre-commit run --all-files
Logging, Configuration, and Secrets
Intermediate systems fail in production, and you need visibility. Logging and config management are foundational.
Logging: what to log (and what not to)
Log:
- request IDs / correlation IDs
- key state transitions
- warnings when assumptions are violated
- external API failures (with status codes)
Don’t log:
- passwords
- access tokens
- full credit card numbers
- sensitive personal data
Structured logs beat plain text
Instead of:
User 123 failed to login
Prefer structured:
{"event":"login_failed","user_id":123,"reason":"bad_password"}
Many platforms (ELK, Datadog, CloudWatch) handle JSON logs well.
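As a sketch, JSON logs can be produced with the standard logging module and a small custom formatter. In practice, libraries like structlog or python-json-logger are more robust; the "fields" attribute below is a hand-rolled convention, not a logging built-in:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON object per line."""
    def format(self, record: logging.LogRecord) -> str:
        payload = {"level": record.levelname, "event": record.getMessage()}
        # Pick up structured fields attached via logging's `extra=` mechanism.
        payload.update(getattr(record, "fields", {}))
        return json.dumps(payload)

logger = logging.getLogger("myapp")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Emits: {"level": "INFO", "event": "login_failed", "user_id": 123, "reason": "bad_password"}
logger.info("login_failed", extra={"fields": {"user_id": 123, "reason": "bad_password"}})
```

Keeping the event name machine-friendly ("login_failed" rather than a sentence) makes the logs easy to filter and aggregate.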
Configuration layering
A robust approach:
- defaults in code
- config file (optional)
- environment variables override everything
Example (shell):
export LOG_LEVEL=DEBUG
export DATABASE_URL="postgresql://..."
Secrets management
At minimum:
- store secrets in environment variables in production
- use a secret manager if available (AWS Secrets Manager, GCP Secret Manager, Vault)
- rotate secrets periodically
If you must store local secrets, use tooling like pass, 1Password, or gopass, not plaintext files.
Documentation That Actually Helps
Intermediate documentation is less about long explanations and more about operational clarity.
A README that’s useful
Include:
- what the project does
- prerequisites
- setup steps
- common commands
- troubleshooting
- how to run tests and lint
Example command section:
# install
uv sync
# run locally
uv run python -m myapp.main
# test
uv run pytest
# lint/format
uv run ruff check src tests
uv run black src tests
Add a “Make it easy” command layer
Even if you don’t use Make, a single entrypoint reduces friction.
Example Makefile:
.PHONY: test lint fmt run
test:
pytest
lint:
ruff check src tests
fmt:
black src tests
run:
python -m myapp.main
Run:
make test
make lint
make fmt
make run
CI/CD: Automate Quality and Delivery
CI should be the place where your standards are enforced consistently.
What to run in CI
At minimum:
- install dependencies (from lock)
- lint + format check
- unit tests
- coverage threshold
- build artifacts (if applicable)
Example: GitHub Actions workflow
Create .github/workflows/ci.yml:
name: CI
on:
push:
pull_request:
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.12"
- name: Install deps
run: |
python -m pip install -U pip
python -m pip install -U ruff black pytest pytest-cov
python -m pip install -e .
- name: Lint
run: ruff check src tests
- name: Format check
run: black --check src tests
- name: Test
run: pytest --cov=src --cov-fail-under=85
Best practices:
- Keep CI fast (cache dependencies if needed).
- Fail early (lint before tests).
- Make CI the same commands developers run locally.
CD (delivery) principles
Even if you’re not deploying automatically yet:
- produce a versioned artifact (package, container image)
- store it in a registry
- deploy from artifacts, not from random commits on a machine
Containers and Local Parity
Containers help ensure your local environment matches production, especially for services with dependencies (DB, cache, queues).
Docker basics you should know
Build an image:
docker build -t myapp:dev .
Run it:
docker run --rm -p 8000:8000 myapp:dev
List containers:
docker ps
Stop a container:
docker stop <container_id>
Example Dockerfile (Python web app)
FROM python:3.12-slim
WORKDIR /app
COPY pyproject.toml /app/
COPY src /app/src
RUN pip install -U pip && pip install -e .
EXPOSE 8000
CMD ["python", "-m", "myapp.main"]
Docker Compose for dependencies
Example docker-compose.yml for Postgres:
services:
db:
image: postgres:16
environment:
POSTGRES_PASSWORD: password
POSTGRES_USER: user
POSTGRES_DB: mydb
ports:
- "5432:5432"
Start:
docker compose up -d
View logs:
docker compose logs -f
Stop:
docker compose down
Parity tip: If production uses Postgres 16, don’t develop on Postgres 12 “because it’s installed.”
Performance and Profiling
Intermediate performance work is about measuring before optimizing.
Establish a baseline
Time a command:
time python -m myapp.main
Load test an HTTP endpoint with curl in a loop (simple, not perfect):
for i in $(seq 1 50); do
curl -s -o /dev/null -w "%{time_total}\n" http://localhost:8000/health
done
Profiling (Python example)
Use cProfile:
python -m cProfile -o profile.out -m myapp.main
python -c "import pstats; p=pstats.Stats('profile.out'); p.sort_stats('cumtime').print_stats(30)"
What to look for:
- repeated expensive calls
- unnecessary I/O
- N+1 database queries
- inefficient serialization
Caching as a strategy (use carefully)
Cache only when:
- you know what’s slow
- the data is safe to reuse
- you have a clear invalidation strategy
A cache without invalidation is a bug factory.
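A minimal sketch of a TTL cache with explicit invalidation (the class and method names are illustrative; real services often reach for Redis or functools-based caches instead):

```python
import time

class TTLCache:
    """Tiny TTL cache: every entry expires, so stale data has a bounded lifetime.
    The clock is injectable, following the determinism advice in the testing section."""
    def __init__(self, ttl_seconds: float, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._store: dict = {}

    def set(self, key, value):
        self._store[key] = (value, self.clock())

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if self.clock() - stored_at >= self.ttl:
            del self._store[key]  # Expired: invalidate lazily on read.
            return None
        return value

    def invalidate(self, key):
        # Explicit invalidation for when the underlying data changes.
        self._store.pop(key, None)
```

The TTL bounds how stale a value can get even if you forget to invalidate; explicit invalidate() covers the cases where you know the data changed.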
Security Basics for Everyday Work
Security isn’t a separate phase. It’s a set of habits.
Dependency scanning
For Node:
npm audit
npm audit fix
For Python (pip-audit):
python -m pip install -U pip-audit
pip-audit
Principle of least privilege
- DB users should have only required permissions
- API keys should be scoped
- CI tokens should be minimal and rotated
Validate inputs and handle errors safely
- validate request payloads
- avoid leaking stack traces to users
- return generic errors externally, detailed logs internally
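These three rules can be sketched as a small handler. The payload shape, the amount field, and the status codes are illustrative assumptions, not from any specific framework:

```python
import logging
import uuid

logger = logging.getLogger("myapp")

def handle_request(payload: dict) -> tuple[int, dict]:
    """Validate input, log details internally, return only generic errors externally."""
    request_id = str(uuid.uuid4())
    # Validate the payload before doing any work.
    amount = payload.get("amount")
    if not isinstance(amount, (int, float)) or amount <= 0:
        return 400, {"error": "invalid request", "request_id": request_id}
    try:
        return 200, {"result": amount * 2, "request_id": request_id}
    except Exception:
        # Full stack trace goes to internal logs, never to the client.
        logger.exception("request_failed", extra={"request_id": request_id})
        return 500, {"error": "internal error", "request_id": request_id}
```

Returning the request_id in every response lets users report failures you can then correlate with your detailed internal logs.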
Use HTTPS and secure defaults
Even in dev, practice secure patterns. If you terminate TLS at a proxy in prod, document it and ensure internal traffic is still protected where needed.
Release Management and Versioning
Intermediate teams benefit from predictable releases.
Semantic Versioning (SemVer)
- MAJOR: breaking changes
- MINOR: backward-compatible features
- PATCH: backward-compatible fixes
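The bump rule can be sketched as a small function (simplified: it ignores pre-release and build metadata, which real SemVer tooling handles):

```python
def bump(version: str, breaking: bool = False, feature: bool = False) -> str:
    """Apply the SemVer rule: breaking -> MAJOR, feature -> MINOR, else PATCH."""
    major, minor, patch = (int(part) for part in version.split("."))
    if breaking:
        return f"{major + 1}.0.0"   # MAJOR bump resets minor and patch
    if feature:
        return f"{major}.{minor + 1}.0"  # MINOR bump resets patch
    return f"{major}.{minor}.{patch + 1}"
```

For example, bump("1.4.0", feature=True) gives "1.5.0", while bump("1.4.0", breaking=True) gives "2.0.0".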
Tag releases in Git:
git tag -a v1.4.0 -m "Release v1.4.0"
git push origin v1.4.0
Generate changelogs from commits
If you adopt conventional commits, you can generate changelogs. Even without tooling, you can list changes between tags:
git log --oneline v1.3.0..v1.4.0
Release checklist (practical)
- CI is green
- versions bumped
- migrations reviewed (if any)
- changelog updated
- rollback plan exists
- monitoring/alerts in place
Next Steps: A Practical Growth Plan
Here’s a concrete plan to apply these ideas without getting overwhelmed.
Step 1: Standardize your local workflow (1–2 days)
- Add a Makefile or scripts/ commands
- Add formatter + linter
- Add pre-commit
- Document commands in README
Goal: one command each for test, lint, fmt, run.
Step 2: Add CI enforcement (1 day)
- Run the same commands in GitHub Actions (or your CI)
- Enforce formatting and linting
- Add coverage threshold
Goal: main branch is always shippable.
Step 3: Improve reliability (ongoing)
- Identify flaky tests and fix determinism
- Add structured logging
- Add request IDs and better error handling
Goal: failures become diagnosable, not mysterious.
Step 4: Prepare for production (1–3 days)
- Containerize (if appropriate)
- Use environment-based configuration
- Create .env.example
- Plan secrets management
Goal: “deploy” becomes a repeatable process.
Step 5: Level up collaboration (ongoing)
- Adopt a branching strategy
- Keep PRs small and reviewable
- Use git bisect, revert, and --force-with-lease responsibly
Goal: changes are safer, faster, and easier to review.
Closing Notes
Intermediate practice is about building guardrails that prevent avoidable mistakes and reduce time spent debugging. If you implement only a few things from this guide, prioritize:
- automated formatting + linting
- reliable tests with fast feedback
- CI enforcing the same checks
- reproducible environments and documented commands