Intermediate Guide: Practical Strategies and Best Practices
This tutorial is an intermediate, hands-on guide to building reliable workflows in a modern software environment. It focuses on practical strategies you can apply immediately: structuring projects, using Git effectively, improving code quality, automating checks, securing secrets, optimizing performance, and operating services safely.
It is intentionally tool-agnostic in principles but uses real, widely available tools in examples (Git, Bash, Docker, CI, linters, test runners). You can adapt the same patterns to your stack.
Table of Contents
- 1. Core Mindset: Systems Over Heroics
- 2. Project Structure That Scales
- 3. Git Workflows: Branching, Commits, and Reviews
- 4. Dependency and Environment Management
- 5. Automated Quality: Formatting, Linting, Testing
- 6. CI/CD: Build Pipelines That Catch Problems Early
- 7. Secrets and Configuration: Doing It Safely
- 8. Logging, Metrics, and Tracing: Observability Basics
- 9. Performance and Reliability Techniques
- 10. Operational Best Practices: Deployments and Rollbacks
- 11. Documentation That Actually Helps
- 12. A Practical Checklist
1. Core Mindset: Systems Over Heroics
At an intermediate level, the biggest leap is shifting from “I can make it work” to “I can make it repeatably work for a team and for future me.”
Key principles
- **Optimize for repeatability**
  - If it can’t be repeated reliably, it’s not done.
  - Favor scripts and automation over manual steps.
- **Prefer small, reversible changes**
  - Smaller pull requests are easier to review and safer to deploy.
  - “Reversible” means you can roll back quickly, or toggle off via feature flags.
- **Make failures loud and early**
  - A failing test in CI is better than a bug in production.
  - Validate inputs and assumptions at boundaries.
- **Treat time as a first-class constraint**
  - Build times, test times, and deploy times impact velocity.
  - Invest in caching, incremental checks, and parallelization.
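Parallelization can start small. A minimal Bash sketch that runs two independent checks concurrently and fails if either fails (the check bodies are stand-ins for real scripts):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-ins for real checks; swap in ./scripts/lint.sh and ./scripts/test.sh.
check_lint() { sleep 1; echo "lint ok"; }
check_test() { sleep 1; echo "tests ok"; }

# Launch both in the background, then wait on each PID so a failure
# in either one aborts the script (via set -e).
check_lint & lint_pid=$!
check_test & test_pid=$!
wait "$lint_pid"
wait "$test_pid"
echo "all checks passed"   # total wall time ~1s instead of ~2s
```

The same idea scales up to splitting test suites across CI jobs.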
2. Project Structure That Scales
A clean structure reduces cognitive load, enables automation, and prevents “where does this go?” debates.
A practical baseline structure
repo/
  README.md
  docs/
  scripts/
  config/
  src/
  tests/
  .gitignore
  .editorconfig
  Makefile
Why this works:
- `scripts/` contains automation entry points (build, test, lint, release).
- `config/` holds non-secret configuration templates and examples.
- `docs/` holds deeper documentation than the README.
- `src/` and `tests/` are predictable targets for tooling.
Add a Makefile as a task gateway
Even if you don’t “love Make,” it provides a standard interface:
.PHONY: setup lint test build run

setup:
	./scripts/setup.sh

lint:
	./scripts/lint.sh

test:
	./scripts/test.sh

build:
	./scripts/build.sh

run:
	./scripts/run.sh
Now everyone can run:
make setup
make lint
make test
make build
This reduces “tribal knowledge” and makes CI simpler.
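The scripts behind these targets can stay small. A minimal sketch of what `scripts/lint.sh` might look like (the specific tools are placeholders):

```shell
#!/usr/bin/env bash
# scripts/lint.sh — single entry point for every lint check.
set -euo pipefail

main() {
  echo "Running linters..."
  # Replace these comments with your real tools, e.g.:
  #   npx eslint "src/**/*.{js,ts}"
  #   npx prettier -c .
  echo "lint passed"
}

main "$@"
```

`set -euo pipefail` makes the script exit on the first failing command, which is exactly what you want in CI.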
3. Git Workflows: Branching, Commits, and Reviews
Branching strategy: keep it simple
A common intermediate pattern:
- `main` is always deployable.
- Feature work happens on short-lived branches.
- Merge via Pull Request (PR) with checks.
Create a feature branch:
git checkout main
git pull --ff-only
git checkout -b feature/add-rate-limiter
Commit messages that scale
A good commit message answers:
- What changed?
- Why was it changed?
- Any constraints or follow-ups?
Example:
git commit -m "Add token bucket rate limiter to login endpoint
Prevents brute-force attempts and reduces load during spikes.
Follow-up: tune limits after observing production traffic."
Keep history clean: interactive rebase (carefully)
Before opening a PR, you can squash/fixup local commits:
git fetch origin
git rebase -i origin/main
In the editor, change `pick` to `fixup` for minor commits.
Rule: only rewrite history on branches you own (not shared branches).
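Git can also do the fixup bookkeeping for you: `git commit --fixup <sha>` records a commit as a fix for an earlier one, and `git rebase -i --autosquash` pre-arranges the todo list so the squash happens automatically. A self-contained demo in a throwaway repo:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Throwaway repo to demonstrate fixup + autosquash.
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo base > file;  git add file; git commit -qm "Initial commit"
echo feat >> file; git add file; git commit -qm "Add feature"
target=$(git rev-parse HEAD)

# A later fix for "Add feature": record it as a fixup of that commit.
echo fix >> file; git add file
git commit -q --fixup "$target"   # subject becomes "fixup! Add feature"

# Autosquash pre-fills the todo list; `true` as editor accepts it as-is.
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash HEAD~2

git log --oneline   # two commits remain: "Add feature", "Initial commit"
```

This keeps the "only rewrite branches you own" rule intact while removing the manual todo editing.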
Code review best practices
- Small PRs: aim for < 400 lines changed when possible.
- Review checklist:
- Correctness: edge cases, error handling
- Security: input validation, auth boundaries
- Performance: unnecessary queries, N+1 patterns
- Tests: meaningful coverage, stable tests
- Operability: logging, metrics, config
Use git bisect to find regressions
When something broke and you don’t know where:
git bisect start
git bisect bad # current commit is broken
git bisect good v1.4.2 # last known good tag
Git checks out a midpoint. Test it (run unit tests or reproduce bug), then mark:
git bisect good
# or
git bisect bad
When done:
git bisect reset
This can turn hours of guessing into minutes.
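When the failure is detectable by a command's exit code, `git bisect run` automates the whole search: exit 0 marks a commit good, any non-zero exit (except 125, which means "skip") marks it bad. A self-contained demo in a throwaway repo, with a bug planted at commit 7:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Throwaway repo: 10 commits, a "bug" introduced at commit 7.
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

for i in $(seq 1 10); do
  if [ "$i" -ge 7 ]; then echo "bug $i" > app; else echo "ok $i" > app; fi
  git add app; git commit -qm "commit $i"
done

# Automate the search; the grep plays the role of ./scripts/test.sh.
git bisect start HEAD HEAD~9       # bad = HEAD, good = first commit
git bisect run grep -q ok app

# The first bad commit is recorded in refs/bisect/bad until reset.
first_bad=$(git rev-parse refs/bisect/bad)
git bisect reset >/dev/null
git log -1 --format=%s "$first_bad"   # prints "commit 7"
```

In a real project you would replace the `grep` with your test command, e.g. `git bisect run ./scripts/test.sh`.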
4. Dependency and Environment Management
Inconsistent environments cause “works on my machine” failures. The intermediate solution is to standardize.
Pin versions and make upgrades intentional
- Use lockfiles (e.g., `package-lock.json`, `poetry.lock`, `Cargo.lock`).
- Pin runtime versions with tools like `asdf`, `nvm`, or language-specific version files.
Example with asdf:
asdf plugin add nodejs
asdf install nodejs 20.11.1
asdf local nodejs 20.11.1
node -v
Commit .tool-versions:
nodejs 20.11.1
python 3.12.1
Use containers for parity (when appropriate)
A minimal Dockerfile example:
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
CMD ["npm", "start"]
Build and run:
docker build -t myapp:dev .
docker run --rm -p 3000:3000 myapp:dev
Best practice: keep images small and deterministic:
- Prefer `npm ci` over `npm install` for CI and containers.
- Use multi-stage builds for compiled languages.
5. Automated Quality: Formatting, Linting, Testing
Automation prevents style debates and catches bugs early.
Formatting: make it non-optional
For JavaScript/TypeScript with Prettier:
npm install --save-dev prettier
npx prettier -w "src/**/*.{js,ts,json,md}"
Add a script in package.json:
{
  "scripts": {
    "format": "prettier -w .",
    "format:check": "prettier -c ."
  }
}
Linting: enforce correctness rules
Example with ESLint:
npm install --save-dev eslint
npx eslint --init
npx eslint "src/**/*.{js,ts}"
Testing: prioritize fast feedback
A balanced testing pyramid:
- Unit tests: fast, lots of them
- Integration tests: fewer, cover boundaries (DB, network)
- End-to-end tests: minimal, cover critical flows
Example commands:
- Node/Jest:
npm install --save-dev jest
npx jest --runInBand
- Python/pytest:
python -m venv .venv
source .venv/bin/activate
pip install pytest
pytest -q
Pre-commit hooks: shift left
Using pre-commit (language-agnostic):
pip install pre-commit
pre-commit install
Example .pre-commit-config.yaml:
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: end-of-file-fixer
      - id: trailing-whitespace
      - id: check-merge-conflict
Run on all files:
pre-commit run --all-files
Best practice: keep hooks fast; leave heavy checks to CI.
6. CI/CD: Build Pipelines That Catch Problems Early
A good pipeline:
- runs on every PR
- is deterministic
- caches dependencies
- blocks merges on failure
Example: GitHub Actions CI (real, minimal)
Create .github/workflows/ci.yml:
name: CI

on:
  pull_request:
  push:
    branches: [ "main" ]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: "npm"
      - name: Install
        run: npm ci
      - name: Lint
        run: npm run lint --if-present
      - name: Test
        run: npm test --if-present
      - name: Format check
        run: npm run format:check --if-present
CI best practices
- Fail fast: run lint/format before long tests.
- Cache dependencies: speeds up builds dramatically.
- Artifact uploads: store logs, coverage reports, build outputs.
- Parallelize: split unit/integration tests into separate jobs if needed.
CD: deploy with guardrails
Intermediate CD patterns:
- Deploy on merge to `main`.
- Use environment protection rules (manual approval for production).
- Use canary releases or phased rollouts if traffic is high.
Even if you don’t fully automate production deployment yet, automate:
- build
- test
- packaging
- staging deploy
7. Secrets and Configuration: Doing It Safely
Separate config from code
- Code is versioned.
- Config varies by environment.
- Secrets must never be committed.
Use environment variables for secrets:
export DATABASE_URL="postgres://user:pass@host:5432/db"
export API_KEY="..."
Use .env files locally (but never commit secrets)
Example .env:
DATABASE_URL=postgres://localhost:5432/mydb
API_KEY=dev-only-key
Add to .gitignore:
echo ".env" >> .gitignore
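To load a `.env` file into your current shell, a common pattern is `set -a`, which exports every variable assigned while it is active:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Work in a temp dir for the demo; normally .env already sits at the repo root.
dir=$(mktemp -d); cd "$dir"
cat > .env <<'EOF'
DATABASE_URL=postgres://localhost:5432/mydb
API_KEY=dev-only-key
EOF

# `set -a` marks every subsequent assignment for export; `set +a` turns it off.
set -a
source ./.env
set +a

echo "$DATABASE_URL"   # postgres://localhost:5432/mydb
```

Many frameworks (dotenv libraries, docker compose) do the equivalent for you; the shell version is handy for scripts and local debugging.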
Detect leaked secrets
Use gitleaks:
brew install gitleaks
gitleaks detect --source . --no-git
Or scan the repository history:
gitleaks detect --source . --redact
If a secret is leaked:
- Rotate the secret immediately.
- Remove it from history if necessary (e.g., with `git filter-repo`).
- Add scanning to CI to prevent recurrence.
8. Logging, Metrics, and Tracing: Observability Basics
When systems grow, debugging becomes less about “reproducing locally” and more about “reading signals.”
Logging best practices
- Use structured logs (JSON) when possible.
- Include correlation IDs (request IDs).
- Log at appropriate levels:
  - INFO: high-level events (request start/stop, job completed)
  - WARN: unexpected but handled
  - ERROR: failed operations requiring attention
Example: include a request ID in logs (conceptually):
# Generate an ID in a shell script (example)
REQ_ID="$(uuidgen)"
echo "{\"level\":\"info\",\"msg\":\"request received\",\"request_id\":\"$REQ_ID\"}"
Metrics: measure what matters
Start with:
- request rate (RPS)
- error rate
- latency (p50/p95/p99)
- saturation (CPU, memory, DB connections)
If you run containers, basic resource visibility:
docker stats
On Linux hosts:
top
free -m
iostat -xz 1
Tracing: follow a request across services
Distributed tracing becomes essential with microservices. Even in a monolith, internal spans can reveal slow operations.
Intermediate practice:
- instrument external calls (DB, HTTP)
- propagate trace IDs through headers
- sample intelligently (not every request at high volume)
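Trace propagation is mostly header forwarding. Taking the W3C Trace Context format as an example, a service generates (or forwards) a `traceparent` header; the endpoint in the commented call is hypothetical:

```shell
#!/usr/bin/env bash
set -euo pipefail

# W3C traceparent: version-traceid-spanid-flags, all lowercase hex.
trace_id=$(openssl rand -hex 16)   # 32 hex chars
span_id=$(openssl rand -hex 8)     # 16 hex chars
traceparent="00-${trace_id}-${span_id}-01"
echo "traceparent: $traceparent"

# A downstream HTTP call forwards the header unchanged (hypothetical endpoint):
# curl -H "traceparent: $traceparent" http://localhost:3000/api/items
```

Tracing libraries (OpenTelemetry and friends) automate this, but knowing the wire format makes their behavior much easier to debug.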
9. Performance and Reliability Techniques
Performance: fix the biggest bottleneck first
Use profiling and measurement rather than assumptions.
Measure API latency
If you have an HTTP endpoint:
curl -o /dev/null -s -w "time_total=%{time_total}\nhttp_code=%{http_code}\n" http://localhost:3000/health
Load testing with hey:
brew install hey
hey -n 2000 -c 50 http://localhost:3000/api/items
Common performance wins
- Add caching (in-memory, Redis) for expensive repeated reads.
- Reduce chatty DB patterns (batch queries, avoid N+1).
- Use pagination and limits by default.
- Compress responses (gzip/brotli) if appropriate.
- Avoid unnecessary work on hot paths (move to background jobs).
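Compression wins are easy to estimate offline: running gzip over a representative payload shows roughly the ratio you would get from enabling it at the HTTP layer. A small sketch:

```shell
#!/usr/bin/env bash
set -euo pipefail

# A repetitive JSON-ish payload, the kind of API response that compresses well.
payload=$(for i in $(seq 1 200); do printf '{"id":%d,"status":"ok"},' "$i"; done)

raw_bytes=$(printf '%s' "$payload" | wc -c | tr -d ' ')
gz_bytes=$(printf '%s' "$payload" | gzip -9 | wc -c | tr -d ' ')
echo "raw=${raw_bytes} gzipped=${gz_bytes}"
```

If the ratio is small for your real payloads (e.g., already-compressed images), the CPU cost of compressing may not be worth it.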
Reliability: design for failure
Failures are inevitable: networks fail, disks fill, dependencies time out.
Timeouts and retries (with backoff)
Rules of thumb:
- Always set timeouts on outbound calls.
- Retry only idempotent operations (GET, safe PUT) unless you have deduplication.
- Use exponential backoff with jitter.
Even at the shell level, you can demonstrate retry logic:
for i in 1 2 3 4 5; do
  curl -fsS --max-time 2 http://localhost:3000/health && break
  # Exponential backoff (1, 2, 4, 8, 16s) plus up to 1s of random jitter
  sleep $(( (1 << (i - 1)) + RANDOM % 2 ))
done
Circuit breakers and bulkheads
- Circuit breaker: stop calling a failing dependency temporarily.
- Bulkhead: isolate resources (thread pools, connection pools) so one failure doesn’t take down everything.
Intermediate implementation often starts with:
- separate connection pools per dependency
- queue limits
- worker concurrency caps
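A worker concurrency cap doesn't require a framework; even `xargs -P` enforces one. Here at most 4 jobs run at a time regardless of how much work is queued (the `sleep` stands in for real work):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Process 20 work items with a hard cap of 4 concurrent workers.
# Replace `sleep 0.1` with real work (an API call, a file conversion, ...).
results=$(seq 1 20 | xargs -P 4 -I {} sh -c 'sleep 0.1; echo "done {}"')
echo "$results" | wc -l   # 20 lines, one per item
```

The same cap-the-pool idea applies to thread pools, DB connection pools, and queue consumers.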
10. Operational Best Practices: Deployments and Rollbacks
Deployment strategies
- **Rolling deployment**
  - Replace instances gradually.
  - Works well with stateless services.
- **Blue/green**
  - Deploy to “green,” switch traffic from “blue.”
  - Fast rollback: switch back.
- **Canary**
  - Send a small percentage of traffic to the new version.
  - Increase gradually if healthy.
Health checks and readiness
A service should expose:
- liveness: “process is alive”
- readiness: “ready to receive traffic” (DB connected, migrations done, caches warmed)
You can simulate a readiness check with curl:
curl -f http://localhost:3000/ready
Database migrations: treat as production code
Best practices:
- Back up before risky migrations.
- Prefer additive changes (add columns) over destructive changes (drop columns) in the same deploy.
- Use expand/contract:
- Expand schema (add new column/table)
- Deploy code that writes both old/new
- Backfill data
- Switch reads to new
- Contract (remove old)
Rollback plan
Before deploying, explicitly answer:
- What is the rollback trigger? (error rate, latency threshold)
- How do we roll back? (previous image tag, previous release)
- Are migrations reversible? If not, do we need a forward fix?
A practical container rollback pattern is “deploy by immutable tag”:
docker pull myregistry/myapp:1.8.3
docker pull myregistry/myapp:1.8.4
If 1.8.4 is bad, redeploy 1.8.3 quickly.
11. Documentation That Actually Helps
Documentation is a force multiplier when it is:
- discoverable
- current
- task-oriented
Minimum docs that pay off
- README.md: what it is, how to run, how to test, how to deploy (high-level)
- docs/architecture.md: key components, data flow, dependencies
- docs/runbook.md: operational procedures, common incidents, rollback steps
Example: a runbook outline
## Common alerts
### High error rate
**Symptoms:** 5xx spike, failed requests
**Immediate actions:**
1. Check deploy status and recent changes
2. Inspect logs for top error messages
3. Roll back if error rate > 5% for 10 minutes
**Commands:**
- View recent logs: `kubectl logs deploy/myapp -n prod --tail=200`
- Roll back: `kubectl rollout undo deploy/myapp -n prod`
(If you are not using Kubernetes, replace commands with your platform equivalents.)
12. A Practical Checklist
Use this as a “definition of done” for intermediate-quality work.
Code and design
- Clear boundaries and responsibilities
- Input validation at edges
- Errors handled intentionally (not swallowed)
- Performance impact considered on hot paths
Git and collaboration
- Small PR with clear description and screenshots/logs when relevant
- Commit messages explain why
- No secrets committed; `.env` ignored
Quality automation
- Formatter configured and enforced
- Linter configured and enforced
- Unit tests for core logic
- Integration tests for critical boundaries
- Pre-commit hooks (fast checks)
CI/CD
- CI runs on PR and blocks merges on failure
- Dependency caching enabled
- Artifacts/logs accessible for debugging
- Deployment process documented and repeatable
Operability
- Structured logs with request IDs
- Basic metrics: RPS, error rate, latency
- Health checks (liveness/readiness)
- Rollback plan exists and is tested
Closing Notes: How to Apply This Without Overengineering
The goal is not to adopt every tool at once. The goal is to create a workflow where:
- changes are small and safe,
- quality is automated,
- deployments are repeatable,
- incidents are diagnosable.
A recommended adoption order:
1. Add `Makefile` + `scripts/` entry points
2. Add formatter + linter + unit tests
3. Add CI that runs those checks
4. Add secret scanning + `.env` hygiene
5. Add basic logging/metrics and a runbook
6. Improve deployment strategy and rollback confidence
If you share your tech stack (language, framework, deployment target), I can adapt the commands and examples to your exact environment while keeping the same best-practice structure.