
Intermediate Guide: Practical Strategies and Best Practices



This tutorial is an intermediate, hands-on guide to building reliable workflows in a modern software environment. It focuses on practical strategies you can apply immediately: structuring projects, using Git effectively, improving code quality, automating checks, securing secrets, optimizing performance, and operating services safely.

It is intentionally tool-agnostic in principles but uses real, widely available tools in examples (Git, Bash, Docker, CI, linters, test runners). You can adapt the same patterns to your stack.


Table of Contents

  1. Core Mindset: Systems Over Heroics
  2. Project Structure That Scales
  3. Git Workflows: Branching, Commits, and Reviews
  4. Dependency and Environment Management
  5. Automated Quality: Formatting, Linting, Testing
  6. CI/CD: Build Pipelines That Catch Problems Early
  7. Secrets and Configuration: Doing It Safely
  8. Logging, Metrics, and Tracing: Observability Basics
  9. Performance and Reliability Techniques
  10. Operational Best Practices: Deployments and Rollbacks
  11. Documentation That Actually Helps
  12. A Practical Checklist


1. Core Mindset: Systems Over Heroics

At an intermediate level, the biggest leap is shifting from “I can make it work” to “I can make it repeatably work for a team and for future me.”

Key principles

  1. Optimize for repeatability

    • If it can’t be repeated reliably, it’s not done.
    • Favor scripts and automation over manual steps.
  2. Prefer small, reversible changes

    • Smaller pull requests are easier to review and safer to deploy.
    • “Reversible” means you can roll back quickly, or toggle off via feature flags.
  3. Make failures loud and early

    • A failing test in CI is better than a bug in production.
    • Validate inputs and assumptions at boundaries.
  4. Treat time as a first-class constraint

    • Build times, test times, and deploy times impact velocity.
    • Invest in caching, incremental checks, and parallelization.
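The "loud and early" principle applies even to small shell scripts. A minimal sketch (the `validate_env` function and the allowed environment names are illustrative, not from any real project):

```shell
#!/usr/bin/env bash
# Fail fast: exit on any error, on use of an unset variable,
# and on a failure anywhere in a pipeline.
set -euo pipefail

# Validate assumptions at the boundary, before doing any real work.
validate_env() {
  case "${1:-}" in
    dev|staging|prod) return 0 ;;
    *)
      echo "error: environment must be dev, staging, or prod (got '${1:-}')" >&2
      return 1
      ;;
  esac
}

# A valid input passes through quietly...
validate_env staging && echo "staging: ok"

# ...and an invalid one is rejected before any work happens.
if ! validate_env banana 2>/dev/null; then
  echo "banana: rejected early"
fi
```

Failing at the first bad assumption is almost always cheaper than failing halfway through a deploy.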

2. Project Structure That Scales

A clean structure reduces cognitive load, enables automation, and prevents “where does this go?” debates.

A practical baseline structure

repo/
  README.md
  docs/
  scripts/
  config/
  src/
  tests/
  .gitignore
  .editorconfig
  Makefile

Why this works:

  • scripts/ gives every task a single, versioned home, so the Makefile and CI can call the same entry points.
  • src/ and tests/ stay separate, which keeps tooling (coverage, linting) easy to scope.
  • docs/ and config/ make it obvious where explanations and settings belong, which ends most "where does this go?" debates.

Add a Makefile as a task gateway

Even if you don’t “love Make,” it provides a standard interface:

.PHONY: setup lint test build run

setup:
	./scripts/setup.sh

lint:
	./scripts/lint.sh

test:
	./scripts/test.sh

build:
	./scripts/build.sh

run:
	./scripts/run.sh

Now everyone can run:

make setup
make lint
make test
make build

This reduces “tribal knowledge” and makes CI simpler.


3. Git Workflows: Branching, Commits, and Reviews

Branching strategy: keep it simple

A common intermediate pattern is trunk-based development with short-lived branches:

  • main is always releasable.
  • Each change lives on a small feature branch, reviewed via pull request.
  • Branches merge back within days and are then deleted.

Create a feature branch:

git checkout main
git pull --ff-only
git checkout -b feature/add-rate-limiter

Commit messages that scale

A good commit message answers:

  • What changed? (summary line)
  • Why did it change? (body)
  • What follow-up, if any, remains?

Example:

git commit -m "Add token bucket rate limiter to login endpoint

Prevents brute-force attempts and reduces load during spikes.
Follow-up: tune limits after observing production traffic."

Keep history clean: interactive rebase (carefully)

Before opening a PR, you can squash/fixup local commits:

git fetch origin
git rebase -i origin/main

In the editor, change pick to fixup for minor commits.

Rule: only rewrite history on branches you own (not shared branches).

Code review best practices

  • Keep PRs small enough to review in one sitting.
  • Review for design and correctness; let formatters and linters handle style.
  • Ask questions and explain the "why" behind requests instead of issuing commands.
  • Respond promptly: review latency compounds into delivery latency.

Use git bisect to find regressions

When something broke and you don’t know where:

git bisect start
git bisect bad                # current commit is broken
git bisect good v1.4.2         # last known good tag

Git checks out a midpoint. Test it (run unit tests or reproduce the bug), then mark:

git bisect good
# or
git bisect bad

When done:

git bisect reset

This can turn hours of guessing into minutes.
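When the failure is detectable by a command, the good/bad marking loop can be automated with `git bisect run`, which executes your test at each step and marks commits from its exit code. A self-contained sketch that builds a throwaway repo, where the "bug" is simply that a file named `broken.txt` exists:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Throwaway repo: several good commits with one bad commit in the middle.
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.name demo
git config user.email demo@example.com

echo 1 > app.txt; git add app.txt; git commit -qm "good: initial"
git tag good
for i in 2 3; do echo "$i" >> app.txt; git commit -qam "good: change $i"; done
touch broken.txt; git add broken.txt; git commit -qm "introduce bug"
for i in 4 5; do echo "$i" >> app.txt; git commit -qam "good: change $i"; done

# Bisect between HEAD (bad) and the last known good tag. git runs the
# test command at each midpoint: exit 0 means good, non-zero means bad.
git bisect start HEAD good >/dev/null
git bisect run sh -c 'test ! -f broken.txt' >/dev/null

# refs/bisect/bad now points at the first bad commit.
first_bad_subject="$(git show -s --format=%s refs/bisect/bad)"
echo "first bad commit: $first_bad_subject"
git bisect reset >/dev/null
```

The same pattern works with `git bisect run make test` or any script that exits non-zero on failure.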


4. Dependency and Environment Management

Inconsistent environments cause “works on my machine” failures. The intermediate solution is to standardize.

Pin versions and make upgrades intentional

Example with asdf:

asdf plugin add nodejs
asdf install nodejs 20.11.1
asdf local nodejs 20.11.1
node -v

Commit .tool-versions:

nodejs 20.11.1
python 3.12.1

Use containers for parity (when appropriate)

A minimal Dockerfile example:

FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
CMD ["npm", "start"]

Build and run:

docker build -t myapp:dev .
docker run --rm -p 3000:3000 myapp:dev

Best practice: keep images small and deterministic:

  • Use slim or distroless base images and pin their tags.
  • Install with npm ci (not npm install) so the lockfile is the source of truth.
  • Add a .dockerignore so node_modules, .git, and .env never enter the build context.
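Applied to the Dockerfile above, these practices might look like the following sketch (paths and the base tag are illustrative; adapt them to your project):

```dockerfile
# Pin the base image tag (or better, a digest) so builds are reproducible.
FROM node:20-slim
WORKDIR /app
ENV NODE_ENV=production

# Copy the manifest and lockfile first so the dependency layer is
# cached between builds when only source code changes.
COPY package*.json ./

# npm ci installs exactly what the lockfile says; --omit=dev keeps
# development-only dependencies out of the runtime image.
RUN npm ci --omit=dev

COPY . .

# Don't run the service as root inside the container.
USER node
CMD ["npm", "start"]
```

Pair this with a .dockerignore listing node_modules, .git, and .env so local artifacts and secrets never reach the image.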


5. Automated Quality: Formatting, Linting, Testing

Automation prevents style debates and catches bugs early.

Formatting: make it non-optional

For JavaScript/TypeScript with Prettier:

npm install --save-dev prettier
npx prettier -w "src/**/*.{js,ts,json,md}"

Add a script in package.json:

{
  "scripts": {
    "format": "prettier -w .",
    "format:check": "prettier -c ."
  }
}

Linting: enforce correctness rules

Example with ESLint:

npm install --save-dev eslint
npx eslint --init
npx eslint "src/**/*.{js,ts}"

Testing: prioritize fast feedback

A balanced testing pyramid:

  • Many fast unit tests (milliseconds each) as the base.
  • Fewer integration tests covering component boundaries.
  • A handful of end-to-end tests for critical user flows.

Example commands:

npm install --save-dev jest
npx jest --runInBand
python -m venv .venv
source .venv/bin/activate
pip install pytest
pytest -q

Pre-commit hooks: shift left

Using pre-commit (language-agnostic):

pip install pre-commit
pre-commit install

Example .pre-commit-config.yaml:

repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: end-of-file-fixer
      - id: trailing-whitespace
      - id: check-merge-conflict

Run on all files:

pre-commit run --all-files

Best practice: keep hooks fast; leave heavy checks to CI.


6. CI/CD: Build Pipelines That Catch Problems Early

A good pipeline:

  • runs on every pull request and push to main,
  • runs the same commands developers run locally (make lint, make test),
  • fails fast and reports clearly which check broke.

Example: GitHub Actions CI (real, minimal)

Create .github/workflows/ci.yml:

name: CI
on:
  pull_request:
  push:
    branches: [ "main" ]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: "npm"

      - name: Install
        run: npm ci

      - name: Lint
        run: npm run lint --if-present

      - name: Test
        run: npm test --if-present

      - name: Format check
        run: npm run format:check --if-present

CI best practices

  • Cache dependencies so runs stay fast (the cache: "npm" line above does this).
  • Run the cheapest checks first so failures surface quickly.
  • Keep the pipeline under roughly ten minutes; slow CI gets skipped or ignored.
  • Use the same entry points locally and in CI so failures reproduce on a laptop.

CD: deploy with guardrails

Intermediate CD patterns:

  • Build an immutable, versioned artifact (container image, package) once in CI and promote that same artifact through environments.
  • Deploy automatically to a staging environment on merge to main.
  • Gate production behind a manual approval or a tagged release.

Even if you don’t fully automate production deployment yet, automate:

  • artifact builds and versioning,
  • staging deploys,
  • the rollback procedure.


7. Secrets and Configuration: Doing It Safely

Separate config from code

Use environment variables for secrets:

export DATABASE_URL="postgres://user:pass@host:5432/db"
export API_KEY="..."

Use .env files locally (but never commit secrets)

Example .env:

DATABASE_URL=postgres://localhost:5432/mydb
API_KEY=dev-only-key
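To load a .env file into a shell session, one common idiom is `set -a`, which exports every variable assigned while it is active. A sketch (it only handles simple KEY=value lines; most applications load .env through a library such as dotenv instead):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Scratch directory with a demo .env file.
dir="$(mktemp -d)"
cat > "$dir/.env" <<'EOF'
DATABASE_URL=postgres://localhost:5432/mydb
API_KEY=dev-only-key
EOF

# set -a marks every subsequent assignment for export, so sourcing
# the file exports each KEY=value line; set +a turns that back off.
set -a
. "$dir/.env"
set +a

echo "DATABASE_URL=$DATABASE_URL"

# Child processes now see the variables too.
sh -c 'test -n "$API_KEY"' && echo "API_KEY is exported"
```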

Add to .gitignore:

echo ".env" >> .gitignore
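You can verify the rule actually matches with `git check-ignore` (the throwaway repo below exists only for the demo):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Throwaway repo for the demo.
dir="$(mktemp -d)"
cd "$dir"
git init -q
echo ".env" >> .gitignore
echo "API_KEY=dev-only-key" > .env

# check-ignore exits 0 when the path is covered by an ignore rule.
if git check-ignore -q .env; then
  echo ".env is ignored"
fi

# git status should list .gitignore as untracked, but never .env.
git status --porcelain
```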

Detect leaked secrets

Use gitleaks:

brew install gitleaks
gitleaks detect --source . --no-git

Or scan the repository history:

gitleaks detect --source . --redact

If a secret is leaked:

  1. Rotate the secret immediately.
  2. Remove it from history if necessary (e.g., git filter-repo).
  3. Add scanning to CI to prevent recurrence.

8. Logging, Metrics, and Tracing: Observability Basics

When systems grow, debugging becomes less about “reproducing locally” and more about “reading signals.”

Logging best practices

  • Emit structured (e.g., JSON) logs so they can be queried, not just read.
  • Attach a request or correlation ID to every log line for a given request.
  • Log at boundaries: incoming requests, outgoing calls, and errors with context.
  • Never log secrets or full credentials.

Example: include a request ID in logs (conceptually):

# Generate an ID in a shell script (example)
REQ_ID="$(uuidgen)"
echo "{\"level\":\"info\",\"msg\":\"request received\",\"request_id\":\"$REQ_ID\"}"

Metrics: measure what matters

Start with:

  • latency (how long requests take, as percentiles, not just averages),
  • error rate (what fraction of requests fail),
  • throughput (requests per second),
  • saturation (how close CPU, memory, and connections are to their limits).

If you run containers, basic resource visibility:

docker stats

On Linux hosts:

top
free -m
iostat -xz 1

Tracing: follow a request across services

Distributed tracing becomes essential with microservices. Even in a monolith, internal spans can reveal slow operations.

Intermediate practice:

  • Propagate a request/correlation ID across every service call.
  • Record the start time and duration of expensive operations (database queries, external calls).
  • Adopt a standard such as OpenTelemetry once homegrown timing stops scaling.


9. Performance and Reliability Techniques

Performance: fix the biggest bottleneck first

Use profiling and measurement rather than assumptions.

Measure API latency

If you have an HTTP endpoint:

curl -o /dev/null -s -w "time_total=%{time_total}\nhttp_code=%{http_code}\n" http://localhost:3000/health

Load testing with hey:

brew install hey
hey -n 2000 -c 50 http://localhost:3000/api/items

Common performance wins

  • Cache expensive, rarely changing results.
  • Eliminate N+1 query patterns; batch database and API calls.
  • Add indexes for your hottest query predicates.
  • Move slow work into queues or background jobs instead of the request path.

Reliability: design for failure

Failures are inevitable: networks fail, disks fill, dependencies time out.

Timeouts and retries (with backoff)

Rules of thumb:

  • Every external call gets an explicit timeout; "no timeout" means "hangs forever under failure."
  • Retry only idempotent operations, a bounded number of times.
  • Back off exponentially between attempts, and add jitter so clients don't retry in lockstep.

Even at the shell level, you can demonstrate retry logic:

for i in 1 2 3 4 5; do
  curl -fsS --max-time 2 http://localhost:3000/health && break
  sleep $((i*i))
done
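The loop above can be generalized into a reusable helper with exponential backoff and jitter. A sketch (the `retry` and `flaky` names are illustrative; the demo uses a zero base delay so it finishes quickly):

```shell
#!/usr/bin/env bash
set -euo pipefail

# retry <max_attempts> <base_delay_seconds> <command...>
# Doubles the delay after each failure and adds sub-second jitter so
# many clients don't retry in lockstep.
retry() {
  max="$1"; delay="$2"; shift 2
  attempt=1
  while true; do
    "$@" && return 0
    if [ "$attempt" -ge "$max" ]; then
      echo "retry: giving up after $attempt attempts" >&2
      return 1
    fi
    sleep "$delay.$(( ${RANDOM:-7} % 10 ))"   # backoff plus jitter
    delay=$((delay * 2))                      # exponential growth
    attempt=$((attempt + 1))
  done
}

# Demo: a command that fails twice, then succeeds (state kept in a file).
count_file="$(mktemp)"
echo 0 > "$count_file"
flaky() {
  n=$(( $(cat "$count_file") + 1 ))
  echo "$n" > "$count_file"
  [ "$n" -ge 3 ]
}

retry 5 0 flaky && echo "succeeded after $(cat "$count_file") attempts"
```

Only wrap idempotent commands this way; retrying a non-idempotent operation (like a payment) multiplies the damage.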

Circuit breakers and bulkheads

Intermediate implementation often starts with:

  • tracking consecutive failures per dependency and short-circuiting calls once a threshold is crossed (the circuit breaker),
  • giving each dependency its own connection or worker pool so one slow dependency cannot exhaust shared resources (the bulkhead).
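A circuit breaker can start as nothing more than a failure counter. A toy sketch in shell to show the state transition (real implementations also add a cooldown and a half-open probe state):

```shell
#!/usr/bin/env bash
set -euo pipefail

# After FAILURE_THRESHOLD consecutive failures the breaker "opens":
# further calls are skipped instead of attempted.
FAILURE_THRESHOLD=3
failures=0
state=closed

call_with_breaker() {
  if [ "$state" = open ]; then
    echo "breaker open: skipping call" >&2
    return 1
  fi
  if "$@"; then
    failures=0          # any success resets the counter
    return 0
  fi
  failures=$((failures + 1))
  if [ "$failures" -ge "$FAILURE_THRESHOLD" ]; then
    state=open
    echo "breaker opened after $failures consecutive failures" >&2
  fi
  return 1
}

# Demo against a dependency that is down (always fails):
for i in 1 2 3 4 5; do
  call_with_breaker false || true
done
echo "state=$state"
```

Attempts 4 and 5 never reach the failing dependency, which is the point: a down dependency stops consuming your timeouts and threads.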


10. Operational Best Practices: Deployments and Rollbacks

Deployment strategies

  1. Rolling deployment

    • Replace instances gradually.
    • Works well with stateless services.
  2. Blue/green

    • Deploy to “green,” switch traffic from “blue.”
    • Fast rollback: switch back.
  3. Canary

    • Send a small percentage of traffic to new version.
    • Increase gradually if healthy.

Health checks and readiness

A service should expose:

  • a liveness endpoint (e.g., /health): "the process is up,"
  • a readiness endpoint (e.g., /ready): "the process can serve traffic" (dependencies reachable, caches warm).

You can simulate a readiness check with curl:

curl -f http://localhost:3000/ready

Database migrations: treat as production code

Best practices:

  • Version migrations in the repository and apply them with a migration tool, never by hand.
  • Make each migration backward-compatible with the currently deployed code (expand, migrate, then contract).
  • Test migrations against a production-like copy of the data.
  • Separate "run migrations" from "deploy code" so each can be rolled back independently.

Rollback plan

Before deploying, explicitly answer:

  • How do we roll back, and how long does it take?
  • Is the database migration reversible, or is rollback forward-only?
  • Who decides to roll back, and on what signal (error rate, latency, alerts)?

A practical container rollback pattern is “deploy by immutable tag”:

docker pull myregistry/myapp:1.8.3
docker pull myregistry/myapp:1.8.4

If 1.8.4 is bad, redeploy 1.8.3 quickly.


11. Documentation That Actually Helps

Documentation is a force multiplier when it is:

  • short enough to actually be read,
  • accurate (wrong docs are worse than none),
  • close to the code it describes, so it changes in the same PRs.

Minimum docs that pay off

  1. README.md: what it is, how to run, how to test, how to deploy (high-level)
  2. docs/architecture.md: key components, data flow, dependencies
  3. docs/runbook.md: operational procedures, common incidents, rollback steps

Example: a runbook outline

## Common alerts

### High error rate
**Symptoms:** 5xx spike, failed requests  
**Immediate actions:**
1. Check deploy status and recent changes
2. Inspect logs for top error messages
3. Roll back if error rate > 5% for 10 minutes

**Commands:**
- View recent logs: `kubectl logs deploy/myapp -n prod --tail=200`
- Roll back: `kubectl rollout undo deploy/myapp -n prod`

(If you are not using Kubernetes, replace commands with your platform equivalents.)


12. A Practical Checklist

Use this as a “definition of done” for intermediate-quality work.

Code and design

  • The change is small, reversible, and behind a flag if risky.
  • Inputs are validated at boundaries; failures are loud and early.

Git and collaboration

  • The branch is short-lived and history is clean before the PR opens.
  • Commit messages explain what changed and why.

Quality automation

  • Formatter, linter, and tests run locally (via make) and in pre-commit hooks.
  • Tests cover the change, with fast feedback prioritized.

CI/CD

  • CI runs the same checks on every PR and stays fast.
  • Artifacts are versioned; deploys have a known rollback path.

Operability

  • Logs carry request IDs; key metrics are visible.
  • Health and readiness endpoints exist; the runbook covers likely failures.


Closing Notes: How to Apply This Without Overengineering

The goal is not to adopt every tool at once. The goal is to create a workflow where:

  • quality checks run automatically, not from memory,
  • changes are small, reviewed, and reversible,
  • failures surface early (in CI) instead of late (in production).

A recommended adoption order:

  1. Add Makefile + scripts/ entry points
  2. Add formatter + linter + unit tests
  3. Add CI that runs those checks
  4. Add secret scanning + .env hygiene
  5. Add basic logging/metrics and a runbook
  6. Improve deployment strategy and rollback confidence

Whatever your tech stack (language, framework, deployment target), the commands and examples above translate directly; swap in your ecosystem's equivalents while keeping the same best-practice structure.