
Intermediate Guide: Practical Tips, Examples, and Best Practices

This tutorial is designed for people who already know the basics of working in a terminal, writing small scripts, using Git, and deploying simple apps—but want to become reliably productive in real projects. It focuses on practical workflows, concrete commands, and best practices you can apply immediately.


Table of Contents

  1. Mindset: What “Intermediate” Really Means
  2. A Strong Terminal Workflow
  3. Git: Practical Branching, Debugging, and Recovery
  4. Dependency Management and Reproducible Environments
  5. Configuration and Secrets: Practical Safety
  6. Debugging: A Repeatable Process
  7. Testing: What to Test and How to Keep It Useful
  8. Code Quality: Linters, Formatters, and Review Habits
  9. Automation: Makefile, npm scripts, and Task Runners
  10. Deployment Basics: Build, Release, Observe
  11. Performance and Reliability: Practical Patterns
  12. Documentation That Actually Helps
  13. A Worked Example: From Repo to Release
  14. Quick Reference: Commands and Patterns

1. Mindset: What “Intermediate” Really Means

Being intermediate is less about knowing many tools and more about using the tools you already have with discipline, judgment, and repeatable habits.

A common trap is chasing advanced topics (distributed tracing, complex CI, microservices) before mastering the boring fundamentals: reproducible environments, clean Git habits, and dependable automation.


2. A Strong Terminal Workflow

A strong terminal workflow is a force multiplier. The goal is to reduce friction: fewer manual steps, fewer mistakes, faster inspection.

2.1 Shell fundamentals that pay off

Learn these primitives well: exit codes, pipes, redirection, quoting, and environment variables.

Example: run a command and inspect its exit code:

some_command
echo "exit code was $?"

In scripts, prefer:

set -euo pipefail

Meaning: -e exits the script when a command fails, -u treats unset variables as errors, and -o pipefail makes a pipeline fail if any command in it fails.

Example script:

#!/usr/bin/env bash
set -euo pipefail

: "${API_URL:?Must set API_URL}"
curl -fsS "$API_URL/health" > /dev/null
echo "OK"

The : "${VAR:?message}" pattern is a reliable way to enforce required env vars.
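
The same parameter-expansion family also provides fallbacks. A small sketch, where PORT is a hypothetical optional setting and the API_URL value is a stand-in so the demo runs:

```shell
#!/usr/bin/env bash
set -euo pipefail

API_URL="https://example.com"   # stand-in value for the demo

# :? aborts with a clear message when the variable is unset or empty
: "${API_URL:?Must set API_URL}"

# :- substitutes a default instead of aborting
PORT="${PORT:-3000}"
echo "port=$PORT"
```

Use :? for values the script cannot invent, and :- for values with a sensible default.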

2.2 Safer command execution

Intermediate users treat destructive operations as “danger zones”.

Preview before delete:

# Preview what would be deleted
find . -name "*.tmp" -print

# Then delete
find . -name "*.tmp" -delete

Use rm carefully. A safe pattern is to print the target first:

target="./build"
echo "About to remove: $target"
rm -rf -- "$target"

-- prevents weird paths from being interpreted as flags.
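
To see what `--` protects against, here is a contrived demo with a file whose name looks like a flag:

```shell
#!/usr/bin/env bash
set -euo pipefail

workdir="$(mktemp -d)"
cd "$workdir"

touch -- "--dangerous-name"   # create a file whose name starts with dashes

# rm "--dangerous-name" alone would fail: rm parses the name as options.
# After --, everything is treated as a path.
rm -- "--dangerous-name"
echo "removed"
```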

2.3 Finding and inspecting things fast

Find files by name:

find . -maxdepth 3 -type f -name "*.md"

Search inside files:

# ripgrep is faster and nicer than grep for many workflows
rg "TODO|FIXME" -n .

Inspect file sizes:

du -sh .
du -sh node_modules 2>/dev/null || true

Check what’s listening on a port:

# Linux
ss -ltnp | rg ":3000"

# macOS alternative
lsof -nP -iTCP:3000 -sTCP:LISTEN

See processes and resource usage:

ps aux | rg "python|node"
top

2.4 Text processing patterns

Text processing is a superpower because logs, JSON, and CLI output are everywhere.

Extract columns with awk:

# Show the file name (ninth field) from `ls -l`
ls -l | awk '{print $9}'

Count unique values:

awk '{print $1}' access.log | sort | uniq -c | sort -nr | head
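
The pipeline generalizes to any column. A self-contained demo (sample.log is generated inline; the log format here is made up):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Fake access-log lines: the first column is the client IP
cat > sample.log <<'EOF'
10.0.0.1 GET /a
10.0.0.2 GET /b
10.0.0.1 GET /c
10.0.0.1 GET /a
EOF

# Count requests per IP, most frequent first
awk '{print $1}' sample.log | sort | uniq -c | sort -nr | head
```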

Work with JSON using jq:

curl -sS https://api.github.com/repos/nodejs/node | jq '{full_name, stargazers_count, open_issues}'

Filter arrays:

curl -sS https://api.github.com/repos/nodejs/node/issues?per_page=5 \
  | jq -r '.[].title'

Common best practice: keep raw output and derived output separate:

curl -sS "$API_URL/data" > data.raw.json
jq '.items | length' data.raw.json > item_count.txt

This makes debugging easier because you can re-run transformations without re-fetching.


3. Git: Practical Branching, Debugging, and Recovery

Git proficiency is one of the biggest “intermediate” differentiators. The goal is not fancy rebases—it’s confidence: you can experiment without fear.

3.1 Clean history without perfectionism

A clean history helps code review and future debugging. But avoid spending hours polishing commits.

A practical approach:

Create a branch:

git switch -c feature/better-logging

Commit with a clear message:

git add src/logger.js
git commit -m "Add request id to logs"

A helpful commit message format: a short imperative summary line (roughly 50 characters), then a blank line, then the reasoning behind the change.

3.2 Stashing, patching, and partial commits

Stash uncommitted work:

git stash push -m "WIP: debug flaky test"

List and apply:

git stash list
git stash pop

Partial staging (highly recommended):

git add -p

This lets you split unrelated changes into separate commits, which improves review and reduces rollback risk.

Checkout a single file from another branch:

git checkout main -- README.md

(Modern Git also supports git restore --source main README.md.)

3.3 Undoing mistakes safely

Undoing is where intermediate Git habits matter most.

Undo local changes to a file:

git restore src/app.js

Undo a commit but keep changes staged:

git reset --soft HEAD~1

Undo a commit and discard changes (dangerous):

git reset --hard HEAD~1

Revert a commit in shared history (safe for main branches):

git revert <commit_sha>

This creates a new commit that reverses the old one, preserving history.
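
A throwaway-repo demonstration of that behavior (the git identity is configured locally just so the demo commits succeed):

```shell
#!/usr/bin/env bash
set -euo pipefail

repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo good > file.txt; git add file.txt; git commit -qm "good change"
echo bad >> file.txt; git commit -qam "bad change"

git revert --no-edit HEAD    # adds a new commit that undoes "bad change"
cat file.txt                 # contents are back to "good"
git log --oneline            # all three commits remain in history
```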

Recover “lost” commits with reflog:

git reflog
git switch -c recover-branch <sha_from_reflog>

If you ever think “Git ate my work,” reflog is often the solution.
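
Here is the whole loss-and-recovery cycle in a throwaway repo (identity configured locally for the demo):

```shell
#!/usr/bin/env bash
set -euo pipefail

repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo one > file.txt; git add file.txt; git commit -qm "first"
echo two > file.txt; git commit -qam "second"

git reset --hard -q HEAD~1          # "lose" the second commit

# The reflog still knows where HEAD was before the reset
git switch -qc recover-branch "HEAD@{1}"
cat file.txt                        # prints: two
```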

3.4 Investigating regressions

When something broke “somewhere between last week and now”, use git bisect:

git bisect start
git bisect bad
git bisect good v1.2.0

Git will check out commits for you to test. After testing each commit:

git bisect good
# or
git bisect bad

When done:

git bisect reset

Best practice: automate the test used during bisect:

git bisect start
git bisect bad
git bisect good v1.2.0
git bisect run bash -c 'npm test'

4. Dependency Management and Reproducible Environments

Intermediate projects fail in subtle ways when environments drift. Your goal is reproducibility: the same checkout should produce the same dependencies and the same behavior on every machine, including CI.

4.1 Node.js (npm) example

Use npm ci in CI and often in local clean setups. It installs exactly what’s in package-lock.json.

rm -rf node_modules
npm ci
npm test

Add scripts to package.json:

{
  "scripts": {
    "test": "node --test",
    "lint": "eslint .",
    "format": "prettier -w .",
    "start": "node src/server.js"
  }
}

Run:

npm run lint
npm run format
npm start

Best practice: pin Node version via .nvmrc:

node -v
echo "20.11.1" > .nvmrc

Then:

nvm use

4.2 Python (venv/pip) example

Create and activate a venv:

python3 -m venv .venv
source .venv/bin/activate
python -m pip install --upgrade pip

Install dependencies:

pip install -r requirements.txt

Freeze exact versions:

pip freeze > requirements.lock.txt

A practical approach is to keep requirements.txt for your direct dependencies and install from the frozen requirements.lock.txt wherever reproducibility matters.

Then in CI:

pip install -r requirements.lock.txt
pytest -q

4.3 Containers: when and why

Containers are useful when your app needs system-level dependencies, when “works on my machine” problems keep appearing, or when you want development and production environments to match.

A minimal Dockerfile example for Node:

FROM node:20-alpine

WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

COPY . .
EXPOSE 3000
CMD ["node", "src/server.js"]

Build and run:

docker build -t myapp:local .
docker run --rm -p 3000:3000 myapp:local

Best practice: keep images small and builds cache-friendly by copying lockfiles first.
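
A .dockerignore keeps local artifacts out of the build context and the cache stable (a minimal sketch; adjust the entries for your project):

```
node_modules
dist
.git
.env
```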


5. Configuration and Secrets: Practical Safety

A common intermediate mistake is mixing configuration into code or committing secrets.

Use environment variables for configuration:

export PORT=3000
export DATABASE_URL="postgres://user:pass@localhost:5432/app"

In many frameworks, you can load .env files for local development. The important part is to keep the real .env out of version control and commit a .env.example with placeholder values instead.

Example:

cat > .env.example <<'EOF'
PORT=3000
DATABASE_URL=postgres://user:password@localhost:5432/app
LOG_LEVEL=info
EOF

Add to .gitignore:

printf "\n.env\n" >> .gitignore

Never log secrets. If you must log configuration, redact sensitive values such as passwords and tokens first.
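
A minimal redaction sketch in shell; the DATABASE_URL value here is a demo, and the sed pattern assumes the usual scheme://user:password@host form:

```shell
#!/usr/bin/env bash
set -euo pipefail

DATABASE_URL="postgres://user:secret@localhost:5432/app"   # demo value

# Replace the password portion (between ':' and '@') before logging
redacted="$(printf '%s' "$DATABASE_URL" | sed -E 's#(://[^:]+):[^@]+@#\1:***@#')"
echo "config: DATABASE_URL=$redacted"
```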


6. Debugging: A Repeatable Process

Debugging skill is mostly process, not brilliance.

A reliable loop:

  1. Observe the failure precisely (error message, stack trace, logs)
  2. Reproduce it consistently
  3. Reduce the scope (minimal reproduction)
  4. Isolate the cause (binary search, toggles, bisect)
  5. Fix
  6. Prevent (tests, monitoring, runbooks)

6.1 Logging with intent

Good logs answer what happened, where in the code, for which request or user, and with what outcome.

Add a request id (or correlation id). Example in a Node/Express-like app:

import crypto from "node:crypto";

export function requestId(req, res, next) {
  req.id = crypto.randomUUID();
  res.setHeader("x-request-id", req.id);
  next();
}

Log with context:

console.log(JSON.stringify({
  level: "info",
  msg: "user login",
  requestId: req.id,
  userId,
  ok: true
}));

Best practice: structured logs (JSON) are easier to search and parse.
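
For example, pulling out one request’s lines becomes a one-liner (log.jsonl and the field names are illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Two sample structured log lines
cat > log.jsonl <<'EOF'
{"level":"info","msg":"user login","requestId":"abc-123","ok":true}
{"level":"error","msg":"db timeout","requestId":"def-456","ok":false}
EOF

# Every line for one request
grep '"requestId":"abc-123"' log.jsonl
```

With jq installed, `jq -c 'select(.requestId == "abc-123")' log.jsonl` does the same thing without depending on exact key ordering.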

6.2 Reproduce, reduce, isolate

If a bug only happens “sometimes,” you need to make it happen “always”: capture the failing input once, then replay it deterministically.

Example: save a response and replay parsing:

curl -sS "$API_URL/problem" > failing.json
node scripts/parse.js failing.json

Reduction: remove fields until it still fails. This often reveals the trigger.

Isolation: turn off parts of the system one at a time (caches, middleware, feature flags) until the failure disappears; the last piece you disabled is your prime suspect.

6.3 CLI debugging tools

HTTP debugging with curl:

curl -v https://example.com
curl -i -X POST "$API_URL/login" -H "content-type: application/json" \
  -d '{"email":"a@example.com","password":"wrong"}'

Measure timing:

curl -o /dev/null -sS -w "dns=%{time_namelookup} connect=%{time_connect} total=%{time_total}\n" \
  https://example.com

Check DNS:

dig example.com +short

Inspect TLS certificate:

echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null | openssl x509 -noout -issuer -subject -dates

7. Testing: What to Test and How to Keep It Useful

Testing is not about maximum coverage; it’s about confidence per minute spent.

7.1 Unit vs integration vs end-to-end

A practical balance: many fast unit tests for logic, fewer integration tests for the boundaries between components, and a handful of end-to-end tests for critical user flows.

7.2 Test structure and naming

Good tests read like documentation: the name states the behavior, and the body shows the input, the action, and the expected result.

Example (pseudo-JS):

test("parses ISO date strings as UTC", () => {
  const input = "2026-01-01T00:00:00Z";
  const result = parseDate(input);
  expect(result.toISOString()).toBe(input);
});

Best practice: test names describe behavior, not implementation.

7.3 Running tests like a pro

Run a single test file:

pytest tests/test_auth.py -q

Run tests matching a keyword:

pytest -k "login" -q

For Node’s built-in test runner:

node --test test/auth.test.js

In CI, prefer the same command developers run locally. Avoid “special CI commands” that drift.


8. Code Quality: Linters, Formatters, and Review Habits

Linters and formatters reduce cognitive load and code review noise.

A practical workflow:

  1. format on save in editor
  2. run linter before commit
  3. CI enforces both

Example with Node:

npm run format
npm run lint

Code review best practices (intermediate-level): keep pull requests small and focused, review for behavior and clarity rather than formatting, and phrase feedback as questions when intent is unclear.


9. Automation: Makefile, npm scripts, and Task Runners

Automation is how you turn “tribal knowledge” into repeatable commands.

Makefile example

Create a Makefile:

.PHONY: install test lint format clean

install:
	npm ci

test:
	npm test

lint:
	npm run lint

format:
	npm run format

clean:
	rm -rf node_modules dist

Then you can run:

make install
make test
make lint

Why this is useful: the commands live in one discoverable place, they are identical on every machine, and a new contributor can run make install instead of asking around.


10. Deployment Basics: Build, Release, Observe

Intermediate deployment isn’t about complex infrastructure; it’s about reliable releases.

A simple mental model: build an artifact, release a versioned copy of that artifact, then observe it in production.

10.1 A minimal release checklist

Before releasing, make sure you are on an up-to-date main, dependencies are freshly installed, and the tests and build pass.

Example: run a typical pre-release sequence:

git switch main
git pull --ff-only
npm ci
npm test
npm run build

Tag a release:

git tag -a v1.3.0 -m "Release v1.3.0"
git push origin v1.3.0

10.2 Observability essentials

At minimum, you want a health check endpoint, searchable logs, and an alert when error rates spike.

A health check example:

curl -fsS "$API_URL/health" && echo "healthy"

Best practice: health checks should verify dependencies that matter (DB connectivity) but avoid expensive operations.


11. Performance and Reliability: Practical Patterns

Performance work should be driven by measurement.

Measure first: profile or load-test to find the actual bottleneck before changing any code.

Example: quick load testing with hey (if installed):

hey -n 2000 -c 50 https://example.com/api/items

Common reliability patterns: timeouts on every external call, retries with backoff for transient failures, and graceful degradation when a dependency is down.

Even without fancy libraries, you can enforce timeouts in curl:

curl --max-time 5 -fsS "$API_URL/health"
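
Retries with backoff also need nothing beyond the shell. A sketch, with a stand-in flaky command (it fails twice, then succeeds) so the loop terminates deterministically; in real use you would call something like the curl health check above:

```shell
#!/usr/bin/env bash
set -euo pipefail

state="$(mktemp)"
trap 'rm -f "$state"' EXIT

# Stand-in for a flaky check: fails the first two calls, then succeeds
flaky() {
  count="$(cat "$state")"
  count="${count:-0}"
  echo "$((count + 1))" > "$state"
  [ "$count" -ge 2 ]
}

# Retry with exponential backoff (delays shortened for the demo)
delay=0.1
result=fail
for attempt in 1 2 3; do
  if flaky; then result=ok; break; fi
  sleep "$delay"
  delay="$(awk -v d="$delay" 'BEGIN { print d * 2 }')"
done
echo "result=$result attempts=$attempt"
```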

12. Documentation That Actually Helps

Documentation should reduce repeated questions and speed up onboarding.

A useful README.md typically includes what the project does, how to set it up, how to run the tests, and how to deploy it.

A strong pattern is to include a “Quickstart” section with copy/paste commands:

git clone https://github.com/you/project.git
cd project
npm ci
cp .env.example .env
npm test
npm start

Best practice: keep docs close to code; update docs in the same PR as changes.


13. A Worked Example: From Repo to Release

This section ties together the practices above with a realistic workflow.

Step 1: Clone and bootstrap

git clone https://github.com/you/myapp.git
cd myapp
npm ci
cp .env.example .env

Verify:

npm test
npm start

If port 3000 is in use:

lsof -nP -iTCP:3000 -sTCP:LISTEN
export PORT=3001
npm start

Step 2: Create a feature branch and implement

git switch -c feature/add-health-details

Edit code. Then run:

npm run lint
npm test

Commit in small chunks:

git add -p
git commit -m "Expand /health response with version"

Step 3: Handle a mistake safely

If you committed something you didn’t mean to (but haven’t pushed):

git reset --soft HEAD~1
git restore --staged .
git add -p
git commit -m "Correct commit contents"

Step 4: Rebase or merge thoughtfully

If your team prefers rebasing feature branches:

git fetch origin
git rebase origin/main

Resolve conflicts, then:

npm test

Push:

git push -u origin feature/add-health-details

Step 5: Release

After merge to main:

git switch main
git pull --ff-only
npm ci
npm test
git tag -a v1.4.0 -m "Release v1.4.0"
git push origin v1.4.0

Deploy using your platform’s mechanism (CI/CD, container registry, etc.). Then verify:

curl -fsS "$API_URL/health" | jq .

14. Quick Reference: Commands and Patterns

Terminal

# Search text recursively
rg "pattern" .

# Find files
find . -type f -name "*.log"

# Show listening ports
ss -ltnp

# JSON processing
curl -sS "$API_URL/data" | jq '.items[] | {id, name}'

Git

# New branch
git switch -c feature/x

# Partial stage
git add -p

# Undo local changes
git restore path/to/file

# Safe undo in shared history
git revert <sha>

# Recover work
git reflog

Node

npm ci
npm test
npm run lint
npm run format

Docker

docker build -t app:local .
docker run --rm -p 3000:3000 app:local

Closing Notes

Intermediate skill comes from building habits that scale: reproducible environments, safe Git workflows, meaningful logs, and automation that makes the right thing easy. If you apply even a few of the practices here—git add -p, npm ci, structured logs, and a release checklist—you’ll notice fewer “mystery failures” and faster progress on real projects.