I keep ending up back in Bash. Not because it's elegant — because it's already on every machine I touch. These are the patterns I reuse.
Safe defaults at the top of every script
```bash
#!/usr/bin/env bash
set -Eeuo pipefail
IFS=$'\n\t'
```

`-e` exits on error, `-u` errors on undefined variables, `-o pipefail` makes a failure anywhere in a pipeline become the pipeline's exit status, and `-E` makes the `ERR` trap fire inside functions, command substitutions, and subshells. The `IFS` reset prevents word-splitting surprises. This single block catches more bugs than every linter combined.
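A quick way to see what `pipefail` actually changes — a standalone sketch, not part of the script above:

```shell
#!/usr/bin/env bash
# Without pipefail, a pipeline's status is the LAST command's status,
# so an upstream failure is silently swallowed.
set +o pipefail
false | true
echo "without pipefail: $?"    # → without pipefail: 0

# With pipefail, the rightmost non-zero status wins instead.
set -o pipefail
false | true
echo "with pipefail: $?"       # → with pipefail: 1
```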
Argument parsing without a framework
Once a script grows past one or two flags, `getopts` is still enough — no framework required:
```bash
DRY_RUN=0
VERBOSE=0
while getopts "nvh" opt; do
  case $opt in
    n) DRY_RUN=1 ;;
    v) VERBOSE=1 ;;
    h) usage; exit 0 ;;
    *) usage; exit 1 ;;
  esac
done
shift $((OPTIND - 1))
```

Single-letter flags only, but that's a feature — scripts stay small and their surface stays narrow.
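To see the loop and the trailing `shift` in action, you can exercise it against a fake argv with `set --`. The flag names match the block above; the demo arguments (`build/`, `output/`) are made up:

```shell
#!/usr/bin/env bash
# Sketch: run the getopts loop on a pretend command line.
DRY_RUN=0
VERBOSE=0
set -- -n -v build/ output/          # fake argv: two flags, two positional args
while getopts "nvh" opt; do
  case $opt in
    n) DRY_RUN=1 ;;
    v) VERBOSE=1 ;;
  esac
done
# getopts stops at the first non-flag; shifting by OPTIND-1 drops the
# consumed flags and leaves only the positional arguments in "$@".
shift $((OPTIND - 1))
echo "DRY_RUN=$DRY_RUN VERBOSE=$VERBOSE rest=$*"
# → DRY_RUN=1 VERBOSE=1 rest=build/ output/
```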
Dry-run mode as a first-class feature
Every script that mutates anything gets -n:
```bash
run() {
  if [ "$DRY_RUN" = 1 ]; then
    echo "+ $*"
  else
    "$@"
  fi
}

run rm -rf "$build_dir"
```

This turns "run it and hope" into "run it, see what it would do, then run it for real." The cost is wrapping destructive calls in `run`. The benefit is never having to restore from backup.
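A minimal check that the wrapper behaves as described — the temp directory and file names here are mine, for demonstration only:

```shell
#!/usr/bin/env bash
run() {
  if [ "$DRY_RUN" = 1 ]; then
    echo "+ $*"
  else
    "$@"
  fi
}

tmp=$(mktemp -d)

DRY_RUN=1
run touch "$tmp/file"    # prints "+ touch …", creates nothing
DRY_RUN=0
run touch "$tmp/file"    # actually creates the file
ls "$tmp"                # → file
```

One caveat worth knowing: redirections are handled by the calling shell before `run` ever sees the command, so `run some_cmd > out.txt` still truncates `out.txt` even in dry-run mode. Keep redirections out of `run` calls, or pass them inside an explicit `sh -c`.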
Structured logging
```bash
log()  { printf '[%s] %s\n' "$(date +%H:%M:%S)" "$*" >&2; }
warn() { printf '[%s] WARN: %s\n' "$(date +%H:%M:%S)" "$*" >&2; }
die()  { printf '[%s] FATAL: %s\n' "$(date +%H:%M:%S)" "$*" >&2; exit 1; }
```

Three functions, three severity levels, stderr only. This is 80% of what you actually need from a logging library.
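One extension I find worth a fourth line: a debug level gated on the `-v` flag from the argument-parsing section. The `debug` name and format are my addition, not part of the trio above:

```shell
#!/usr/bin/env bash
log()   { printf '[%s] %s\n' "$(date +%H:%M:%S)" "$*" >&2; }

# Only speaks when VERBOSE=1. The if-form (rather than `[ … ] && …`)
# keeps the function's exit status 0 when quiet, so it is safe under set -e.
debug() {
  if [ "${VERBOSE:-0}" = 1 ]; then
    printf '[%s] DEBUG: %s\n' "$(date +%H:%M:%S)" "$*" >&2
  fi
}

VERBOSE=1
debug "cache miss, rebuilding"   # printed to stderr
VERBOSE=0
debug "cache miss, rebuilding"   # silent
```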
When to stop reaching for Bash
Once a script crosses ~150 lines or needs JSON, stop. Rewrite in Python. Bash's cost curve goes exponential the moment you need arrays of arrays or error handling more sophisticated than "exit."