Overview
Random fixes waste time and create new bugs. Quick patches mask underlying issues and make the next debugging session harder. The systematic approach — investigate first, form a hypothesis, test minimally, fix at the root — is both faster and more reliable than guess-and-check.

Core principle: ALWAYS find the root cause before attempting fixes. Symptom fixes are failure.

From real debugging sessions: the systematic approach takes 15–30 minutes to fix an issue; random fixes take 2–3 hours of thrashing. First-time fix rate: 95% vs 40%.

When to use
Use for ANY technical issue:
- Test failures
- Bugs in production
- Unexpected behavior
- Performance problems
- Build failures
- Integration issues

Especially when:
- Under time pressure (emergencies make guessing tempting)
- “Just one quick fix” seems obvious
- You’ve already tried multiple fixes
- Previous fix didn’t work
- You don’t fully understand the issue
- Issue seems simple (simple bugs have root causes too)
- You’re in a hurry (rushing guarantees rework)
- It feels like you already know the answer
The four phases
You must complete each phase before proceeding to the next.

Phase 1: Root cause investigation

Complete all of these before proposing any fix.

1. Read error messages carefully.
Don’t skip past errors or warnings — they often contain the exact solution. Read stack traces completely. Note line numbers, file paths, and error codes.

2. Reproduce consistently.
Can you trigger it reliably? What are the exact steps? Does it happen every time? If not reproducible: gather more data, don’t guess.

3. Check recent changes.
What changed that could cause this? Git diff, recent commits, new dependencies, config changes, environmental differences.

4. Gather evidence in multi-component systems.
When the system has multiple components (CI → build → signing, API → service → database), add diagnostic instrumentation before proposing fixes. For each component boundary: log what data enters, log what data exits, verify environment and config propagation, and check state at each layer. Run once to gather evidence showing where it breaks, analyze the evidence to identify the failing component, then investigate that specific component.

5. Trace data flow.
When an error is deep in the call stack, trace backward through the call chain to find the original trigger. Fix at the source, not at the symptom.

Example trace:
- Error appears: `git init failed in /Users/jesse/project/packages/core`
- Immediate cause: `execFileAsync('git', ['init'], { cwd: projectDir })` where `projectDir = ''`
- Trace up: `WorktreeManager.createSessionWorktree` → `Session.initializeWorkspace` → `Session.create` → test at `Project.create`
- Root cause: the test accessed `context.tempDir` before `beforeEach` ran — `setupCoreTest()` returned `{ tempDir: '' }` initially
- Fix: at the source (made `tempDir` a getter that throws if accessed too early), not at the git init call

Phase 2: Pattern analysis
Find the pattern before forming a hypothesis.

Find working examples. Locate similar working code in the same codebase. What works that’s similar to what’s broken?

Compare against references. If implementing a pattern, read the reference implementation completely — not skimming, every line. Understand the pattern fully before applying it.

Identify differences. What’s different between working and broken? List every difference, however small. Don’t assume “that can’t matter.”

Understand dependencies. What other components does this need? What settings, config, environment? What assumptions does it make?
Phase 3: Hypothesis and testing
Apply the scientific method.

Form a single hypothesis. State it clearly: “I think X is the root cause because Y.” Write it down. Be specific, not vague.

Test minimally. Make the smallest possible change to test the hypothesis. One variable at a time. Don’t fix multiple things at once.

Verify before continuing. Did it work?
- Yes → proceed to Phase 4
- No → form a new hypothesis
- Do NOT add more fixes on top of a failed fix
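As a hypothetical illustration (the scenario, numbers, and conclusion are invented for the example), the smallest possible hypothesis test might look like:

```javascript
// Hypothesis: “invoice totals drift because of floating-point addition,
// not because of the rounding helper.” Test that one variable in isolation.
const rawSum = 0.1 + 0.2;                 // smallest possible reproduction
console.log(rawSum !== 0.3);              // true — the drift predates any rounding
console.log(Math.round(rawSum * 100) / 100 === 0.3); // true — root-cause fix candidate
```

Here a single-variable check confirms the hypothesis, so the next step is Phase 4. Had either line printed false, the correct move is a new hypothesis, not a second simultaneous fix.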
Phase 4: Implementation
Fix the root cause, not the symptom.

1. Create a failing test case.
Write the simplest possible reproduction. Use an automated test if there’s a test framework. The failing test must exist before the fix. See the test-driven-development skill for how to write proper failing tests.

2. Implement a single fix.
Address the root cause identified in Phase 1. One change at a time. No “while I’m here” improvements. No bundled refactoring.

3. Verify the fix.
Does the test pass now? Are other tests still passing? Is the issue actually resolved?

4. If the fix doesn’t work — STOP.
Count how many fixes you’ve tried.
- Fewer than 3: return to Phase 1 and re-analyze with the new information
- 3 or more: question the architecture (see below)
Signs of an architectural problem:
- Each fix reveals new shared state, coupling, or a problem in a different place
- Fixes require “massive refactoring” to implement
- Each fix creates new symptoms elsewhere

Questions to ask:
- Is this pattern fundamentally sound?
- Are we “sticking with it through sheer inertia”?
- Should we refactor the architecture rather than continue fixing symptoms?
Common rationalizations
Rationalizations and why they're wrong
| Excuse | Reality |
|---|---|
| “Issue is simple, don’t need process” | Simple issues have root causes too. Process is fast for simple bugs. |
| “Emergency, no time for process” | Systematic debugging is FASTER than guess-and-check thrashing. |
| “Just try this first, then investigate” | First fix sets the pattern. Do it right from the start. |
| “I’ll write test after confirming fix works” | Untested fixes don’t stick. Test first proves it. |
| “Multiple fixes at once saves time” | Can’t isolate what worked. Causes new bugs. |
| “Reference too long, I’ll adapt the pattern” | Partial understanding guarantees bugs. Read it completely. |
| “I see the problem, let me fix it” | Seeing symptoms ≠ understanding root cause. |
| “One more fix attempt” (after 2+ failures) | 3+ failures = architectural problem. Question the pattern. |
Red flags — stop and follow process
Signs you're not doing it right
If you catch yourself thinking any of these, stop and return to Phase 1:
- “Quick fix for now, investigate later”
- “Just try changing X and see if it works”
- “Add multiple changes, run tests”
- “Skip the test, I’ll manually verify”
- “It’s probably X, let me fix that”
- “I don’t fully understand but this might work”
- “Pattern says X but I’ll adapt it differently”
- “Here are the main problems: [lists fixes without investigation]”
- Proposing solutions before tracing data flow
- “One more fix attempt” (when already tried 2+)
- Each fix reveals a new problem in a different place
Quick reference
| Phase | Key activities | Success criteria |
|---|---|---|
| 1. Root cause | Read errors, reproduce, check changes, gather evidence | Understand WHAT and WHY |
| 2. Pattern | Find working examples, compare | Identify all differences |
| 3. Hypothesis | Form single theory, test minimally | Confirmed or new hypothesis |
| 4. Implementation | Create test, fix at root, verify | Bug resolved, tests pass |
Supporting techniques
Root cause tracing
Trace bugs backward through the call chain to find the original trigger. Fix at the source, never at the symptom. When you can’t trace manually, add stack trace instrumentation:
`console.error('DEBUG:', { directory, cwd: process.cwd(), stack: new Error().stack })`. Use `console.error()` in tests — logger output may be suppressed.

Defense in depth
After fixing a bug, add validation at every layer the data passes through. A single check in one place can be bypassed by different code paths, refactoring, or mocks. Layer 1: entry-point validation. Layer 2: business logic. Layer 3: environment guards. Layer 4: debug instrumentation. In one production case, all four layers were necessary — different code paths bypassed different checks.
Condition-based waiting
Replace arbitrary `setTimeout`/sleep delays with polling for the actual condition you care about. Arbitrary delays create race conditions — tests pass on fast machines but fail under load or in CI. Use a `waitFor` polling function: check the condition every 10ms, throw with a clear error message after a timeout. From real debugging: fixed 15 flaky tests, pass rate 60% → 100%, execution time 40% faster.

Test-driven development
For Phase 4, Step 1: create the failing test case that reproduces the bug before implementing the fix. See the test-driven development skill for writing proper failing tests. The test proves the fix works and prevents regression.
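A minimal version of the `waitFor` helper described under Condition-based waiting above (real test frameworks often ship an equivalent; tune the timeout for your environment):

```javascript
// Poll a condition every `intervalMs` instead of sleeping a fixed amount.
// Throws with a descriptive message if the condition never becomes true.
async function waitFor(condition, { timeoutMs = 2000, intervalMs = 10, description = 'condition' } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await condition()) return;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`waitFor: timed out after ${timeoutMs}ms waiting for ${description}`);
}
```

Usage: `await waitFor(() => server.isReady, { description: 'server ready' })` (where `server.isReady` is an illustrative flag) replaces `await sleep(500)` and fails with a readable message instead of a mystery flake.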
When the process reveals no root cause
If systematic investigation reveals the issue is truly environmental, timing-dependent, or external:
- You’ve completed the process
- Document what you investigated
- Implement appropriate handling (retry, timeout, error message)
- Add monitoring and logging for future investigation
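For the retry case, a bounded-retry wrapper is usually enough — a sketch, with the attempt count and delay as placeholders to tune:

```javascript
// Retry an async operation a bounded number of times, logging each failure
// so future investigation has evidence to work from.
async function withRetry(operation, { attempts = 3, delayMs = 100 } = {}) {
  let lastError;
  for (let attempt = 1; attempt <= attempts; attempt += 1) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      console.error(`attempt ${attempt}/${attempts} failed:`, error.message);
      if (attempt < attempts) await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError; // surface a clear error after exhausting retries
}
```

Pair this with the monitoring and logging above: the per-attempt log lines are exactly the evidence a future Phase 1 investigation will need.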