Overview
Test-driven development (TDD) is a discipline where every piece of production code is preceded by a failing test that proves it’s needed. The cycle is short, tight, and non-negotiable: write one failing test, watch it fail for the right reason, write the minimal code to pass it, then refactor while staying green.

Core principle: if you didn’t watch the test fail, you don’t know whether it tests the right thing. Violating the letter of the rules is violating the spirit of the rules.

When to use
Always:
- New features
- Bug fixes
- Refactoring
- Behavior changes

Rare exceptions:
- Throwaway prototypes (code you will genuinely delete)
- Generated code
- Configuration files
The RED-GREEN-REFACTOR cycle
RED — Write a failing test
Write one minimal test that shows exactly what should happen. Run it and confirm it fails for the right reason.

Requirements for a good test:
- One behavior only — if “and” appears in the name, split it
- Clear name that describes the behavior
- Tests real code, not a mock (no mocks unless truly unavoidable)

Confirm:
- The test fails (not errors out)
- The failure message is the expected one
- It fails because the feature is missing, not because of a typo
GREEN — Write minimal code
Write the simplest code that makes the test pass. Nothing more: no configurable retry counts, no backoff strategies, no callbacks. Don’t add features, refactor other code, or “improve” anything beyond what the test requires.

Verify GREEN — mandatory. Confirm:
- The test passes
- All other tests still pass
- Output is pristine (no errors, no warnings)
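Continuing the hypothetical email sketch, GREEN is the smallest change that turns the failing test green; anything beyond it is YAGNI:

```python
# GREEN: the simplest change that makes the failing test pass.
def validate_email(email: str) -> bool:
    return email != ""

def test_rejects_empty_email():
    assert validate_email("") is False

test_rejects_empty_email()  # now passes

# Over-engineered alternative (don't): full RFC-style parsing,
# configurable rule sets, pluggable validators -- nothing the
# current test requires.
```

Later tests (say, requiring an `@`) will drive the implementation further; until they exist, this is all the code that is justified.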
REFACTOR — Clean up
After all tests are green, clean up the code:
- Remove duplication
- Improve names
- Extract helpers
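For instance (still hypothetical validators, with names invented for illustration), refactoring removes duplication only once everything is green, and behavior must not change:

```python
# Before refactor (all tests green): the "non-blank" check is duplicated.
def validate_email(email: str) -> bool:
    return email.strip() != "" and "@" in email

def validate_username(name: str) -> bool:
    return name.strip() != ""

# After refactor: duplication extracted into a clearly named helper.
# Re-run the whole suite afterwards; it must stay green.
def _is_present(value: str) -> bool:
    return value.strip() != ""

def validate_email(email: str) -> bool:
    return _is_present(email) and "@" in email

def validate_username(name: str) -> bool:
    return _is_present(name)
```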
Example: fixing a bug with TDD
Bug: Empty email is accepted by the form. RED: write a failing test asserting that an empty email is rejected, and watch it fail. GREEN: make the smallest change that passes it. REFACTOR: clean up while the tests stay green.

Why order matters
The philosophical difference between tests-first and tests-after is not stylistic — it’s fundamental. Tests written after code pass immediately, and a test that passes the moment you write it proves nothing:
- It might test the wrong thing
- It might test implementation details, not behavior
- It might miss edge cases you forgot while building
- You never saw it catch the actual bug
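To make the contrast concrete (hypothetical code, not from the original): a test written after the implementation below is green on its very first run, so it was never seen to catch anything, and it quietly inherits the implementation’s blind spot:

```python
# Code written first, with an unnoticed gap: empty input is accepted.
def normalize_tag(tag: str) -> str:
    return tag.strip().lower()

# Test written after, shaped by what the code already does.
# It passes on the first run, so it proves nothing new, and the
# empty-input case slips through untested.
def test_normalize_tag():
    assert normalize_tag("  Python ") == "python"

test_normalize_tag()
```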
Common rationalizations
Rationalizations and why they're wrong
| Excuse | Reality |
|---|---|
| “Too simple to test” | Simple code breaks. The test takes 30 seconds. |
| “I’ll test after” | Tests passing immediately prove nothing. |
| “Tests after achieve the same goals” | Tests-after = “what does this do?” Tests-first = “what should this do?” |
| “Already manually tested” | Ad-hoc ≠ systematic. No record, can’t re-run. |
| “Deleting X hours is wasteful” | Sunk cost fallacy. Keeping unverified code is technical debt. |
| “Keep as reference, write tests first” | You’ll adapt it. That’s testing after. Delete means delete. |
| “Need to explore first” | Fine. Throw away the exploration, start fresh with TDD. |
| “Test is hard to write = design unclear” | Listen to the test. Hard to test = hard to use. |
| “TDD will slow me down” | TDD is faster than debugging. Pragmatic = test-first. |
| “Manual testing is faster” | Manual doesn’t prove edge cases. You’ll re-test every change. |
| “Existing code has no tests” | You’re improving it. Add tests for what you touch. |
Red flags — stop and start over
Signs you've violated TDD
Any of these means: delete the code and start over with TDD.
- Code written before the test
- Test written after implementation
- Test passes immediately (without watching it fail first)
- Can’t explain why the test failed
- Tests added “later”
- Rationalizing “just this once”
- “I already manually tested it”
- “Tests after achieve the same purpose”
- “It’s about spirit not ritual”
- “Keep as reference” or “adapt existing code”
- “Already spent X hours, deleting is wasteful”
- “TDD is dogmatic, I’m being pragmatic”
- “This is different because…”
Verification checklist
Before marking work complete, every box must be checked:
- Every new function or method has a test
- Watched each test fail before implementing
- Each test failed for the expected reason (feature missing, not a typo)
- Wrote minimal code to pass each test
- All tests pass
- Output is pristine (no errors, no warnings)
- Tests use real code (mocks only if truly unavoidable)
- Edge cases and error paths are covered
Testing anti-patterns
When adding mocks or test utilities, watch for these common violations:

Anti-pattern 1: Testing mock behavior
Anti-pattern 2: Test-only methods in production classes
Anti-pattern 3: Mocking without understanding
Anti-pattern 4: Incomplete mocks
Anti-pattern 5: Tests as afterthought
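A sketch of anti-pattern 1 and its fix, using Python’s `unittest.mock` (the `EmailSender` class and test names are invented for illustration): the first test only verifies the mock’s own bookkeeping, so no production code is exercised at all.

```python
from unittest.mock import Mock

# Anti-pattern: the assertion only checks the mock's call record.
# No production code runs, so this test can never fail meaningfully.
def test_send_asserts_on_mock():
    sender = Mock()
    sender.send("hello")
    sender.send.assert_called_once_with("hello")

# Fix: exercise the real component and assert on its behavior.
class EmailSender:
    def __init__(self) -> None:
        self.outbox: list[str] = []

    def send(self, message: str) -> None:
        if not message:
            raise ValueError("empty message")
        self.outbox.append(message)

def test_send_records_message():
    sender = EmailSender()
    sender.send("hello")
    assert sender.outbox == ["hello"]
```

The real-component version also makes the error path (empty message) testable, which the mock-only version could never reveal.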
Quick reference
| Anti-pattern | Fix |
|---|---|
| Assert on mock elements | Test real component or unmock it |
| Test-only methods in production | Move to test utilities |
| Mock without understanding | Understand dependencies first, mock minimally |
| Incomplete mocks | Mirror real API completely |
| Tests as afterthought | TDD — tests first |
| Over-complex mocks | Consider integration tests with real components |