Overview
The tdd-guide agent is a Test-Driven Development specialist that ensures all code is developed test-first with comprehensive coverage (80%+ required).
Agent identifier
Uses Claude Sonnet for efficient test generation
Available tools:
Read, Write, Edit, Bash, Grep
When to Use
- Writing new features
- Fixing bugs
- Refactoring existing code
- Ensuring test coverage meets the 80%+ threshold
The tdd-guide agent activates PROACTIVELY when writing new features, fixing bugs, or refactoring code.
Core Responsibilities
- Enforce tests-before-code methodology
- Guide through Red-Green-Refactor cycle
- Ensure 80%+ test coverage
- Write comprehensive test suites (unit, integration, E2E)
- Catch edge cases before implementation
TDD Workflow
The agent follows the classic Red-Green-Refactor cycle:
1. Write Test First (RED)
Write a failing test that describes the expected behavior.
2. Run Test — Verify it FAILS
3. Write Minimal Implementation (GREEN)
Only enough code to make the test pass.
4. Run Test — Verify it PASSES
5. Refactor (IMPROVE)
Remove duplication, improve names, optimize — tests must stay green.
6. Verify Coverage
Coverage must be at least 80% for all metrics.
Test Types Required
| Type | What to Test | When |
|---|---|---|
| Unit | Individual functions in isolation | Always |
| Integration | API endpoints, database operations | Always |
| E2E | Critical user flows (Playwright) | Critical paths |
Unit Tests
Test individual functions without external dependencies:
Integration Tests
Test API endpoints and database operations:
E2E Tests
Test critical user journeys:
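A sketch of a Playwright test for a hypothetical login flow; the URL, labels, and headings are placeholders for whatever the real app renders.

```typescript
import { test, expect } from "@playwright/test";

test("user can sign in", async ({ page }) => {
  await page.goto("http://localhost:3000/login");
  await page.getByLabel("Email").fill("ada@example.com");
  await page.getByLabel("Password").fill("correct-horse");
  await page.getByRole("button", { name: "Sign in" }).click();
  // Assert on what the user sees, not on internal state.
  await expect(page.getByRole("heading", { name: "Dashboard" })).toBeVisible();
});
```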
Edge Cases You MUST Test
- Null/undefined input
- Empty arrays/strings
- Invalid types passed
- Boundary values (min/max)
- Error paths (network failures, DB errors)
- Race conditions (concurrent operations)
- Large data (performance with 10k+ items)
- Special characters (Unicode, emojis, SQL chars)
Test Anti-Patterns to Avoid
| Anti-Pattern | Problem | Solution |
|---|---|---|
| Testing implementation details | Tests break on refactor | Test behavior, not internals |
| Tests depending on each other | Shared state causes failures | Independent tests |
| Asserting too little | Passing tests that don’t verify anything | Specific assertions |
| Not mocking external dependencies | Flaky tests, slow tests | Mock Supabase, Redis, OpenAI, etc. |
| Using real timers | Non-deterministic tests | Use vi.useFakeTimers() |
Bad: Testing Implementation Details
Good: Testing Behavior
Mocking External Dependencies
Always mock external services to keep tests fast and deterministic:
Quality Checklist
Coverage Requirements
- All public functions have unit tests
- All API endpoints have integration tests
- Critical user flows have E2E tests
- Coverage is 80%+ (branches, functions, lines, statements)
Edge Cases
- Null/undefined inputs tested
- Empty arrays/strings tested
- Invalid type inputs tested
- Boundary values tested
- Error paths tested (not just happy path)
Test Quality
- External dependencies mocked
- Tests are independent (no shared state)
- Assertions are specific and meaningful
- Test names clearly describe behavior
- Tests run fast (<100ms per unit test)
Running Tests
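Typical commands, assuming a Vitest + Playwright setup (adjust script names to your project):

```bash
npm test                      # run the full suite once
npx vitest watch              # re-run affected tests on file changes
npx vitest run --coverage     # check the 80%+ coverage thresholds
npx playwright test           # run E2E tests
```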
Usage Example
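A hypothetical invocation, shown only to illustrate the interaction shape:

```
> Use the tdd-guide agent to add an email validation helper

The agent first writes failing tests (valid addresses, missing @, empty
string, Unicode domains), verifies they fail, then guides a minimal
implementation until all tests and the 80%+ coverage bar pass.
```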
Success Criteria
- Tests written before implementation
- All tests pass
- Coverage meets 80%+ threshold
- Edge cases covered
- External dependencies mocked
- Tests are independent and fast