## When to Activate
- Setting up eval-driven development (EDD) for AI-assisted workflows
- Defining pass/fail criteria for Claude Code task completion
- Measuring agent reliability with pass@k metrics
- Creating regression test suites for prompt or agent changes
- Benchmarking agent performance across model versions
## Philosophy
Eval-Driven Development treats evals as the “unit tests of AI development”:
- Define expected behavior BEFORE implementation
- Run evals continuously during development
- Track regressions with each change
- Use pass@k metrics for reliability measurement
## Eval Types
### Capability Evals
Test if Claude can do something it couldn’t do before:
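For example, a minimal sketch using Node’s built-in test runner, where `createUser` and its module path are hypothetical stand-ins for the new capability under eval:

```typescript
// Capability eval: can the implementation create a new user account?
// `createUser` and "../src/users" are illustrative, not a fixed API.
import { test } from "node:test";
import assert from "node:assert/strict";

import { createUser } from "../src/users";

test("capability: create new user account", async () => {
  const user = await createUser({ email: "a@example.com", password: "hunter2!" });
  assert.ok(user.id, "new user should receive an id");
  assert.equal(user.email, "a@example.com");
});
```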
### Regression Evals

Ensure changes don’t break existing functionality:
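A regression eval pins down behavior that already worked, for example (again with hypothetical names):

```typescript
// Regression eval: the existing login flow must keep working after the change.
// `login` and "../src/auth" are illustrative stand-ins.
import { test } from "node:test";
import assert from "node:assert/strict";

import { login } from "../src/auth";

test("regression: existing login still works", async () => {
  const session = await login("a@example.com", "hunter2!");
  assert.ok(session.token, "login should still return a session token");
});
```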
## Grader Types

### 1. Code-Based Grader
Deterministic checks using code:
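A minimal sketch, where the criterion (“output parses as JSON with a string `status` field”) is illustrative; swap in whatever the eval specifies:

```typescript
// Code-based grader: a deterministic pass/fail check on the agent's output.
function codeGrader(output: string): boolean {
  try {
    const parsed = JSON.parse(output);
    return typeof parsed.status === "string";
  } catch {
    return false; // unparseable output fails deterministically
  }
}
```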
### 2. Model-Based Grader

Use Claude to evaluate open-ended outputs:
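A sketch using the `@anthropic-ai/sdk` package (assumes `ANTHROPIC_API_KEY` is set in the environment; the rubric and model id are illustrative):

```typescript
// Model-based grader: ask Claude to judge an open-ended output as PASS/FAIL.
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

async function modelGrader(task: string, output: string): Promise<boolean> {
  const msg = await client.messages.create({
    model: "claude-sonnet-4-5", // illustrative; substitute a current model id
    max_tokens: 16,
    messages: [
      {
        role: "user",
        content:
          `Task: ${task}\n\nOutput: ${output}\n\n` +
          "Does the output fully satisfy the task? Answer PASS or FAIL only.",
      },
    ],
  });
  const block = msg.content[0];
  return block.type === "text" && block.text.trim().startsWith("PASS");
}
```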
### 3. Human Grader

Flag for manual review:
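One way to implement this is to record the result as pending rather than scoring it automatically; a sketch with illustrative types and names:

```typescript
// Human grader: never auto-score; queue the output for manual review.
type ReviewItem = { evalId: string; output: string; reason: string };

const reviewQueue: ReviewItem[] = [];

function humanGrader(evalId: string, output: string, reason: string): "NEEDS_REVIEW" {
  reviewQueue.push({ evalId, output, reason });
  return "NEEDS_REVIEW"; // recorded as pending, never as an automatic PASS
}
```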
## Metrics

### pass@k
“At least one success in k attempts”
- pass@1: First attempt success rate
- pass@3: Success within 3 attempts
- Typical target: pass@3 > 90%
### pass^k
“All k trials succeed”
- Higher bar for reliability
- pass^3: 3 consecutive successes
- Use for critical paths
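Both metrics can be estimated from n recorded trials with c successes: pass@k via the standard unbiased estimator 1 - C(n-c, k) / C(n, k), and pass^k as (c/n)^k. A sketch (function names are illustrative):

```typescript
// pass@k: probability that at least one of k sampled attempts succeeds,
// estimated as 1 - C(n-c, k) / C(n, k) over n trials with c successes.
// Assumes k <= n.
function passAtK(n: number, c: number, k: number): number {
  if (n - c < k) return 1; // too few failures to fill k attempts
  let prob = 1;
  for (let i = 0; i < k; i++) prob *= (n - c - i) / (n - i); // C(n-c,k)/C(n,k)
  return 1 - prob;
}

// pass^k: probability that all k attempts succeed, estimated as (c/n)^k.
function passToTheK(n: number, c: number, k: number): number {
  return Math.pow(c / n, k);
}

// e.g. 10 trials, 9 successes: passAtK(10, 9, 3) = 1.0, passToTheK(10, 9, 3) ≈ 0.73
```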
## Eval Workflow
```markdown
## EVAL DEFINITION: feature-xyz

### Capability Evals
1. Can create new user account
2. Can validate email format
3. Can hash password securely

### Regression Evals
1. Existing login still works
2. Session management unchanged
3. Logout flow intact

### Success Metrics
- pass@3 > 90% for capability evals
- pass^3 = 100% for regression evals
```
```bash
# Run capability evals
[Run each capability eval, record PASS/FAIL]

# Run regression evals
npm test -- --testPathPattern="existing"

# Generate report
```
```text
EVAL REPORT: feature-xyz
========================

Capability Evals:
  create-user: PASS (pass@1)
  validate-email: PASS (pass@2)
  hash-password: PASS (pass@1)
Overall: 3/3 passed

Regression Evals:
  login-flow: PASS
  session-mgmt: PASS
  logout-flow: PASS
Overall: 3/3 passed

Metrics:
  pass@1: 67% (2/3)
  pass@3: 100% (3/3)

Status: READY FOR REVIEW
```
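The capability half of that report can be generated mechanically from recorded attempt counts; a sketch, with an illustrative data shape (`attempts` maps eval name to the attempts needed to pass, `Infinity` meaning never passed):

```typescript
// "Generate report" step: turn attempt counts into the report format above.
function capabilityReport(attempts: Record<string, number>): string {
  const names = Object.keys(attempts);
  const at = (k: number) => names.filter((n) => attempts[n] <= k).length;
  const pct = (k: number) =>
    `${Math.round((100 * at(k)) / names.length)}% (${at(k)}/${names.length})`;
  return [
    "Capability Evals:",
    ...names.map((n) => `  ${n}: ${attempts[n] <= 3 ? `PASS (pass@${attempts[n]})` : "FAIL"}`),
    `Overall: ${at(3)}/${names.length} passed`,
    "",
    "Metrics:",
    `  pass@1: ${pct(1)}`,
    `  pass@3: ${pct(3)}`,
  ].join("\n");
}

// capabilityReport({ "create-user": 1, "validate-email": 2, "hash-password": 1 })
// reproduces the capability section and metrics shown above.
```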
## Best Practices
- Define evals BEFORE coding - Forces clear thinking about success criteria
- Run evals frequently - Catch regressions early
- Track pass@k over time - Monitor reliability trends
- Use code graders when possible - Deterministic > probabilistic
- Human review for security - Never fully automate security checks
- Keep evals fast - Slow evals don’t get run
- Version evals with code - Evals are first-class artifacts