Overview
Code review in Superpowers has two distinct sides: requesting review (dispatching a reviewer subagent with precisely crafted context) and receiving review (evaluating feedback technically, not performing agreement). Both sides share a core principle: review early, review often. Catching issues after each task is cheaper than catching them at merge time.

Requesting code review
When to request
Mandatory:
- After each task in subagent-driven development (spec compliance + code quality)
- After completing a major feature
- Before merging to main
- When stuck (fresh perspective)
- Before refactoring (baseline check)
- After fixing a complex bug
Pre-review checklist
Before dispatching a reviewer, verify:
- Implementation is complete (not work in progress)
- All tests pass
- Self-review done — no obvious gaps
- Git SHAs are ready (base SHA and head SHA)
- The spec or requirements are available to give the reviewer
How to request
1. Get the git SHAs (base and head).
2. Dispatch a subagent of the superpowers:code-reviewer type. Provide:
- What was implemented
- What the spec or plan required
- The base SHA and head SHA
- A brief description
3. Act on the results:
- Fix Critical issues immediately
- Fix Important issues before proceeding
- Note Minor issues for later
- Push back with technical reasoning if the reviewer is wrong
Example
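A minimal sketch of gathering the base and head SHAs for a review request. The throwaway repo below exists purely so the example is self-contained; in practice you would run the last two commands in your working repository, and the `git` helper is a hypothetical convenience, not part of Superpowers.

```python
# Capture the base and head SHAs a reviewer will diff.
import subprocess, tempfile
from pathlib import Path

def git(repo, *args):
    """Run a git command in `repo` and return its stripped stdout."""
    return subprocess.run(
        ["git", "-C", str(repo), *args],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

# Illustrative setup: a feature branch with one commit on top of main.
repo = Path(tempfile.mkdtemp())
git(repo, "init", "-q", "-b", "main")
git(repo, "config", "user.email", "demo@example.com")
git(repo, "config", "user.name", "Demo")
(repo / "a.txt").write_text("v1\n")
git(repo, "add", ".")
git(repo, "commit", "-qm", "base commit")
git(repo, "checkout", "-qb", "feature")
(repo / "a.txt").write_text("v2\n")
git(repo, "commit", "-qam", "implement task")

base_sha = git(repo, "merge-base", "main", "HEAD")  # where feature diverged from main
head_sha = git(repo, "rev-parse", "HEAD")           # current tip of the feature branch
```

These two SHAs, plus the spec and a brief description of what was implemented, are what the reviewer needs to scope the diff.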
The two-stage review in subagent-driven development
In subagent-driven development, every implemented task goes through two sequential reviews before it’s marked complete. The order matters: spec compliance first, code quality second. Do not start code quality review while spec compliance has open issues.

Stage 1: Spec compliance review
The spec compliance reviewer answers one question: did the implementation build exactly what was asked — nothing more, nothing less?

What the reviewer checks:
- Every requirement in the task spec is implemented
- Nothing extra was added that wasn’t requested
- Edge cases explicitly required are handled
- Interface matches what the spec described
Outcomes:
- ✅ Spec compliant — proceed to code quality review
- ❌ Issues found — implementer fixes → spec reviewer re-reviews → repeat until approved
Stage 2: Code quality review
The code quality reviewer answers: is the implementation well-built?

What the reviewer checks:
- Naming: are names clear and descriptive?
- Structure: is the code organized sensibly?
- Test coverage: are behaviors covered, edge cases handled?
- Patterns: does it follow the project’s conventions?
- No obvious code smells (magic numbers, deeply nested logic, duplicated logic)
Outcomes:
- ✅ Approved — mark task complete
- ❌ Issues found — implementer fixes → code reviewer re-reviews → repeat until approved
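The fix-and-re-review cycle that both stages share can be sketched as a small loop. The function names here are hypothetical, chosen for illustration, and are not part of Superpowers:

```python
from typing import Callable, List

Issues = List[str]

def review_loop(run_review: Callable[[], Issues],
                apply_fixes: Callable[[Issues], None],
                max_rounds: int = 10) -> bool:
    """Re-review after every fix; only an issue-free pass counts as approved."""
    for _ in range(max_rounds):
        issues = run_review()
        if not issues:
            return True       # approved
        apply_fixes(issues)   # implementer fixes, then the reviewer looks again
    return False              # did not converge; escalate rather than merge

def complete_task(spec_review, quality_review, apply_fixes) -> bool:
    """Stage 1 must be fully approved before stage 2 begins."""
    return (review_loop(spec_review, apply_fixes)
            and review_loop(quality_review, apply_fixes))
```

Note that the quality loop never starts unless the spec loop returns approved, matching the stage ordering above.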
Review loop
Reviewer finds issues → implementer (same subagent) fixes → reviewer reviews again → repeat until approved → move on. Never skip the re-review. “Close enough” is not approved.

Receiving code review
The response pattern
Read all feedback items before acting: clarify anything unclear, verify each suggestion against the codebase, then implement and test one item at a time.
Handling unclear feedback
If any item is unclear, stop — do not implement anything yet. Ask for clarification on the unclear items. Items may be related; partial understanding leads to wrong implementation.

From your human partner
- Trusted — implement after understanding
- Still ask if scope is unclear
- Skip to action or a brief technical acknowledgment — no performative agreement
From external reviewers
Before implementing any suggestion from an external reviewer, check:
- Is it technically correct for THIS codebase?
- Does it break existing functionality?
- Is there a reason for the current implementation?
- Does it work on all platforms and versions?
- Does the reviewer understand the full context?
YAGNI check for “professional” features
If a reviewer suggests implementing a feature “properly” (database, date filters, export formats, etc.), apply YAGNI: is the feature actually needed now? If the spec doesn’t require it, say so and don’t build it.

Acknowledging correct feedback
When feedback is correct, acknowledge it with a brief technical statement of what was wrong and what the fix will be, then implement. No performative agreement.
When to push back
Push back when:
- Suggestion breaks existing functionality
- Reviewer lacks full context
- Violates YAGNI (unused feature)
- Technically incorrect for this stack
- Legacy or compatibility reasons exist
- Conflicts with your human partner’s architectural decisions
If you pushed back and were wrong
State plainly what you missed, then implement the fix. Pushing back in good faith and being wrong is fine; pushing back without verifying first is not.
Implementation order for multi-item feedback
- Clarify anything unclear first
- Implement in this order:
- Blocking issues (breaks, security)
- Simple fixes (typos, imports)
- Complex fixes (refactoring, logic)
- Test each fix individually
- Verify no regressions
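The ordering above can be sketched as a simple triage. The category labels are illustrative, not a Superpowers convention:

```python
# Sort multi-item feedback into the implementation order described above:
# blocking issues first, then simple fixes, then complex fixes.
FIX_ORDER = {"blocking": 0, "simple": 1, "complex": 2}

def implementation_order(items):
    """items: (category, description) pairs; returns them in fix order."""
    return sorted(items, key=lambda item: FIX_ORDER[item[0]])

feedback = [
    ("complex", "extract duplicated retry logic into a helper"),
    ("blocking", "unescaped input reaches the SQL query"),
    ("simple", "typo in error message"),
]
ordered = implementation_order(feedback)
```

Since `sorted` is stable, multiple items in the same category keep the order the reviewer gave them; each item is still implemented and tested individually.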
Common mistakes
| Mistake | Fix |
|---|---|
| Performative agreement | State the requirement or just act |
| Blind implementation | Verify against the codebase first |
| Batch without testing | One item at a time, test each |
| Assuming reviewer is right | Check if it breaks things |
| Avoiding pushback | Technical correctness > comfort |
| Partial implementation | Clarify all items first |
| Can’t verify, proceed anyway | State the limitation, ask for direction |
Red flags
Never:
- Skip review because “it’s simple”
- Ignore Critical issues
- Proceed with unfixed Important issues
- Argue with valid technical feedback without reasoning
- Say “You’re absolutely right!” — it’s performative, not technical