The workflow consists of four commands: /ce:plan, /ce:work, /ce:review, and /ce:compound.
The Four-Phase Cycle
After Compound, you return to Plan for the next feature - creating a continuous improvement loop.
Phase 1: Plan
Command: /ce:plan [feature description]
Purpose: Transform feature descriptions into well-structured plans following project conventions.
Time Investment: 30-80% of feature time (seems high, but prevents building wrong thing)
Optional: Brainstorm First
For complex or unclear features, start with brainstorming. Brainstorming:
- Asks clarifying questions one at a time
- Explores 2-3 concrete approaches with pros/cons
- Captures key decisions in docs/brainstorms/
- Auto-links to /ce:plan when ready

Brainstorm first when:
- Requirements are unclear or open-ended
- Multiple approaches exist and you need to choose
- Stakeholders need to align on direction
- New territory without established patterns

Plan directly when:
- Requirements are clear and specific
- Established patterns exist for this feature type
- Scope is well-defined and constrained
The Planning Process
Idea Refinement
If no brainstorm exists, /ce:plan asks clarifying questions:
- What's the purpose and who are the users?
- What are the constraints and success criteria?
- Are there specific requirements or edge cases?

If a brainstorm exists in docs/brainstorms/, it's automatically used.

Local Research (Always Runs)
Parallel research to understand your project:
- repo-research-analyst - Existing patterns, CLAUDE.md guidance, technology familiarity
- learnings-researcher - Documented solutions in docs/solutions/ that might apply
Research Decision
Based on signals from refinement and local research, it decides whether external research is valuable:
- Always research: Security, payments, external APIs (the cost of missing something is too high)
- Skip research: Strong local context, user knows what they want, codebase has good patterns
- Research: Uncertainty, unfamiliar territory, new technology

It announces the decision: "Your codebase has solid patterns for this. Proceeding without external research."
External Research (Conditional)
If external research is valuable, runs in parallel:
- best-practices-researcher - Industry best practices and examples
- framework-docs-researcher - Official documentation and patterns
SpecFlow Analysis
Runs spec-flow-analyzer to validate the feature specification:
- Identifies gaps in user flows
- Surfaces edge cases and error scenarios
- Updates acceptance criteria based on findings
Choose Detail Level
Select an implementation detail level based on complexity:

MINIMAL - Quick issue (2-5 minutes to write):
- Problem/feature description
- Basic acceptance criteria
- Essential context only

The middle level (roughly 10-15 minutes to write) adds:
- Detailed background and motivation
- Technical considerations
- Success metrics, dependencies, risks
- Basic implementation suggestions

The most detailed level (30+ minutes to write) adds:
- Executive summary and detailed analysis
- Implementation phases with effort estimates
- Alternative approaches considered
- System-wide impact analysis
- Risk mitigation strategies
- Documentation plan
Write Plan File
Creates a structured markdown file in docs/plans/.

Filename format: YYYY-MM-DD-<type>-<descriptive-name>-plan.md

Content includes:
- YAML frontmatter (title, type, status, date, origin)
- Problem statement or feature description
- Research findings with file references
- Acceptance criteria
- Implementation details (level-dependent)
- Sources and references
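As a sketch of the filename convention and frontmatter fields above (the type, feature name, and field values here are illustrative assumptions, not the plugin's exact implementation):

```ruby
require "date"
require "yaml"

# Build a plan path following docs/plans/YYYY-MM-DD-<type>-<descriptive-name>-plan.md
def plan_filename(type, name, date: Date.today)
  "docs/plans/#{date.strftime('%Y-%m-%d')}-#{type}-#{name}-plan.md"
end

# Frontmatter with the documented fields (title, type, status, date, origin)
def plan_frontmatter(title:, type:, origin:, date: Date.today)
  { "title" => title, "type" => type, "status" => "active",
    "date" => date.to_s, "origin" => origin }.to_yaml
end

puts plan_filename("feat", "dark-mode-toggle", date: Date.new(2025, 1, 15))
# → docs/plans/2025-01-15-feat-dark-mode-toggle-plan.md
```

Note that status starts as "active": as described below, the plan is a living document that later flips to "completed".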
After Planning
You'll see options:
- Open plan in editor - Review the generated plan
- Run /deepen-plan - Enhance each section with parallel research agents
- Review and refine - Improve through structured self-review
- Share to Proof - Collaborative review and sharing
- Start /ce:work - Begin implementation
- Create Issue - Create in GitHub/Linear
Key Planning Insights
Research pays off. Plans are living documents:
- Created with status: active in frontmatter
- Checkboxes marked off as /ce:work completes tasks
- Updated to status: completed when shipped
- Searchable reference for similar future features
Phase 2: Work
Command: /ce:work [plan file path]
Purpose: Execute work plans efficiently while maintaining quality and finishing features.
Time Investment: 20% of feature time (sounds small, but clear plan makes execution fast)
The Execution Process
Quick Start
- Read Plan and Clarify - Review the plan, ask any clarifying questions now
- Setup Environment - Create feature branch or worktree
- Create Todo List - Use TodoWrite to break plan into actionable tasks
Execute Tasks
For each task in priority order:
- Mark task as in_progress in TodoWrite
- Read referenced files from the plan
- Look for similar patterns in codebase
- Implement following existing conventions
- Write tests for new functionality
- Run System-Wide Test Check (see below)
- Run tests after changes
- Mark task as completed in TodoWrite
- Mark checkbox in plan file ([ ] → [x])
- Evaluate for incremental commit
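The plan-checkbox update ([ ] → [x]) can be sketched as follows (the helper name and task text are hypothetical):

```ruby
# Mark a single task's checkbox as done in the plan's markdown text.
def mark_task_done(plan_text, task)
  plan_text.sub("- [ ] #{task}", "- [x] #{task}")
end

plan = "- [ ] Add Subscription model\n- [ ] Write renewal tests\n"
puts mark_task_done(plan, "Add Subscription model")
# → - [x] Add Subscription model
#   - [ ] Write renewal tests
```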
System-Wide Test Check
Before marking a task done, ask the questions below.

When to skip: Leaf-node changes (new helper, new view partial) with no callbacks or state persistence.

When critical: Changes touching models with callbacks, error handling with retry, or functionality in multiple interfaces.
| Question | What to do |
|---|---|
| What fires when this runs? | Trace callbacks, middleware, observers two levels deep |
| Do tests exercise the real chain? | Write integration test with real objects (not all mocks) |
| Can failure leave orphaned state? | Test failure path, verify cleanup or idempotent retry |
| What other interfaces expose this? | Grep for method in related classes, add parity now |
| Do error strategies align across layers? | Verify rescue list matches what lower layer raises |
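As a framework-free sketch of "do tests exercise the real chain?": drive the real objects so a callback's side effect is actually observed instead of stubbed away. All class and method names here are illustrative, not from the plugin:

```ruby
# A completion action that fires a side effect two levels deep.
class Receipts
  attr_reader :sent

  def initialize
    @sent = []
  end

  def deliver(order_id)
    @sent << order_id
  end
end

class Order
  def initialize(id, receipts)
    @id = id
    @receipts = receipts
    @completed = false
  end

  # "What fires when this runs?" -- complete! also delivers a receipt.
  def complete!
    @completed = true
    @receipts.deliver(@id) # the chain a mock-only test would miss
  end

  def completed?
    @completed
  end
end

# Exercising the real chain verifies both the state change and the side effect.
receipts = Receipts.new
order = Order.new(42, receipts)
order.complete!
raise "receipt not sent" unless receipts.sent == [42] && order.completed?
```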
Incremental Commits
After completing each logical unit, evaluate whether to commit.

Commit when:
- Logical unit complete (model, service, component)
- Tests pass + meaningful progress
- About to switch contexts (backend → frontend)
- About to attempt risky changes
Don't commit when:
- Small part of larger unit
- Tests failing
- Purely scaffolding with no behavior
- Would need “WIP” commit message
Quality Check
Before creating PR:
- Run full test suite
- Run linting (use lint agent)
- Consider reviewer agents for complex changes (see settings)
- Verify all TodoWrite tasks completed
- Verify all plan checkboxes marked
Ship It
- Create final commit with attribution
- Capture screenshots (for UI changes)
- Create PR with comprehensive description
PR includes:
- Summary of what was built and why
- Testing performed
- Post-Deploy Monitoring & Validation plan
- Before/after screenshots (for UI)
- Compound Engineered badge
- Update plan status
Key Work Insights
Test continuously.

Phase 3: Review
Command: /ce:review [PR number or branch]
Purpose: Perform exhaustive code reviews using multi-agent analysis.
Time Investment: 15-45 minutes per PR (catches issues that would cost hours in production)
The Review Process
Determine Target & Setup
Identify what to review:
- PR number: /ce:review 123
- GitHub URL: /ce:review https://github.com/org/repo/pull/123
- Current branch: /ce:review
Load Review Agents
Reads compound-engineering.local.md in the project root for configured review agents.

If no settings file exists, invokes the setup skill to create one:
- Auto-detects project type (Rails, Python, TypeScript)
- Offers “Auto-configure” or “Customize” paths
- Writes settings file with appropriate agents
Parallel Agent Review
Runs all configured agents in parallel.

Always run:
- agent-native-reviewer - Verify new features are agent-accessible
- learnings-researcher - Search docs/solutions/ for related past issues
- Configured agents from the settings file, for example:
  - schema-drift-detector - Detects unrelated schema.rb changes
  - data-migration-expert - Validates ID mappings match production
  - deployment-verification-agent - Creates Go/No-Go checklist
Ultra-Thinking Deep Dive
Multiple perspectives on the code.

Stakeholder Perspective Analysis:
- Developer: Is this easy to understand and modify?
- Operations: How do I deploy and troubleshoot this?
- End User: Is the feature intuitive?
- Security: What’s the attack surface?
- Business: What’s the ROI and compliance impact?
Edge cases and failure scenarios:
- Happy path, invalid inputs, boundary conditions
- Concurrent access, scale testing, network issues
- Resource exhaustion, security attacks
- Data corruption, cascading failures
Findings Synthesis
Consolidates all agent reports:
- Collect findings from all parallel agents
- Surface learnings-researcher results (known patterns)
- Categorize by type (security, performance, architecture)
- Assign severity: 🔴 P1 (CRITICAL), 🟡 P2 (IMPORTANT), 🔵 P3 (NICE-TO-HAVE)
- Remove duplicates
- Estimate effort (Small/Medium/Large)
Create Todo Files
Uses the file-todos skill to create structured todo files for ALL findings.

File naming: NNN-<status>-<priority>-<description>.md (e.g. 001-pending-p1-vulnerability.md)

Each todo includes:
- YAML frontmatter (status, priority, tags, dependencies)
- Problem Statement
- Findings from agents with evidence/location
- Proposed Solutions (2-3 options with pros/cons/effort/risk)
- Technical Details (affected files, components)
- Acceptance Criteria
- Work Log
- Resources (PR link, documentation, similar patterns)
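A sketch of what such a todo file might look like, using the frontmatter fields and sections listed above (all values and the vulnerability details are illustrative):

```markdown
---
status: pending
priority: p1
tags: [security, review]
dependencies: []
---

## Problem Statement
SQL injection risk in the search endpoint.

## Findings
- security-sentinel: raw string interpolation in the search model's query builder

## Proposed Solutions
1. Parameterized query (effort: small, risk: low) — recommended
2. Input allowlist (effort: medium, risk: medium)

## Acceptance Criteria
- [ ] Query uses bound parameters
- [ ] Regression test covers malicious input

## Work Log

## Resources
- PR link, documentation, similar patterns
```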
Optional: End-to-End Testing
After the review summary, offers browser/iOS testing (e.g. browser-based end-to-end tests for web projects).

Key Review Insights
Multi-agent parallel review is thorough.

Phase 4: Compound
Command: /ce:compound [optional context]
Purpose: Document a recently solved problem to compound your team’s knowledge.
Time Investment: 5-10 minutes (saves 30 minutes next occurrence)
The Compounding Process
Parallel Research Phase
Launches 5 subagents in parallel (they return text data, don’t write files):
- Context Analyzer - Extracts conversation history, identifies problem type/component/symptoms
- Solution Extractor - Analyzes investigation steps, identifies root cause, extracts working solution
- Related Docs Finder - Searches docs/solutions/ for related documentation
- Prevention Strategist - Develops prevention strategies and test cases
- Category Classifier - Determines optimal category and filename
Assembly & Write Phase
After all subagents return results:
- Collect all text results
- Assemble complete markdown file
- Validate YAML frontmatter against schema
- Create directory if needed: mkdir -p docs/solutions/[category]/
- Write the SINGLE final file: docs/solutions/[category]/[filename].md
Optional Enhancement
Based on problem type, optionally invokes specialized agents to review the documentation:
- performance_issue → performance-oracle
- security_issue → security-sentinel
- database_issue → data-integrity-guardian
- Code-heavy issues → kieran-rails-reviewer + code-simplicity-reviewer
What Gets Captured
Organized documentation in docs/solutions/[category]/:
Working Solution
Preload the association.

Prevention Strategies
- Add Bullet gem to development environment
- Include N+1 check in code review checklist
- Add performance test for brief generation endpoint
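The "preload the association" fix above, as a hedged ActiveRecord sketch (the model and association names are assumed for illustration):

```ruby
# Before: N+1 — one comments query per brief
Brief.all.each { |brief| brief.comments.to_a }

# After: includes eager-loads comments in one extra query
Brief.includes(:comments).each { |brief| brief.comments.to_a }
```

Inside the loop, prefer size or length over count on the preloaded association, since count issues a fresh COUNT query per record.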
Related Issues
- Similar pattern in email_processor: docs/solutions/performance-issues/n-plus-one-email-processor.md
- Rails includes guide: https://guides.rubyonrails.org/active_record_querying.html#eager-loading
Why This Workflow Works
The workflow embodies the compound engineering philosophy:
1. Plan prevents rework
2. Review catches issues early
3. Documentation compounds
4. Knowledge persists
Workflow Tips
When to use each detail level in planning
MINIMAL: Simple bugs, small improvements, clear features
- Bug: “Fix email validation regex”
- Feature: “Add sort button to table”
- Time: 2-5 minutes to write plan
Middle level: features with moderate complexity
- Feature: "Add OAuth authentication"
- Bug: "Fix race condition in payment processing"
- Time: 10-15 minutes to write plan

Most detailed level: large or architectural work
- Feature: "Implement real-time collaboration"
- Migration: "Move from monolith to microservices"
- Time: 30+ minutes to write plan
When to use reviewer agents
Always use (configured in settings):
- Your project’s language reviewer (kieran-rails-reviewer, etc.)
- agent-native-reviewer (for agent-accessible features)
- learnings-researcher (searches past solutions)
Use when relevant:
- security-sentinel (auth, permissions, data access)
- performance-oracle (performance-critical paths)
- architecture-strategist (large refactors)
- data-integrity-guardian (migrations, data changes)

Skip heavier review when:
- Simple changes: tests + linting is sufficient
- Save thorough review for complex/risky work
How to configure review agents
Run the setup skill, then choose "Auto-configure" for quick setup or "Customize" for fine control.

Creates compound-engineering.local.md in the project root; the markdown body provides context to all review agents.
When to brainstorm vs. plan directly
Brainstorm first:
- “Should we use OAuth or build custom auth?”
- “How should we implement real-time updates?”
- “What’s the right architecture for this?”
- Multiple valid approaches exist
Plan directly:
- "Fix bug in email validation"
- “Add sort button to user table”
- “Implement OAuth following existing pattern”
- One clear approach
How to handle P1 findings
P1 (CRITICAL) blocks merge. Always fix before shipping:
- Read the todo file: 001-pending-p1-vulnerability.md
- Review Proposed Solutions (usually 2-3 options)
- Implement the recommended fix
- Test the fix
- Update todo Work Log
- Rename: 001-pending-p1-*.md → 001-complete-p1-*.md
- Push fix to PR
- Re-run /ce:review to verify
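The rename step is a simple pending → complete substitution on the filename (the helper name below is hypothetical):

```ruby
# Derive the completed-todo filename from the pending one.
def completed_todo_name(filename)
  filename.sub("-pending-", "-complete-")
end

puts completed_todo_name("001-pending-p1-vulnerability.md")
# → 001-complete-p1-vulnerability.md
```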
Advanced Workflows
Full Autonomous Workflow
- /ce:plan - Create plan
- /deepen-plan - Enhance with parallel research
- /ce:work - Implement the feature
- /ce:review - Multi-agent review
- resolve todos - Fix findings
- /test-browser - End-to-end testing
- /feature-video - Record demo
Swarm Mode Workflow
Same as /lfg, but uses swarm mode for maximum parallelism:
- Multiple agents work simultaneously
- Coordinate through shared task list
- Faster completion for complex features
Deepen Plan

The /deepen-plan command enhances the plan with:
- Best practices for each major section
- Performance optimizations
- UI/UX improvements (if applicable)
- Quality enhancements and edge cases
Run /deepen-plan after /ce:plan for maximum depth and grounding.
Next Steps
Try the Workflow
Follow the quickstart to run your first workflow cycle
Understand the Philosophy
Learn why each unit of work should make the next easier
View All Commands
Explore the complete ce:* command reference
Configure Review Agents
Set up review agents for your project