
Overview

Replay Learnings surfaces relevant past lessons before you start working on a task. It searches your correction history, recalls past mistakes, and applies prior patterns — like muscle memory for your coding sessions.
Think before you act. Surface what you’ve learned before making the same mistake twice.

Trigger

Use when:
  • Starting a new task
  • Saying “what do I know about”, “before I start”, “remind me about”
  • After switching contexts or taking a break
  • Working on an area with high past correction rate
/replay what do I know about auth
/replay before I start the API refactor
/pro-workflow:replay authentication patterns

Workflow

Step 1: Extract Keywords

Extract keywords from the task description. Examples:
  • “auth refactor” → auth, middleware, refactor
  • “add webhook retry” → webhook, retry, async
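The extraction step can be sketched in shell. This is a minimal sketch: the stopword list and the whitespace-separated task format are assumptions, not part of the skill spec.

```shell
# Minimal keyword-extraction sketch (stopword list is an assumption):
# lowercase the task description, split on non-alphanumerics,
# drop stopwords, and de-duplicate.
task="auth refactor for the API"
echo "$task" \
  | tr '[:upper:]' '[:lower:]' \
  | tr -c '[:alnum:]' '\n' \
  | grep -vE '^(the|for|a|an|to|of|and|)$' \
  | sort -u
```

For "auth refactor for the API" this prints api, auth, refactor, one per line.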
Step 2: Search Learnings

Search memory for matching patterns
grep -i "auth\|middleware" .claude/LEARNED.md 2>/dev/null
grep -i "auth\|middleware" .claude/learning-log.md 2>/dev/null
grep -A2 "\[LEARN\]" CLAUDE.md 2>/dev/null | grep -i "auth\|middleware"
Step 3: Check Session History

Look for similar work in past sessions
  • What was the correction rate?
  • How many edits vs corrections?
  • Any recurring patterns?
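The correction rate is simply corrections divided by total edits. A quick sketch with illustrative counts (how edits and corrections are logged per session is an assumption):

```shell
# Correction rate = corrections / edits. Counts are illustrative;
# the per-session logging format is an assumption.
edits=15
corrections=4
awk -v e="$edits" -v c="$corrections" \
  'BEGIN { printf "%.1f%% correction rate\n", c / e * 100 }'
```

With 15 edits and 4 corrections this prints "26.7% correction rate".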
Step 4: Rank by Relevance

Surface the top learnings ranked by:
  • Keyword match strength
  • Application frequency
  • Recency (newer = more relevant)
  • Correction impact (high correction rate = important)
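One way to combine those signals is a weighted score per learning. A hypothetical sketch, where the weights, the input field layout, and the recency window are all assumptions:

```shell
# Hypothetical ranking sketch. Input columns (all assumed):
#   keyword_hits  applied_count  age_days  id
# Score rewards keyword matches and frequency, and decays with age.
printf '%s\n' \
  '2 8 30 mock-auth-apis' \
  '1 5 10 middleware-location' \
  '1 3 90 error-boundaries' \
  | awk '{ score = $1 * 10 + $2 * 2 + (100 - $3) / 10; print score, $4 }' \
  | sort -rn
```

With these weights, two keyword hits applied 8x outranks a fresher but less-applied learning.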
Step 5: Provide Briefing

Output a structured briefing with:
  • Top 3-5 relevant learnings
  • Session history for similar work
  • Suggested approach based on patterns
If no learnings match, say so explicitly rather than forcing irrelevant results. Suggest using the scout agent to explore first.

Output Format

REPLAY BRIEFING: <task>
=======================

Past learnings (ranked by relevance):
  1. [Testing] Always mock external APIs in auth tests (applied 8x)
     Mistake: Called live API in tests, caused flaky failures
     
  2. [Navigation] Auth middleware is in src/middleware/ not src/auth/ (applied 5x)
  
  3. [Quality] Add error boundary around auth state changes (applied 3x)

Session history for similar work:
  - 2026-02-01: auth refactor — 23 edits, 2 corrections (8.7% rate) ✓
  - 2026-01-28: auth middleware — 15 edits, 4 corrections (26.7% rate) ⚠️
    ^ Higher correction rate — review patterns before starting

Suggested approach:
  - Mock external APIs (learning #1)
  - Check src/middleware/ first for auth code (learning #2)
  - Plan error handling strategy upfront (learning #3)

Examples

Example 1: Auth Refactor

User: "Before I start refactoring auth, what do I know about it?"

Claude: Searching learnings for 'auth', 'refactor'...

REPLAY BRIEFING: auth refactor
=======================

Past learnings (3 found):
  1. [Testing] Mock auth provider in tests, don't call live OAuth (applied 8x)
     Mistake: Tests were flaky due to network dependency
     Correction: Added jest.mock('@/lib/auth-provider')
     
  2. [Navigation] Auth middleware is in src/middleware/auth.ts (applied 5x)
     Mistake: Searched in src/auth/ first, wasted time
     
  3. [Architecture] Keep session logic in middleware, not components (applied 3x)
     Mistake: Put session validation in React components
     Correction: Moved to middleware, components just read state

Session history:
  - 2026-02-15: OAuth integration — 18 edits, 1 correction (5.6% rate) ✓
  - 2026-01-28: Auth middleware refactor — 15 edits, 4 corrections (26.7% rate) ⚠️
  
  ⚠️ High correction rate on Jan 28. Review what went wrong:
     - Forgot to mock auth provider (learning #1)
     - Looked in wrong directory first (learning #2)
     - Put logic in components instead of middleware (learning #3)

Suggested approach:
  1. Start in src/middleware/auth.ts (learning #2)
  2. Set up test mocks first (learning #1)
  3. Keep session logic in middleware (learning #3)
  4. Review Jan 28 session notes for additional context

Ready to proceed with this context?

Example 2: No Learnings Found

User: "What do I know about GraphQL subscriptions?"

Claude: Searching learnings for 'graphql', 'subscription'...

REPLAY BRIEFING: GraphQL subscriptions
=======================

No past learnings found for this topic.

Session history:
  - No previous work on GraphQL subscriptions

This is new territory. Suggested approach:
  1. Use scout agent to explore codebase for existing GraphQL patterns
  2. Check documentation for subscription setup
  3. Start with a spike/prototype approach
  4. Capture learnings as you go with /learn-rule

Should I launch a scout agent to explore first?

Example 3: Database Migrations

User: "Replay learnings about database migrations"

Claude: Searching learnings for 'database', 'migration'...

REPLAY BRIEFING: database migrations
=======================

Past learnings (4 found):
  1. [Quality] Always test migrations on staging data first (applied 12x)
     Mistake: Ran migration on prod, corrupted data
     Correction: Added staging migration step to runbook
     
  2. [Git] Create migration + rollback in same commit (applied 7x)
     Mistake: Pushed migration without rollback, stuck when it failed
     Correction: Always write `up` and `down` migrations together
     
  3. [Testing] Test migration rollback, not just forward (applied 5x)
     Mistake: Rollback had SQL error, couldn't revert
     
  4. [Architecture] Never modify old migrations, create new ones (applied 4x)
     Mistake: Edited old migration, broke other branches

Session history:
  - 2026-02-28: Add user_role column — 12 edits, 0 corrections (0% rate) ✓
  - 2026-02-10: Add indexes to orders — 8 edits, 1 correction (12.5% rate) ✓
  - 2026-01-15: Schema refactor — 25 edits, 5 corrections (20% rate) ⚠️

✓ Recent migrations went well. Pattern is working.

Suggested approach:
  1. Write both up and down migrations (learning #2)
  2. Test rollback in local environment (learning #3)
  3. Run on staging first (learning #1)
  4. Don't modify existing migrations (learning #4)
  5. Follow the pattern from Feb 28 session (zero corrections)

Confident to proceed with migrations?

Example 4: High Correction Area

User: "What have I learned about React performance optimization?"

Claude: Searching learnings for 'react', 'performance', 'optimization'...

REPLAY BRIEFING: React performance optimization
=======================

Past learnings (5 found):
  1. [Performance] Use React.memo for expensive pure components (applied 6x)
     Mistake: Component re-rendered on every parent update
     
  2. [Performance] Memoize expensive computations with useMemo (applied 5x)
     Mistake: Recalculating on every render, caused lag
     
  3. [Editing] Don't prematurely optimize; profile first (applied 4x)
     Mistake: Added useMemo everywhere, made code harder to read
     Correction: Use React DevTools Profiler to find actual bottlenecks
     
  4. [Performance] Debounce input handlers, don't throttle (applied 3x)
     Mistake: Used throttle for search, felt laggy
     
  5. [Testing] Test performance with realistic data size (applied 2x)
     Mistake: Tested with 10 items, prod has 10,000

Session history:
  - 2026-03-01: Optimize table rendering — 18 edits, 5 corrections (27.8% rate) ⚠️
  - 2026-02-20: Fix slow search — 12 edits, 3 corrections (25% rate) ⚠️
  - 2026-02-05: Dashboard performance — 20 edits, 6 corrections (30% rate) ⚠️

⚠️ HIGH CORRECTION AREA (avg 27.6% correction rate)

Common mistakes in past sessions:
  - Optimizing without profiling first (learning #3)
  - Testing with small datasets (learning #5)
  - Over-using memoization (learning #3)

Suggested approach:
  1. ⚠️ PROFILE FIRST with React DevTools (learning #3)
  2. Test with realistic data (learning #5)
  3. Start with the highest impact optimizations
  4. Apply memo/useMemo only where profiler shows benefit
  5. Capture new learnings; this area needs more patterns

Proceed with extra caution in this area?

Guardrails

Rank by Relevance

Rank by relevance, not by recency alone. A six-month-old learning about auth is more relevant to an auth task than yesterday’s CSS fix.

Include Mistake Context

Show the original mistake so the learning is actionable, not just “do X”.

Flag High Correction Areas

If past sessions in this area had high correction rates, warn the user to proceed carefully.

No Results is OK

If no learnings match, say so. Don’t force irrelevant results. Suggest exploration instead.

Integration with Pro Workflow

Learn Rule

Replay surfaces rules captured by learn-rule

Orchestrate

Multi-phase development starts with replay in Research phase

Insights

Session history and correction rates come from insights data

Session Handoff

Resume command can trigger replay for context loading

Configuration

Auto-Replay on Session Start

Add to CLAUDE.md:
## Session Start

Before starting any task touching more than 5 files:
1. Run replay-learnings with task keywords
2. Review top 3 learnings
3. Acknowledge high correction areas
4. Proceed with context

Custom Relevance Scoring

In settings (Claude Code):
{
  "learnings": {
    "replay": {
      "rankBy": ["relevance", "frequency", "recency"],
      "maxResults": 5,
      "includeMistakes": true
    }
  }
}

Best Practices

  • Make it a habit: replay before starting any task in an area you’ve worked on before.
  • If the correction rate was >20% in past sessions, review those patterns carefully before starting.
  • If replay surfaces a stale or incorrect learning, update or remove it immediately.
  • Combine with the orchestrate skill: replay in the Research phase, apply in the Implementation phase.

Advanced: Searchable Learnings

With the Pro Workflow plugin (SQLite storage):
# Search by keyword
/search testing
/search "auth middleware"

# List by category
/list --category Testing

# Show application frequency
/insights

# Replay specific category
/replay --category Performance
Output:
REPLAY BRIEFING: Performance category
=======================

All Performance learnings (8 total):
  1. Use React.memo for expensive pure components (applied 6x)
  2. Memoize expensive computations with useMemo (applied 5x)
  3. Use indexes for frequently queried fields (applied 4x)
  ...
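Under the hood, a SQLite-backed /search could reduce to a single query. This is a hypothetical sketch: the database path, table name, and schema are assumptions, not the plugin’s actual storage layout.

```shell
# Hypothetical sketch of a SQLite-backed search.
# Database path, table name, and schema are all assumptions.
mkdir -p .claude
db=".claude/learnings.db"
sqlite3 "$db" "CREATE TABLE IF NOT EXISTS learnings(
  category TEXT, rule TEXT, applied_count INTEGER);"
sqlite3 "$db" \
  "SELECT category, rule, applied_count
     FROM learnings
    WHERE rule LIKE '%auth%'
    ORDER BY applied_count DESC
    LIMIT 5;"
```

Ordering by applied_count mirrors the frequency signal used in the relevance ranking.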

Troubleshooting

No Learnings Returned

Check:
  1. Are learnings stored in the correct location (LEARNED.md or the database)?
  2. Try broader keywords: “auth” instead of “OAuth middleware”
  3. Check if learnings exist: /list

Irrelevant Results

Refine search:
# Too broad
/replay about testing

# More specific
/replay about testing auth endpoints

Stale Learnings Surfaced

Remove outdated learnings:
# View all learnings
/list

# Remove stale ones from LEARNED.md or database
# Or mark as archived
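One way to archive instead of delete, assuming a LEARNED.md layout with one learning per line (the file names and the matched text are illustrative, not part of the skill spec):

```shell
# Illustrative sketch: move matching lines from LEARNED.md into an
# archive file instead of deleting them. Assumes one learning per
# line; multi-line entries would need a smarter filter.
pattern="Called live API in tests"
grep -F "$pattern" .claude/LEARNED.md >> .claude/LEARNED.archive.md
grep -vF "$pattern" .claude/LEARNED.md > .claude/LEARNED.tmp
mv .claude/LEARNED.tmp .claude/LEARNED.md
```

Archiving keeps the history searchable while stopping the stale rule from surfacing in replays.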

Next Steps

Master Learn Rule

Capture corrections to build your learning database

View Insights

Analyze learning patterns and correction trends

Try Orchestrate

Use replay in Research phase of multi-phase development

Explore Pro Workflow

See the complete workflow system
