“80% of my code is written by AI, 20% is spent reviewing and correcting it.” — Andrej Karpathy
The self-correction loop optimizes for this ratio. Every correction becomes a rule that prevents future mistakes. After 50 sessions, Claude barely needs correcting.

How It Works

1. You correct Claude: point out a mistake or suboptimal behavior.
2. Claude acknowledges specifically: explains exactly what went wrong.
3. Claude proposes a rule: formatted as `[LEARN] Category: One-line rule`.
4. You approve: say “yes”, “approved”, or “add it”.
5. Rule goes into the LEARNED section: persists across all future sessions.
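The `[LEARN] Category: One-line rule` format in step 3 is simple enough to validate mechanically. A minimal sketch (the `parse_learn` helper is hypothetical, not part of any tool described here):

```python
import re

# Matches the proposed-rule format: "[LEARN] Category: One-line rule"
LEARN_RE = re.compile(r"^\[LEARN\]\s+(?P<category>[^:]+):\s+(?P<rule>.+)$")

def parse_learn(line: str):
    """Return (category, rule) for a well-formed [LEARN] proposal, else None."""
    m = LEARN_RE.match(line.strip())
    if not m:
        return None
    return m.group("category").strip(), m.group("rule").strip()
```

A malformed proposal returns `None`, which is a natural point to ask Claude to restate the rule in the expected format.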

Example Flow

User: You edited the wrong file

Claude: I edited src/utils.ts when you meant src/lib/utils.ts.

[LEARN] Navigation: Confirm full path before editing files with common names.

Should I add this?

User: yes

Claude: Added to LEARNED section. I'll confirm paths before editing from now on.

Setup

Add this to your CLAUDE.md:
## Self-Correction Protocol

When the user corrects me or I make a mistake:
1. Acknowledge specifically what went wrong
2. Propose a concise rule: `[LEARN] Category: One-line rule`
3. Wait for approval before adding to LEARNED section

### LEARNED
<!-- Auto-populated through corrections -->
The LEARNED section starts empty and grows as you work together.
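Step 5 (appending an approved rule under the `### LEARNED` heading) can be scripted. A sketch under the assumption that the heading appears exactly once in CLAUDE.md; the function name is illustrative:

```python
def add_learned_rule(claude_md: str, category: str, rule: str) -> str:
    """Append an approved rule directly under the '### LEARNED' heading."""
    marker = "### LEARNED"
    head, sep, tail = claude_md.partition(marker)
    if not sep:
        raise ValueError("CLAUDE.md has no LEARNED section")
    entry = f"\n- [LEARN] {category}: {rule}"
    return head + marker + entry + tail
```

Keeping the write append-only preserves the order in which rules were learned, which makes later pruning reviews easier.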

Trigger Phrases

These phrases activate the learning protocol:
  • “Add that to your rules”
  • “Remember this”
  • “Don’t do that again”
  • “Learn from this”
  • “Make this a rule”
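Detecting these phrases is a case-insensitive substring check. A minimal sketch (the helper name is an assumption, not an actual API):

```python
# Trigger phrases from the list above, lowercased for matching.
TRIGGERS = (
    "add that to your rules",
    "remember this",
    "don't do that again",
    "learn from this",
    "make this a rule",
)

def is_learning_trigger(message: str) -> bool:
    """True if the user message contains any learning trigger phrase."""
    text = message.lower()
    return any(t in text for t in TRIGGERS)
```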

Learning Categories

Organize rules by category (for example: Testing, Git, Quality, Navigation) for easier maintenance.

Rule Format

Good rules are:
Specific: “Confirm full path before editing files with common names”
Actionable: “Run typecheck after interface changes”
Concise: One line, clear directive
Categorized: Grouped for easy scanning

Correction Workflow

When you notice a mistake:
User: You forgot to run the tests

Claude: You're right. I made changes without verifying them.

[LEARN] Testing: Run relevant tests after each file change

Should I add this to LEARNED?

Compound Improvement

The magic happens over time: each correction prevents similar future mistakes. After 50 sessions, corrections drop by roughly 80%.

Database Integration

Pro Workflow stores learnings in SQLite with full-text search:
# Search all learnings
/search testing

# Search exact phrase
/search "file paths"

# List all learnings
/list

# Surface relevant learnings
/replay <topic>
The /replay command automatically surfaces past learnings relevant to your current task.
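A full-text learning store like this can be sketched with SQLite's FTS5 extension (bundled with Python's `sqlite3` in most builds). The table and column names below are assumptions for illustration, not the actual Pro Workflow schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# FTS5 virtual table over category + rule text enables MATCH queries,
# including exact phrases in double quotes ('"file paths"').
con.execute("CREATE VIRTUAL TABLE learnings USING fts5(category, rule)")
con.executemany(
    "INSERT INTO learnings (category, rule) VALUES (?, ?)",
    [
        ("Testing", "Run tests after each file change, not in batch at end"),
        ("Git", "Check for uncommitted changes before branch operations"),
    ],
)

def search(query: str):
    """Roughly what `/search <query>` would do."""
    rows = con.execute(
        "SELECT category, rule FROM learnings WHERE learnings MATCH ?",
        (query,),
    )
    return rows.fetchall()
```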

Hook Integration

Hooks automatically prompt for learnings:

SessionEnd Hook
{
  "hooks": {
    "SessionEnd": "Review session. Any corrections to capture as [LEARN] rules?"
  }
}
PostToolUse Hook (after test failures)
{
  "hooks": {
    "PostToolUse": "If tests failed, suggest [LEARN] pattern to prevent similar failures"
  }
}

Learning from Test Failures

When tests fail:
Test failed: Expected camelCase, got snake_case

Claude: The test failed because I used snake_case instead of camelCase.

[LEARN] Code Quality: Project uses camelCase for all identifiers

Should I add this?
Don’t add learnings for one-off issues. Only capture patterns that will recur.

Pruning Stale Rules

Review your LEARNED section weekly.

Remove a rule if:
  • The pattern changed (no longer relevant)
  • Too specific to one file
  • Redundant with another rule
  • Never actually applied
Keep a rule if:
  • Applied frequently
  • Prevents common mistakes
  • Represents project conventions
  • Clarifies ambiguous situations

Split Memory Pattern

For large projects, split learnings into categories:
.claude/
├── CLAUDE.md          # Entry point
├── LEARNED.md         # General learnings
├── AGENTS.md          # Workflow learnings
└── SOUL.md            # Style preferences
CLAUDE.md imports them:
# MyProject

## Learnings
!cat .claude/LEARNED.md

## Agent Rules
!cat .claude/AGENTS.md

## Style Guide
!cat .claude/SOUL.md

Analytics and Insights

Track correction patterns:
/insights
Shows:
  • Correction frequency over time
  • Hot categories (most corrections)
  • Cold rules (learned but never applied)
  • Improvement trajectory (fewer corrections over time)
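The "hot categories" insight above is a frequency count over stored `[LEARN]` entries. A minimal sketch of that computation (the function is hypothetical, not the actual `/insights` implementation):

```python
from collections import Counter

def hot_categories(rules: list[str]) -> list[tuple[str, int]]:
    """Count corrections per category, most-corrected first."""
    cats = Counter(
        r.split("]", 1)[1].split(":", 1)[0].strip()  # text between "]" and ":"
        for r in rules
    )
    return cats.most_common()
```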

Examples from Production Use

[LEARN] Testing: Run tests after each file change, not in batch at end
[LEARN] Testing: Test error cases explicitly, not just happy paths
[LEARN] Testing: Mock external dependencies (APIs, databases) in unit tests
[LEARN] Testing: Check test coverage increased, not decreased
[LEARN] Git: Run lint + typecheck + test before commit
[LEARN] Git: Use conventional commit format: type(scope): message
[LEARN] Git: Check for uncommitted changes before branch operations
[LEARN] Git: Never commit console.log, debugger, or TODO comments
[LEARN] Quality: Read existing code before writing new patterns
[LEARN] Quality: Prefer existing patterns over new abstractions
[LEARN] Quality: Remove unused imports after refactoring
[LEARN] Quality: Update tests when interface changes

Best Practices

Do

Capture corrections immediately while fresh
Use specific, actionable language
Organize by category for easy scanning
Review and prune weekly
Use /replay to surface relevant learnings

Don’t

Capture one-off issues that won’t recur
Let stale rules accumulate; prune weekly
Add rules to LEARNED without approval
Write vague rules that can’t be acted on

Integration with Workflows

The self-correction loop integrates with: Multi-Phase Development
  • Capture learnings at the end of each feature
  • Review corrections during wrap-up
Wrap-Up Ritual
  • /wrap-up prompts for learnings
  • Surface corrections from the session
Agent Teams
  • Each agent can propose learnings
  • Lead consolidates into LEARNED section

Next Steps

80/20 Review

Learn when to review and when to trust

Multi-Phase Development

Apply learnings in structured workflows
