The writing-plans skill takes an approved design document and produces a detailed implementation plan. The plan is written assuming the engineer who will execute it has zero context for the codebase and questionable taste. Every task must contain everything they need: which files to touch, exact code, how to run tests, what output to expect, and what to commit. When this skill starts, the agent announces: “I’m using the writing-plans skill to create the implementation plan.”
This skill runs in the dedicated worktree created after design approval. Never plan or implement on main or master.

## Plan document header

Every plan must start with this exact header:
# [Feature Name] Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** [One sentence describing what this builds]

**Architecture:** [2-3 sentences about approach]

**Tech Stack:** [Key technologies/libraries]

---

## File structure mapping

Before defining any tasks, the plan maps out every file that will be created or modified and what each one is responsible for. This is where decomposition decisions get locked in.
- Each file has one clear responsibility
- Files that change together live together
- Split by responsibility, not by technical layer
- In existing codebases, follow established patterns
This structure directly informs the task decomposition. Each task should produce self-contained changes that make sense independently.
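A file-mapping section in a plan might look like this sketch (the feature and every path below are hypothetical):

```markdown
## File structure

- `src/ratelimit/bucket.py`: token-bucket state and refill logic
- `src/ratelimit/middleware.py`: request interception; delegates to bucket.py
- `tests/ratelimit/test_bucket.py`: unit tests for refill and exhaustion
- `tests/ratelimit/test_middleware.py`: tests that over-limit requests are rejected
```

Each file maps to one responsibility, and each test file pairs with the source file it covers, which makes the later task decomposition mechanical.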

## Bite-sized task granularity

Every step in a task is one atomic action taking 2–5 minutes. The TDD cycle is made explicit at step level:
- “Write the failing test” — one step
- “Run it to make sure it fails” — one step
- “Implement the minimal code to make the test pass” — one step
- “Run the tests and make sure they pass” — one step
- “Commit” — one step

## Task structure

Every task follows this template exactly:
### Task N: [Component Name]

**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`

- [ ] **Step 1: Write the failing test**

```python
def test_specific_behavior():
    result = function(input)
    assert result == expected
```

- [ ] **Step 2: Run test to verify it fails**

Run: `pytest tests/path/test.py::test_name -v`
Expected: FAIL with `NameError: name 'function' is not defined`

- [ ] **Step 3: Write minimal implementation**

```python
def function(input):
    return expected
```

- [ ] **Step 4: Run test to verify it passes**

Run: `pytest tests/path/test.py::test_name -v`
Expected: PASS

- [ ] **Step 5: Commit**

```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```

## Realistic example task

Here is what a real plan task looks like — a hook installation script:
### Task 1: Hook installation script

**Files:**
- Create: `src/install.ts`
- Create: `tests/install.test.ts`

- [ ] **Step 1: Write the failing test**

```typescript
import { installHook } from '../src/install';
import { existsSync } from 'fs';
import { homedir } from 'os';
import path from 'path';

test('installs hook to user config directory', async () => {
  const dest = path.join(homedir(), '.config/superpowers/hooks/post-commit');
  await installHook({ force: false });
  expect(existsSync(dest)).toBe(true);
});
```

- [ ] **Step 2: Run test to verify it fails**

Run: `npx jest tests/install.test.ts -t "installs hook" --no-coverage`
Expected: FAIL with "Cannot find module '../src/install'"

- [ ] **Step 3: Write minimal implementation**

```typescript
import { mkdirSync, copyFileSync } from 'fs';
import { homedir } from 'os';
import path from 'path';

export async function installHook({ force }: { force: boolean }) {
  const dest = path.join(homedir(), '.config/superpowers/hooks');
  mkdirSync(dest, { recursive: true });
  copyFileSync(
    path.join(__dirname, '../hooks/post-commit'),
    path.join(dest, 'post-commit')
  );
}
```

- [ ] **Step 4: Run test to verify it passes**

Run: `npx jest tests/install.test.ts -t "installs hook" --no-coverage`
Expected: PASS

- [ ] **Step 5: Commit**

```bash
git add src/install.ts tests/install.test.ts
git commit -m "feat: add hook installation script"
```

## No placeholders

Every step must contain the actual content an engineer needs. These are plan failures — never write them:
- TBD, TODO, implement later, fill in details
- “Add appropriate error handling” / “add validation” / “handle edge cases”
- “Write tests for the above” (without actual test code)
- “Similar to Task N” (repeat the code — the engineer may read tasks out of order)
- Steps that describe what to do without showing how
- References to types, functions, or methods not defined in any task

## Self-review checklist

After writing the complete plan, review it before offering execution:
1. **Spec coverage.** Skim each requirement in the spec and point to the task that implements it. Add a task for any gap.
2. **Placeholder scan.** Search for TBD, TODO, “similar to”, “add error handling”, or any other pattern from the no-placeholders list. Fix them.
3. **Type consistency.** Do type names, method signatures, and property names used in later tasks match what is defined in earlier tasks? A function called `clearLayers()` in Task 3 but `clearFullLayers()` in Task 7 is a bug.

Fix issues inline; if a spec requirement has no task, add the task.
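The placeholder scan lends itself to automation. A minimal sketch, assuming the patterns from the no-placeholders list (the script name and invocation are illustrative, not part of the skill):

```shell
# scan-plan.sh: flag placeholder patterns that the no-placeholders rule forbids.
# Usage: sh scan-plan.sh path/to/plan.md
# grep prints each offending line with its line number; the leading `!`
# inverts the exit code, so the script fails (non-zero) when any match exists.
! grep -nEi 'TBD|TODO|similar to Task|add (appropriate )?error handling|handle edge cases' "$1"
```

A clean plan produces no output and exit code 0, which makes this usable as a CI gate.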

## Plan file location

Plans default to `docs/superpowers/plans/YYYY-MM-DD-<feature-name>.md`. If the user specifies a plan location in CLAUDE.md or AGENTS.md, that preference overrides this default.
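The dated filename can be derived mechanically when the plan is saved; a sketch (the feature slug is hypothetical):

```shell
# Build the default plan path from today's date and a feature slug.
feature="hook-installer"
plan_path="docs/superpowers/plans/$(date +%Y-%m-%d)-${feature}.md"
echo "$plan_path"
```

For example, on 2025-06-01 this prints `docs/superpowers/plans/2025-06-01-hook-installer.md`.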

## Execution handoff

After saving the plan, the agent offers a choice:

> “Plan complete and saved to `docs/superpowers/plans/<filename>.md`. Two execution options:
>
> 1. **Subagent-Driven (recommended)** — I dispatch a fresh subagent per task, review between tasks, fast iteration
> 2. **Inline Execution** — Execute tasks in this session using executing-plans, batch execution with checkpoints
>
> Which approach?”

### Subagent-Driven Development

Recommended. Fresh subagent per task, two-stage review, no human-in-loop between tasks.

### Inline Execution

Same session, batch execution, human checkpoints between tasks.
