The sdd-apply sub-agent (v2.0) implements tasks by writing actual code. It follows specs and design strictly, and supports both standard and TDD workflows.

Metadata

| Field | Type | Value |
|---------|--------|--------------------------|
| name | string | `sdd-apply` |
| version | string | `2.0` |
| author | string | `gentleman-programming` |
| license | string | MIT |

When It’s Triggered

The orchestrator launches sdd-apply when:
  • User runs /sdd-apply or /sdd-continue
  • User wants to implement specific tasks from a change
  • Tasks are ready and user wants code written

What It Receives

From the orchestrator:
  • Change name
  • Specific task(s) to implement (e.g., “Phase 1, tasks 1.1-1.3”)
  • Artifact store mode (engram | openspec | none)

What It Does

Step 1: Read Context

Before writing ANY code:
  1. Read the specs — understand WHAT the code must do
  2. Read the design — understand HOW to structure the code
  3. Read existing code in affected files — understand current patterns
  4. Check the project’s coding conventions from config.yaml

Step 2: Detect Implementation Mode

The agent determines whether the project uses TDD:
Detect TDD mode from (in priority order):
├── openspec/config.yaml → rules.apply.tdd (true/false — highest priority)
├── User's installed skills (e.g., tdd/SKILL.md exists)
├── Existing test patterns in the codebase (test files alongside source)
└── Default: standard mode (write code first, then verify)

IF TDD mode is detected → use TDD Workflow
IF standard mode → use Standard Workflow
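
The highest-priority source is the project config. A fragment like the following would force TDD mode — `rules.apply.tdd` and `rules.apply.test_command` are the keys named in this doc; the surrounding layout and example values are illustrative:

```yaml
# openspec/config.yaml (illustrative fragment)
rules:
  apply:
    tdd: true                       # forces TDD mode (highest priority)
    test_command: "npx vitest run"  # used later by test runner detection
```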

TDD Workflow (RED → GREEN → REFACTOR)

When TDD is active, EVERY task follows this cycle:
FOR EACH TASK:
├── 1. UNDERSTAND
│   ├── Read the task description
│   ├── Read relevant spec scenarios (these are your acceptance criteria)
│   ├── Read the design decisions (these constrain your approach)
│   └── Read existing code and test patterns

├── 2. RED — Write a failing test FIRST
│   ├── Write test(s) that describe the expected behavior from the spec scenarios
│   ├── Run tests — confirm they FAIL (this proves the test is meaningful)
│   └── If test passes immediately → the behavior already exists or the test is wrong

├── 3. GREEN — Write the minimum code to pass
│   ├── Implement ONLY what's needed to make the failing test(s) pass
│   ├── Run tests — confirm they PASS
│   └── Do NOT add extra functionality beyond what the test requires

├── 4. REFACTOR — Clean up without changing behavior
│   ├── Improve code structure, naming, duplication
│   ├── Run tests again — confirm they STILL PASS
│   └── Match project conventions and patterns

├── 5. Mark task as complete [x] in tasks.md
└── 6. Note any issues or deviations
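
The RED and GREEN steps can be sketched in miniature. This is a hedged illustration, not the agent's code: the spec scenario and `resolveTheme` are hypothetical, and the "test" is a plain function so the whole cycle fits in one runnable file.

```typescript
// A minimal sketch of one RED → GREEN cycle. The spec scenario and
// `resolveTheme` are hypothetical — not part of the sdd-apply agent itself.
import { strictEqual } from "node:assert";

type Theme = "light" | "dark";

// Acceptance test written FIRST, from a hypothetical spec scenario:
// "resolveTheme returns the stored preference, falling back to 'light'."
function testResolveTheme(resolve: (stored: string | null) => Theme): boolean {
  try {
    strictEqual(resolve("dark"), "dark");
    strictEqual(resolve(null), "light");
    return true; // GREEN
  } catch {
    return false; // RED
  }
}

// RED: run the test against a stub — it must fail, proving the test is meaningful.
const stub = (_stored: string | null): Theme => "light";
console.log("RED:", testResolveTheme(stub) ? "unexpected pass" : "failed as expected");

// GREEN: the minimum implementation that makes the test pass — nothing more.
const resolveTheme = (stored: string | null): Theme =>
  stored === "dark" ? "dark" : "light";
console.log("GREEN:", testResolveTheme(resolveTheme) ? "passed" : "still failing");
```

The REFACTOR step would then clean up naming or structure while re-running the same test to confirm behavior is unchanged.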

Standard Workflow

When TDD is not active:
FOR EACH TASK:
├── Read the task description
├── Read relevant spec scenarios (these are your acceptance criteria)
├── Read the design decisions (these constrain your approach)
├── Read existing code patterns (match the project's style)
├── Write the code
├── Mark task as complete [x] in tasks.md
└── Note any issues or deviations

Step 3: Detect Test Runner

In TDD mode, the agent detects which test runner to use:
Detect test runner from:
├── openspec/config.yaml → rules.apply.test_command (highest priority)
├── package.json → scripts.test
├── pyproject.toml / pytest.ini → pytest
├── Makefile → make test
└── Fallback: report that tests couldn't be run automatically
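
The priority order above amounts to a first-match lookup. The sketch below assumes a hypothetical `Project` shape summarizing what was found on disk; the real agent inspects the files directly:

```typescript
// Hypothetical summary of what the agent found in the project.
interface Project {
  configTestCommand?: string;      // openspec/config.yaml → rules.apply.test_command
  packageJsonTestScript?: string;  // package.json → scripts.test
  hasPytestConfig?: boolean;       // pyproject.toml / pytest.ini present
  hasMakefileTestTarget?: boolean; // Makefile with a test target
}

// First match wins, mirroring the priority order above.
function detectTestRunner(p: Project): string | null {
  if (p.configTestCommand) return p.configTestCommand; // highest priority
  if (p.packageJsonTestScript) return "npm test";
  if (p.hasPytestConfig) return "pytest";
  if (p.hasMakefileTestTarget) return "make test";
  return null; // fallback: report that tests couldn't be run automatically
}
```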

Step 4: Mark Tasks Complete

Updates `tasks.md`, changing `- [ ]` to `- [x]` for completed tasks:
## Phase 1: Foundation

- [x] 1.1 Create `internal/auth/middleware.go` with JWT validation
- [x] 1.2 Add `AuthConfig` struct to `internal/config/config.go`
- [ ] 1.3 Add auth routes to `internal/server/server.go`  ← still pending
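
The checkbox update is a targeted line rewrite. A hypothetical helper (not the agent's actual code) might look like this — note the trailing space after the task ID, which keeps `1.1` from matching `1.12`:

```typescript
// Flip "- [ ]" to "- [x]" for exactly one task ID in tasks.md content,
// leaving every other line untouched. Illustrative sketch only.
function markTaskComplete(tasksMd: string, taskId: string): string {
  return tasksMd
    .split("\n")
    .map((line) =>
      line.trimStart().startsWith(`- [ ] ${taskId} `)
        ? line.replace("- [ ]", "- [x]")
        : line
    )
    .join("\n");
}
```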

Step 5: Return Summary

Returns a result envelope with implementation progress.

Result Envelope Example (TDD Mode)

## Implementation Progress

**Change**: add-dark-mode
**Mode**: TDD

### Completed Tasks
- [x] 1.1 Convert all color values in `src/styles/theme.css` to CSS variables
- [x] 1.2 Define dark theme color palette in `src/styles/theme.css`

### Files Changed
| File | Action | What Was Done |
|------|--------|---------------|
| `src/styles/theme.css` | Modified | Converted 30 hardcoded colors to CSS variables, added dark palette |

### Tests (TDD mode)
| Task | Test File | RED (fail) | GREEN (pass) | REFACTOR |
|------|-----------|------------|--------------|----------|
| 1.1 | `src/styles/__tests__/theme.test.ts` | ✅ Failed as expected | ✅ Passed | ✅ Clean |

### Deviations from Design
None — implementation matches design.

### Issues Found
None.

### Remaining Tasks
- [ ] 1.3 Create `src/contexts/ThemeContext.tsx`
- [ ] 1.4 Create `src/hooks/useTheme.ts`

### Status
2/19 tasks complete. Ready for next batch.

Result Envelope Example (Standard Mode)

## Implementation Progress

**Change**: add-dark-mode
**Mode**: Standard

### Completed Tasks
- [x] 1.1 Convert all color values in `src/styles/theme.css` to CSS variables
- [x] 1.2 Define dark theme color palette in `src/styles/theme.css`

### Files Changed
| File | Action | What Was Done |
|------|--------|---------------|
| `src/styles/theme.css` | Modified | Converted 30 hardcoded colors to CSS variables, added dark palette |

### Deviations from Design
None — implementation matches design.

### Issues Found
None.

### Remaining Tasks
- [ ] 1.3 Create `src/contexts/ThemeContext.tsx`
- [ ] 1.4 Create `src/hooks/useTheme.ts`

### Status
2/19 tasks complete. Ready for next batch.

Rules

  • ALWAYS read specs before implementing — specs are your acceptance criteria
  • ALWAYS follow the design decisions — don’t freelance a different approach
  • ALWAYS match existing code patterns and conventions in the project
  • In openspec mode, mark tasks complete in tasks.md AS you go, not at the end
  • If you discover the design is wrong or incomplete, NOTE IT in your return summary — don’t silently deviate
  • If a task is blocked by something unexpected, STOP and report back
  • NEVER implement tasks that weren’t assigned to you
  • Load and follow any relevant coding skills for the project stack (e.g., react-19, typescript, django-drf, tdd, pytest, vitest) if available in the user’s skill set
  • Apply any rules.apply from openspec/config.yaml
  • If TDD mode is detected, ALWAYS follow the RED → GREEN → REFACTOR cycle — never skip RED (writing the failing test first)
  • When running tests during TDD, run ONLY the relevant test file/suite, not the entire test suite (for speed)
  • Return a structured envelope with: status, executive_summary, detailed_report, artifacts, next_recommended, and risks
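
The structured envelope named in the last rule could be typed roughly as follows. The field names come from this doc; their types and the example values are assumptions:

```typescript
// Sketch of the result envelope fields listed in the rules above.
// Types and example values are illustrative — the source names only the keys.
interface ResultEnvelope {
  status: string;            // e.g., "complete", "partial", "blocked"
  executive_summary: string; // one-paragraph summary for the orchestrator
  detailed_report: string;   // the markdown progress report
  artifacts: string[];       // e.g., paths of files created or modified
  next_recommended: string;  // suggested next step
  risks: string[];           // known risks or open questions
}

const example: ResultEnvelope = {
  status: "partial",
  executive_summary: "2/19 tasks complete; theme variables in place.",
  detailed_report: "## Implementation Progress ...",
  artifacts: ["src/styles/theme.css"],
  next_recommended: "Implement tasks 1.3-1.4 (ThemeContext, useTheme).",
  risks: [],
};
```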

Version 2.0 Features

v2.0 introduced:

TDD Support

  • Automatic TDD mode detection
  • RED → GREEN → REFACTOR workflow
  • Test runner detection and execution
  • Verification that tests fail before passing

Test Execution

  • Runs tests during implementation (not just verification)
  • Reports test results in the progress envelope
  • Detects test commands from config or project files

Skill Integration

  • Loads user’s installed coding skills (e.g., tdd/SKILL.md, pytest/SKILL.md, vitest/SKILL.md)
  • Follows skill-specific patterns for writing tests
  • Adapts to project-specific conventions

Enhanced Reporting

  • TDD cycle status table (RED/GREEN/REFACTOR)
  • Test file references
  • Deviations from design explicitly called out
