The /speckit.tasks command transforms your technical plan into a complete, executable task breakdown organized by user stories with clear dependencies and parallelization opportunities.

Purpose

Create implementation-ready task lists:
  • Organize tasks by user story (independently testable increments)
  • Define clear dependencies and execution order
  • Identify parallel execution opportunities
  • Map tasks to specific file paths
  • Enable incremental delivery (MVP first, then enhancements)
  • Support Test-Driven Development (optional)
Tasks are organized by user story (not by layer or component) to enable independent implementation and testing of each story as a complete feature slice.

Usage

# Generate tasks from the current plan
/speckit.tasks

# Generate tasks with context hints
/speckit.tasks Focus on getting to MVP quickly

# Generate tasks with TDD emphasis
/speckit.tasks We want comprehensive test coverage

How It Works

1. Setup

Runs a prerequisite check and parses:
  • FEATURE_DIR: Current feature directory
  • AVAILABLE_DOCS: List of available design documents
  • Validates plan.md and spec.md exist
2. Load Design Documents

Reads available artifacts:
  • Required: plan.md (tech stack, structure), spec.md (user stories with priorities)
  • Optional: data-model.md (entities), contracts/ (interfaces), research.md (decisions), quickstart.md (scenarios)
Note: Not all projects have all documents
3. Extract Information

Analyzes design documents:
  • User stories from spec.md with priorities (P1, P2, P3)
  • Tech stack and project structure from plan.md
  • Entities from data-model.md (if exists)
  • Interface contracts from contracts/ (if exists)
  • Technical decisions from research.md (if exists)
4. Generate Task Phases

Creates task breakdown:
  • Phase 1: Setup (project initialization, structure)
  • Phase 2: Foundational (blocking prerequisites for ALL user stories)
  • Phase 3+: One phase per user story (in priority order)
  • Final Phase: Polish & cross-cutting concerns
Each user story phase includes:
  • Story goal and independent test criteria
  • Tests (if requested) written FIRST
  • Implementation tasks (models → services → interfaces)
5. Define Dependencies

Documents execution order:
  • Phase-level dependencies (Setup → Foundational → Stories → Polish)
  • Within-story dependencies (Tests → Models → Services → Endpoints)
  • Cross-story dependencies (rare - stories should be independent)
6. Identify Parallelization

Marks tasks with [P] that can run concurrently:
  • Different files
  • No dependencies on incomplete tasks
  • Same layer across different user stories
7. Format Validation

Ensures ALL tasks follow checklist format:
  • - [ ] checkbox
  • [TaskID] sequential (T001, T002, etc.)
  • [P] marker if parallelizable
  • [Story] label for user story phases (US1, US2, US3)
  • Description with exact file path
8. Generate tasks.md

Writes tasks file using template structure with:
  • All phases properly organized
  • Dependency graph
  • Parallel execution examples
  • Implementation strategy (MVP first)
9. Report

Outputs:
  • Total task count
  • Task count per user story
  • Parallel opportunities
  • Independent test criteria per story
  • Suggested MVP scope
  • Format validation confirmation

Task Format

Every task MUST follow this strict format:
- [ ] [TaskID] [P?] [Story?] Description with file path

Format Components

1. Checkbox

Always start with - [ ] (markdown checkbox)
2. Task ID

Sequential number (T001, T002, T003) in execution order
3. [P] Marker (Optional)

Include ONLY if the task is parallelizable:
  • Different files than other tasks
  • No dependencies on incomplete tasks
4. [Story] Label (Conditional)

Required for user story phases only:
  • Format: [US1], [US2], [US3]
  • Maps to user stories from spec.md
  • Setup/Foundational/Polish: NO story label
5. Description

Clear action with exact file path

Examples

- [ ] T001 Create project structure per implementation plan
- [ ] T005 [P] Implement authentication middleware in src/middleware/auth.py
- [ ] T012 [P] [US1] Create User model in src/models/user.py
- [ ] T014 [US1] Implement UserService in src/services/user_service.py
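If you post-process tasks.md (for dashboards or tooling), the strict format is easy to validate mechanically. Below is a minimal sketch in Python; the regex and helper name are illustrative, not part of the generated output:

```python
import re

# Hypothetical validator for the task-line format described above.
TASK_RE = re.compile(
    r"^- \[[ xX]\] "               # markdown checkbox (open or checked)
    r"(?P<id>T\d{3}) "             # sequential task ID (T001, T002, ...)
    r"(?:\[(?P<parallel>P)\] )?"   # optional [P] parallel marker
    r"(?:\[(?P<story>US\d+)\] )?"  # optional [USn] user story label
    r"(?P<desc>.+)$"               # description, ideally with a file path
)

def parse_task(line):
    """Return the task's components as a dict, or None if malformed."""
    m = TASK_RE.match(line.strip())
    return m.groupdict() if m else None
```

Running `parse_task` on the T012 example above yields the ID `T012`, the `P` marker, the `US1` label, and the description; a line missing its checkbox returns `None`.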

Task Organization

Phase 1: Setup

Project initialization (no story labels):
## Phase 1: Setup (Shared Infrastructure)

**Purpose**: Project initialization and basic structure

- [ ] T001 Create project structure per implementation plan
- [ ] T002 Initialize Python project with FastAPI dependencies
- [ ] T003 [P] Configure linting (flake8, black) and formatting tools
- [ ] T004 [P] Setup pre-commit hooks for code quality
- [ ] T005 Configure environment variables and .env.example

Phase 2: Foundational

Blocking prerequisites that ALL stories need (no story labels):
## Phase 2: Foundational (Blocking Prerequisites)

**Purpose**: Core infrastructure that MUST be complete before ANY user story

**⚠️ CRITICAL**: No user story work can begin until this phase is complete

- [ ] T006 Setup database schema and Alembic migrations framework
- [ ] T007 [P] Create base model class with common fields (id, created_at, updated_at)
- [ ] T008 [P] Implement database connection pooling in src/db/connection.py
- [ ] T009 [P] Setup FastAPI app structure with middleware in src/main.py
- [ ] T010 Create error handling middleware in src/middleware/errors.py
- [ ] T011 [P] Configure structured logging in src/utils/logger.py

**Checkpoint**: Foundation ready - user story implementation can now begin in parallel

Phase 3+: User Stories

One phase per story, ordered by priority:
## Phase 3: User Story 1 - Basic Login (Priority: P1) 🎯 MVP

**Goal**: Users can log in with email and password to access protected features

**Independent Test**: Create account, log in, access protected page. Delivers immediate value by protecting sensitive features.

### Tests for User Story 1 (OPTIONAL - only if tests requested) ⚠️

> **NOTE: Write these tests FIRST, ensure they FAIL before implementation**

- [ ] T012 [P] [US1] Contract test for POST /api/v1/auth/login in tests/contract/test_auth_api.py
- [ ] T013 [P] [US1] Contract test for POST /api/v1/auth/logout in tests/contract/test_auth_api.py
- [ ] T014 [P] [US1] Integration test for complete login flow in tests/integration/test_login_flow.py

### Implementation for User Story 1

- [ ] T015 [P] [US1] Create User model with password hashing in src/models/user.py
- [ ] T016 [P] [US1] Create Session model in src/models/session.py
- [ ] T017 [US1] Implement AuthService with login/logout logic in src/services/auth_service.py
- [ ] T018 [US1] Implement session authentication middleware in src/middleware/session_auth.py
- [ ] T019 [US1] Implement POST /api/v1/auth/login endpoint in src/api/auth.py
- [ ] T020 [US1] Implement POST /api/v1/auth/logout endpoint in src/api/auth.py
- [ ] T021 [US1] Add account locking after 5 failed attempts to AuthService
- [ ] T022 [US1] Add logging for authentication events

**Checkpoint**: At this point, User Story 1 should be fully functional and testable independently
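As an illustration, T015's model could start out like the stdlib-only sketch below. The class shape and PBKDF2 parameters are assumptions; a real project would more likely use an ORM model plus a vetted library such as passlib or bcrypt:

```python
import hashlib
import hmac
import os
from dataclasses import dataclass, field
from typing import Optional

# Illustrative sketch of T015 (src/models/user.py), not generated code.

def hash_password(password: str, salt: Optional[bytes] = None) -> bytes:
    """Derive a salted PBKDF2 hash; the 16-byte salt is stored as a prefix."""
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt + digest

def verify_password(password: str, stored: bytes) -> bool:
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

@dataclass
class User:
    email: str
    password_hash: bytes = field(default=b"", repr=False)

    def set_password(self, password: str) -> None:
        self.password_hash = hash_password(password)

    def check_password(self, password: str) -> bool:
        return verify_password(password, self.password_hash)
```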

Final Phase: Polish

Cross-cutting improvements (no story labels):
## Phase 6: Polish & Cross-Cutting Concerns

**Purpose**: Improvements that affect multiple user stories

- [ ] T042 [P] Update API documentation with OpenAPI/Swagger specs
- [ ] T043 [P] Add comprehensive docstrings to all public functions
- [ ] T044 Code cleanup and refactoring for consistency
- [ ] T045 Performance optimization (database query optimization, caching)
- [ ] T046 [P] Security hardening (rate limiting, CORS configuration)
- [ ] T047 [P] Add unit tests for edge cases in services (if not already covered)
- [ ] T048 Run quickstart.md validation to ensure setup instructions work
- [ ] T049 Update README with authentication feature documentation

Dependencies & Execution Order

Phase Dependencies

Setup (Phase 1)

Foundational (Phase 2)  ← BLOCKS all user stories

    ├─────────────────────→ User Story 1 (P1) 🎯 MVP
    ├─────────────────────→ User Story 2 (P2)
    └─────────────────────→ User Story 3 (P3)

Polish (Final Phase)  ← Depends on desired stories completing
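The same ordering can be computed mechanically from a dependency map. The sketch below mirrors the diagram and assumes Polish waits on all three stories:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Each phase lists the phases that must finish before it can start.
phase_deps = {
    "Setup": [],
    "Foundational": ["Setup"],
    "US1": ["Foundational"],
    "US2": ["Foundational"],
    "US3": ["Foundational"],
    "Polish": ["US1", "US2", "US3"],
}

ts = TopologicalSorter(phase_deps)
ts.prepare()
order = []
while ts.is_active():
    ready = list(ts.get_ready())   # phases with no unfinished prerequisites
    order.append(sorted(ready))    # US1/US2/US3 become ready together
    ts.done(*ready)
# order → [['Setup'], ['Foundational'], ['US1', 'US2', 'US3'], ['Polish']]
```

The middle batch is exactly the parallelization window: once Foundational is done, all three stories are ready at the same time.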

User Story Independence

  • User Story 1: Independent after Foundational
  • User Story 2: Independent after Foundational (may integrate with US1)
  • User Story 3: Independent after Foundational (may integrate with US1/US2)
User stories are designed to be independently testable and deployable. You can ship US1 as MVP without waiting for US2 or US3.

Within Each User Story

If tests included:
  1. Tests FIRST (must FAIL before implementation)
  2. Models (can run in parallel if marked [P])
  3. Services (depend on models)
  4. Endpoints/Interfaces (depend on services)
  5. Integration logic

Parallel Opportunities

Tasks marked [P] can run concurrently:
# Launch all models for User Story 1 together:
Task: "Create User model in src/models/user.py"
Task: "Create Session model in src/models/session.py"

# Launch tests for User Story 1 together:
Task: "Contract test for login endpoint"
Task: "Contract test for logout endpoint" 
Task: "Integration test for login flow"
Different user stories can be worked on in parallel by different team members once the Foundational phase completes.
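Because the task format is strict, collecting a parallel batch from tasks.md is a one-liner away. The helper below is a sketch under that assumption (the function name and sample lines are illustrative):

```python
import re

P_TASK = re.compile(r"^- \[ \] (T\d{3}) \[P\](?: \[(US\d+)\])? (.+)$")

def parallel_batch(lines, story=None):
    """Collect [P]-marked tasks, optionally filtered to one user story."""
    batch = []
    for line in lines:
        m = P_TASK.match(line.strip())
        if m and (story is None or m.group(2) == story):
            batch.append((m.group(1), m.group(3)))
    return batch

tasks = [
    "- [ ] T012 [P] [US1] Contract test for POST /api/v1/auth/login in tests/contract/test_auth_api.py",
    "- [ ] T015 [P] [US1] Create User model with password hashing in src/models/user.py",
    "- [ ] T017 [US1] Implement AuthService with login/logout logic in src/services/auth_service.py",
]
# parallel_batch(tasks, "US1") returns T012 and T015 but skips T017 (no [P])
```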

Implementation Strategies

Deliver value incrementally:
1. Phase 1: Setup - initialize project structure
2. Phase 2: Foundational - complete blocking prerequisites
3. Phase 3: User Story 1 Only - implement just the P1 story
4. STOP and VALIDATE - test US1 independently, deploy/demo if ready
5. Add User Story 2 - test independently, deploy/demo
6. Add User Story 3 - test independently, deploy/demo

Parallel Team Strategy

With multiple developers:
  1. Team completes Setup + Foundational together
  2. Once Foundational done:
    • Developer A: User Story 1 (P1)
    • Developer B: User Story 2 (P2)
    • Developer C: User Story 3 (P3)
  3. Stories complete and integrate independently

Real-World Example

/speckit.tasks

Tests: Optional by Default

Test tasks are OPTIONAL and only generated if:
  • Explicitly requested in the feature specification, OR
  • User requests TDD approach via command arguments
If no tests requested, task breakdown focuses purely on implementation.

Best Practices

Trust the Task Breakdown

The AI:
  • Maps entities to the stories that need them
  • Orders tasks by dependency
  • Identifies parallelization opportunities
  • Ensures each story is independently testable

Follow TDD If Tests Included

When tests are requested:
  1. Write test (it should FAIL)
  2. Write implementation (make test PASS)
  3. Refactor (keep test passing)
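As an illustration of the cycle, the account-locking behavior from T021 could be test-driven. The class name and structure below are a hypothetical sketch, not the generated code; only the 5-failure threshold comes from the task list:

```python
# Step 1: write the test first — it fails until LoginGuard exists and works.
def test_account_locks_after_five_failures():
    guard = LoginGuard(max_attempts=5)
    for _ in range(5):
        guard.record_failure("user@example.com")
    assert guard.is_locked("user@example.com")
    assert not guard.is_locked("other@example.com")

# Step 2: write the minimal implementation that makes the test pass.
class LoginGuard:
    def __init__(self, max_attempts=5):
        self.max_attempts = max_attempts
        self._failures = {}

    def record_failure(self, email):
        self._failures[email] = self._failures.get(email, 0) + 1

    def is_locked(self, email):
        return self._failures.get(email, 0) >= self.max_attempts

# Step 3: refactor while keeping the test green.
test_account_locks_after_five_failures()
```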

Commit Frequently

After each task or logical group:
git add .
git commit -m "T015: Create User model with password hashing"

Stop at Checkpoints

After each user story phase:
  • Run all tests for that story
  • Validate independently
  • Consider deploying/demoing
  • Decide: proceed to next story or polish current one?

Handoffs

After task generation:

Analyze For Consistency

Validate consistency across spec, plan, and tasks (optional)

Implement Project

Execute the implementation plan (starts task execution)

File Structure

specs/003-user-auth/
├── spec.md                    # Feature specification
├── plan.md                    # Implementation plan
├── research.md                # Technical decisions
├── data-model.md              # Entity definitions
├── quickstart.md              # Development guide
├── tasks.md                   # Task breakdown (this command)
└── contracts/
    ├── auth.md
    └── oauth.md

Next Steps

Implement

Execute the task plan and build the feature
