
Overview

Test plans in the SDD framework bridge the gap between specifications (what to build) and test tasks (what to verify). They define the test strategy, coverage targets, test matrices, and performance scenarios. They are generated by the test-planner skill and stored in test/.

Location

test/
├── TEST-PLAN.md              # Master test strategy
├── TEST-MATRIX-UC-001.md     # Test matrix for UC-001
├── TEST-MATRIX-UC-002.md     # Test matrix for UC-002
└── PERF-SCENARIOS.md         # Performance test scenarios

Test plan structure

TEST-PLAN.md

The master test strategy document.

# Test Plan

> **Project:** My Project
> **Version:** 1.0
> **Generated from:** spec/ (audit-clean)
> **SWEBOK alignment:** Ch04 — Software Testing

## Test strategy summary

| Metric | Target | Current |
|--------|--------|----------|
| BDD scenario coverage (UCs) | 100% of main + exception flows | 85% |
| Invariant test coverage | 100% of INV-* | 100% |
| Contract test coverage | 100% of API endpoints | 90% |
| NFR test coverage | 100% of measurable NFRs | 75% |
| Security test coverage | 100% of OWASP Top 10 applicable | 80% |

## Test levels

### Unit tests
- **Scope:** Entity invariants, value object validation, pure business logic
- **Technique:** Property-based testing for invariants, example-based for logic
- **Framework:** vitest
- **Coverage target:** 90% line coverage on domain layer
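
The property-based approach named above can be sketched without committing to a library (fast-check is a common companion to vitest, but nothing here mandates it). The names and the 50MB bound below are illustrative, borrowing the file-size constraint attributed to INV-EXT-005 elsewhere in this plan:

```typescript
// Hypothetical invariant under test (illustrative of INV-EXT-005):
// an upload's size must be between 1 byte and 50 MB inclusive.
const MAX_UPLOAD_BYTES = 50 * 1024 * 1024;

function satisfiesSizeInvariant(sizeBytes: number): boolean {
  return Number.isInteger(sizeBytes) && sizeBytes >= 1 && sizeBytes <= MAX_UPLOAD_BYTES;
}

// A hand-rolled property check: sample random sizes around the valid
// range and assert the accept/reject decision matches the spec's bound
// for every sample. A real suite would delegate generation/shrinking
// to a property-testing library.
function checkSizeProperty(samples: number): boolean {
  for (let i = 0; i < samples; i++) {
    const size = Math.floor(Math.random() * (2 * MAX_UPLOAD_BYTES)) - 10;
    const expected = size >= 1 && size <= MAX_UPLOAD_BYTES;
    if (satisfiesSizeInvariant(size) !== expected) return false;
  }
  return true;
}
```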

### Integration tests
- **Scope:** UC flows via API endpoints, event handling, database operations
- **Technique:** BDD scenarios (Given/When/Then), contract testing
- **Data:** Test fixtures derived from spec entity schemas
- **Coverage target:** 100% of UC main flows, 80% of exception flows

### End-to-end tests
- **Scope:** Multi-UC workflows, cross-service flows
- **Technique:** Scenario-based testing following WF-* specs
- **Environment:** Staging environment with test data
- **Coverage target:** 100% of WF-* workflows

### Performance tests
- **Scope:** Response time (p99), throughput, concurrent users
- **Technique:** Load testing, stress testing, soak testing
- **Targets:** From spec/nfr/PERFORMANCE.md
- **Schedule:** Run on every FASE completion

### Security tests
- **Scope:** Authentication bypass, authorization escalation, injection, data exposure
- **Technique:** OWASP ASVS v4 checklist + automated scanning
- **Targets:** From spec/nfr/SECURITY.md + security audit findings

## Test gaps identified

| Gap ID | Type | Spec Element | Missing Test | Priority |
|--------|------|-------------|--------------|----------|
| GAP-001 | MISSING-BDD | UC-003 | No BDD file exists | High |
| GAP-002 | INCOMPLETE-BDD | UC-005 | Exception flow 2 not covered | Medium |
| GAP-003 | MISSING-PROPERTY-TEST | INV-EXT-005 | No property test defined | Medium |
| GAP-004 | MISSING-NFR-TEST | PERFORMANCE p99 target | No load test scenario | High |

## Per-FASE test targets

| FASE | Unit Tests | Integration Tests | E2E Tests | Perf Tests |
|------|-----------|-------------------|-----------|------------|
| FASE-0 | INV-SYS-* | Auth flows | Health check | Baseline |
| FASE-1 | INV-EXT-* | UC-001, UC-002 | WF-001 | Load targets |
| FASE-2 | INV-CVA-* | UC-003, UC-004 | WF-002 | Stress |

## Regression strategy

- **On every commit:** Unit tests + affected integration tests
- **On FASE completion:** Full integration + E2E suite
- **On release candidate:** Full suite + performance + security

Test matrix structure

Test matrices apply systematic test design techniques to complex use cases.

TEST-MATRIX-UC-001.md

# Test Matrix: UC-001 — Upload PDF for Analysis

## Inputs

| Input | Type | Valid Partitions | Invalid Partitions | Boundaries |
|-------|------|------------------|--------------------|------------|
| file | binary | Valid PDF, 1B-50MB | Non-PDF, >50MB, empty | 0B, 1B, 50MB, 50MB+1B |
| contentType | string | "application/pdf" | Other MIME types | N/A |
| authToken | string | Valid JWT | Expired, malformed, missing | N/A |

## Decision table

| # | Valid PDF | Size ≤ 50MB | Valid Auth | Expected Action | Expected Status |
|---|-----------|-------------|------------|-----------------|------------------|
| T1 | true | true | true | Create CVAnalysis | 201 Created |
| T2 | true | false | true | Reject | 413 Payload Too Large |
| T3 | false | true | true | Reject | 400 Bad Request |
| T4 | true | true | false | Reject | 401 Unauthorized |
| T5 | false | false | true | Reject | 400 Bad Request |
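
Decision tables map naturally onto data-driven tests (for instance vitest's `test.each`). A library-free sketch, with a hypothetical `decideUpload` classifier standing in for the real endpoint's validation logic:

```typescript
// Hypothetical classifier standing in for the upload endpoint.
// The condition order mirrors the decision table: auth first,
// then PDF validity, then the size limit.
function decideUpload(validPdf: boolean, withinLimit: boolean, validAuth: boolean): number {
  if (!validAuth) return 401;   // T4: missing/invalid auth
  if (!validPdf) return 400;    // T3, T5: not a valid PDF
  if (!withinLimit) return 413; // T2: payload too large
  return 201;                   // T1: create CVAnalysis
}

// Each decision-table row becomes one test case.
const rows: Array<[string, boolean, boolean, boolean, number]> = [
  ["T1", true,  true,  true,  201],
  ["T2", true,  false, true,  413],
  ["T3", false, true,  true,  400],
  ["T4", true,  true,  false, 401],
  ["T5", false, false, true,  400],
];
const allPass = rows.every(
  ([, pdf, size, auth, status]) => decideUpload(pdf, size, auth) === status
);
```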

## Boundary value tests

| Test ID | File Size | Expected Result |
|---------|-----------|------------------|
| BV1 | 0 bytes | 400 Bad Request (empty file) |
| BV2 | 1 byte | 400 Bad Request (invalid PDF) |
| BV3 | 50MB - 1 byte | 201 Created |
| BV4 | 50MB exactly | 201 Created |
| BV5 | 50MB + 1 byte | 413 Payload Too Large |

## State transition tests

Not applicable (UC-001 has no state transitions)

## Traceability

| Test Case | Covers | Spec Ref |
|-----------|--------|----------|
| T1 | Main flow | UC-001 §normal.1-9 |
| T2 | File size limit | UC-001 §AF1, INV-EXT-005 |
| T3 | Invalid PDF | UC-001 §AF2 |
| T4 | Auth required | UC-001 §preconditions.1 |

Performance scenarios structure

PERF-SCENARIOS.md

Defines load, stress, soak, and spike test scenarios derived from NFRs.

# Performance Test Scenarios

> Derived from: spec/nfr/PERFORMANCE.md, spec/nfr/LIMITS.md

## Targets (from specs)

| Metric | Target | Source |
|--------|--------|--------|
| API response time (p99) | < 200ms | PERFORMANCE.md |
| Upload time (50MB, p99) | < 5s | PERFORMANCE.md |
| Throughput | 10 req/s | PERFORMANCE.md |
| Concurrent users | 100 | PERFORMANCE.md |
| Rate limit (per user) | 10 uploads/day | LIMITS.md |

## Scenarios

### PERF-001: API baseline load test
- **Type:** Load
- **Target endpoint:** POST /api/v1/pdfs (API-pdf-upload)
- **Concurrent users:** 50
- **Duration:** 10 minutes
- **Ramp-up:** Linear over 2 minutes
- **Success criteria:**
  - p99 response time < 200ms
  - 0% error rate
  - All uploads succeed
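
The p99 criteria above presume a percentile computation over collected latencies; load tools report this directly, but the nearest-rank definition is worth pinning down. A minimal sketch:

```typescript
// Nearest-rank percentile: sort the samples and take the
// ceil(p * n)-th smallest value (1-based rank).
function percentile(samplesMs: number[], p: number): number {
  if (samplesMs.length === 0) throw new Error("no samples");
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const rank = Math.ceil(p * sorted.length);
  return sorted[Math.min(rank, sorted.length) - 1];
}
// Example: the p99 of latencies 1..100 ms is the 99th smallest value.
```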

### PERF-002: Rate limit enforcement
- **Type:** Stress
- **Target:** Rate limit threshold (10 uploads/day per user)
- **Method:** Single user submits 15 uploads in 1 minute
- **Success criteria:**
  - First 10 uploads: 201 Created
  - Uploads 11-15: 429 Too Many Requests
  - Response includes Retry-After header
  - Rate limit resets after 24 hours
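
PERF-002's expectations can be expressed against a minimal fixed-window counter. This is a sketch only; the actual limiter and its storage are not specified here, and a production limiter would live in shared storage rather than process memory:

```typescript
const DAY_MS = 24 * 60 * 60 * 1000;

// Fixed-window daily counter, keyed per user (illustrative).
class DailyUploadLimit {
  private windows = new Map<string, { start: number; count: number }>();
  constructor(private readonly limit: number) {}

  tryAcquire(userId: string, nowMs: number): boolean {
    const w = this.windows.get(userId);
    if (!w || nowMs - w.start >= DAY_MS) {
      // New user or expired window: start a fresh 24h window.
      this.windows.set(userId, { start: nowMs, count: 1 });
      return true;
    }
    if (w.count >= this.limit) return false; // maps to 429 + Retry-After
    w.count++;
    return true;
  }
}
```

With `limit = 10`, the first 10 acquisitions in a window succeed, the rest fail, and the counter resets 24 hours after the window opened, matching the scenario's success criteria.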

### PERF-003: Large file upload
- **Type:** Load
- **Target:** 50MB file upload time
- **Concurrent users:** 10
- **Duration:** 5 minutes
- **Success criteria:**
  - p99 upload time < 5s
  - No timeouts
  - Files stored correctly

### PERF-004: Sustained load (soak test)
- **Type:** Soak
- **Target:** Detect memory leaks
- **Concurrent users:** 25
- **Duration:** 1 hour
- **Request rate:** 5 req/min per user
- **Success criteria:**
  - Worker memory stable (no growth)
  - p99 response time consistent
  - No errors
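
The "memory stable" criterion can be made mechanical by comparing early and late sampling windows of the soak run. The window size and any pass/fail threshold are illustrative choices:

```typescript
// Ratio of the mean of the last sampling window to the mean of the
// first. A ratio near 1.0 suggests stable memory; a sustained climb
// over the soak duration suggests a leak.
function memoryGrowthRatio(samplesMb: number[], window = 10): number {
  if (samplesMb.length < 2 * window) throw new Error("not enough samples");
  const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  return mean(samplesMb.slice(-window)) / mean(samplesMb.slice(0, window));
}
```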

### PERF-005: Traffic spike
- **Type:** Spike
- **Target:** Sudden burst from 10 to 100 users
- **Duration:** 5 minutes
- **Pattern:** 10 users → instant jump to 100 → back to 10
- **Success criteria:**
  - System handles spike without crashes
  - Error rate < 5% during spike
  - p99 response time < 500ms during spike
  - Recovery to normal performance within 30s

Test design techniques

Test plans apply SWEBOK v4 Chapter 04 test design techniques:
Equivalence partitioning
Divide the input domain into valid and invalid partitions. Select one representative value per partition.
Boundary value analysis
For numeric/range inputs, test boundary values:
  • min-1, min, min+1
  • max-1, max, max+1
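
For a closed integer range, those six probes can be generated mechanically. A sketch (assuming an inclusive [min, max] range, as in the upload size limit):

```typescript
// min-1 and max+1 fall in invalid partitions; the other four values
// are the valid-side boundaries of an inclusive [min, max] range.
function boundaryValues(min: number, max: number): number[] {
  return [min - 1, min, min + 1, max - 1, max, max + 1];
}
// e.g. for the 1B–50MB upload limit: boundaryValues(1, 50 * 1024 * 1024)
```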
Decision table
For UCs with multiple conditions, build condition/action table. Each row = one test case.
State transition
For entities with state machines, test:
  • Every valid transition
  • Every invalid transition (must be rejected)
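
A transition table plus exhaustive pair enumeration covers both requirements: every pair not in the table must be rejected. The states below are hypothetical (e.g. for a CVAnalysis entity); the real state machines live in the entity specs:

```typescript
type State = "uploaded" | "processing" | "completed" | "failed";

// Hypothetical transition table; each entry lists the legal next states.
const transitions: Record<State, State[]> = {
  uploaded:   ["processing"],
  processing: ["completed", "failed"],
  completed:  [],
  failed:     [],
};

function canTransition(from: State, to: State): boolean {
  return transitions[from].includes(to);
}

// Enumerate every (from, to) pair: pairs in the table must be allowed,
// all other pairs must be rejected.
const states = Object.keys(transitions) as State[];
const invalidPairs = states.flatMap((from) =>
  states.filter((to) => !transitions[from].includes(to)).map((to) => [from, to] as const)
);
```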
Property-based testing
For invariants, generate random inputs and verify the property always holds.

Coverage metrics

Test plans define coverage targets per dimension:
| Dimension | Formula | Target |
|-----------|---------|--------|
| UC Coverage | UCs with BDD / total UCs | 100% |
| Exception Coverage | Exception flows tested / total exception flows | ≥ 80% |
| Invariant Coverage | INVs with property tests / total INVs | 100% |
| Contract Coverage | Endpoints with contract tests / total endpoints | 100% |
| NFR Coverage | Measurable NFRs with test scenarios / total measurable NFRs | 100% |

Traceability

Every test must trace to a spec element:
REQ-EXT-001 (requirement)
  ↓ implements
UC-001 (use case)
  ↓ verifies
BDD-001 (acceptance test)
  ↓ guides
TEST-MATRIX-UC-001 (test design)
  ↓ generates
TASK-F1-003 (test task)
  ↓ produces
tests/integration/pdf-upload.test.ts (test code)
No test exists without a spec justification. No spec element exists without a test.

Tools

Generation

/sdd:test-planner
Modes:
  1. Generate test strategy (default) - Full TEST-PLAN.md
  2. Generate test matrices - Detailed input/output matrices for UCs
  3. Generate performance scenarios - Derived from NFR specs
  4. Audit test coverage - Verify completeness of existing test specs

Usage

# Generate full test plan
/sdd:test-planner

# Generate test matrix for specific UC
/sdd:test-planner --matrix UC-001

# Generate performance scenarios only
/sdd:test-planner --perf-only

# Audit existing test coverage
/sdd:test-planner --audit

  • Skills: /sdd:test-planner, /sdd:task-generator, /sdd:task-implementer
  • Upstream: Specifications in spec/
  • Downstream: Test tasks in task/, test code in tests/
  • SWEBOK: Chapter 04 (Software Testing)
  • References: test-planner SKILL.md
