The /speckit.clarify command identifies underspecified areas in your feature specification and asks up to 5 highly targeted questions to eliminate ambiguity.

Purpose

Reduce implementation risk by:
  • Detecting missing decision points in requirements
  • Asking strategic questions that impact architecture or UX
  • Recording answers directly in the specification
  • Ensuring the spec is implementation-ready
Clarification should run after /speckit.specify and before /speckit.plan. It’s optional for well-understood features but critical for complex or novel requirements.

Usage

# Clarify the current feature specification
/speckit.clarify

# Clarify with context hints
/speckit.clarify Focus on security and data privacy requirements

# Clarify specific domain
/speckit.clarify I'm particularly concerned about edge cases and error handling

How It Works

1. Load Prerequisites

Runs the prerequisite check script to resolve:
  • FEATURE_DIR: Current feature directory path
  • FEATURE_SPEC: Path to spec.md file
Aborts if spec.md doesn’t exist (must run /speckit.specify first)
2. Scan for Ambiguities

Performs a structured coverage analysis across the ambiguity taxonomy:
  • Functional scope & behavior
  • Domain & data model
  • Interaction & UX flow
  • Non-functional attributes (performance, security, reliability)
  • Integration & dependencies
  • Edge cases & failure handling
  • Constraints & tradeoffs
  • Terminology consistency
Each category is marked Clear, Partial, or Missing.
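The three-way marking can be sketched as a small classifier. The inputs here are assumptions for illustration: `resolved` counts decision points the spec already answers in a category, `open_points` counts ones still ambiguous.

```python
def coverage_status(resolved: int, open_points: int) -> str:
    """Label a taxonomy category the way the scan does:
    Missing when the spec never addresses the category at all,
    Clear when every decision point is answered, Partial otherwise."""
    if resolved == 0 and open_points == 0:
        return "Missing"
    if open_points == 0:
        return "Clear"
    return "Partial"
```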
3. Prioritize Questions

Generates up to 5 questions based on:
  • Impact: Will the answer change architecture, data modeling, or UX?
  • Uncertainty: Are there multiple reasonable interpretations?
  • Risk: Does ambiguity create downstream rework risk?
Uses heuristic: (Impact × Uncertainty) to rank candidates
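The ranking step amounts to sorting candidates by that product and keeping the top five. A minimal sketch, assuming each candidate carries 1-5 ratings for impact and uncertainty (the field names are illustrative):

```python
def prioritize(candidates: list[dict], limit: int = 5) -> list[dict]:
    """Rank candidate questions by the (Impact x Uncertainty) heuristic,
    keeping at most `limit` of them (the per-session question cap)."""
    ranked = sorted(
        candidates,
        key=lambda c: c["impact"] * c["uncertainty"],
        reverse=True,
    )
    return ranked[:limit]
```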
4. Ask Sequentially

Presents one question at a time:
  • Multiple choice with recommended option (based on best practices)
  • Or short answer with suggested response
  • User accepts recommendation or provides alternative
  • After each answer, immediately updates spec.md
5. Integrate Incrementally

After each accepted answer:
  • Creates ## Clarifications section (if missing)
  • Adds ### Session YYYY-MM-DD subsection
  • Records - Q: <question> → A: <answer>
  • Updates appropriate spec sections
  • Saves file immediately (atomic updates)
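The incremental update can be sketched as a pure function over the spec text. This is a simplified model, not the real implementation: it assumes the Clarifications section sits at the end of the file and appends each bullet there.

```python
from datetime import date

def record_answer(spec_text: str, question: str, answer: str,
                  session: str = "") -> str:
    """Append a '- Q: ... → A: ...' bullet under the session heading,
    creating the '## Clarifications' section and the
    '### Session YYYY-MM-DD' subsection when they are missing."""
    session = session or date.today().isoformat()
    section = "## Clarifications"
    heading = f"### Session {session}"
    bullet = f"- Q: {question} → A: {answer}"
    if section not in spec_text:
        spec_text = spec_text.rstrip() + f"\n\n{section}\n"
    if heading not in spec_text:
        spec_text = spec_text.rstrip() + f"\n\n{heading}\n"
    return spec_text.rstrip() + f"\n{bullet}\n"
```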
6. Report Coverage

After all questions (or early termination):
  • Number of questions asked & answered
  • Sections touched
  • Coverage summary by taxonomy category
  • Recommendation: proceed to /speckit.plan or clarify again

Question Format

Multiple Choice Questions

For decisions with discrete options:
## Question 1: Data Storage Approach

**Recommended:** Option B - PostgreSQL with normalized schema

Based on the requirement for complex queries and data integrity, a relational database with ACID guarantees is the best choice. PostgreSQL offers excellent performance for the expected scale (< 100k users) while maintaining data consistency.

| Option | Description |
|--------|-------------|
| A | Document store (MongoDB) - Flexible schema, simpler queries |
| B | PostgreSQL with normalized schema - Strong consistency, complex queries |
| C | SQLite embedded - Simplest deployment, single-user focused |
| Short | Provide a different short answer (≤5 words) |

You can reply with the option letter (e.g., "A"), accept the recommendation by saying "yes" or "recommended", or provide your own short answer.

Short Answer Questions

For open-ended decisions:
## Question 2: Performance Target

**Suggested:** API response time < 200ms (p95)

For a real-time dashboard, sub-200ms response time ensures smooth UX without perceived lag. This aligns with industry standards for interactive web applications.

Format: Short answer (≤5 words). You can accept the suggestion by saying "yes" or "suggested", or provide your own answer.

Ambiguity Taxonomy

The command scans for gaps in these categories:

Functional Scope & Behavior

  • Core user goals clearly defined?
  • Explicit out-of-scope declarations?
  • User roles/personas differentiated?

Domain & Data Model

  • Entities, attributes, relationships specified?
  • Identity & uniqueness rules clear?
  • Lifecycle/state transitions defined?
  • Data volume/scale assumptions documented?

Interaction & UX Flow

  • Critical user journeys sequenced?
  • Error/empty/loading states described?
  • Accessibility or localization needs?

Non-Functional Quality Attributes

  • Performance: Latency, throughput targets
  • Scalability: Horizontal/vertical limits
  • Reliability: Uptime, recovery expectations
  • Observability: Logging, metrics, tracing
  • Security: AuthN/Z, data protection, threats
  • Compliance: Regulatory constraints

Integration & Dependencies

  • External services/APIs and failure modes
  • Data import/export formats
  • Protocol/versioning assumptions

Edge Cases & Failure Handling

  • Negative scenarios covered?
  • Rate limiting/throttling defined?
  • Conflict resolution (concurrent edits)?

Constraints & Tradeoffs

  • Technical constraints explicit?
  • Tradeoffs or rejected alternatives documented?

Terminology & Consistency

  • Canonical glossary terms established?
  • Synonyms/deprecated terms avoided?

Real-World Example

/speckit.clarify Focus on security and data privacy

Clarifications Section

Answers are recorded in the spec with timestamps:
## Clarifications

### Session 2024-03-15

- Q: How should task data be encrypted? → A: Encrypt at rest and in transit
- Q: What is the authentication token expiration policy? → A: 1 hour, no refresh
- Q: How should the system handle concurrent task updates? → A: Last-write-wins with timestamp tracking
- Q: What performance target for task list queries? → A: < 100ms for up to 1000 tasks
- Q: Are GDPR compliance requirements needed? → A: Yes, support data export and deletion

Strategic Question Selection

The command asks about decisions that materially impact:

Architecture

  • Database choice (relational vs. document vs. cache)
  • Stateful vs. stateless design
  • Monolith vs. microservices
  • Sync vs. async processing

Data Modeling

  • Entity relationships
  • State machine transitions
  • Data retention policies
  • Partitioning strategies

UX Behavior

  • Real-time vs. polling updates
  • Optimistic vs. pessimistic locking
  • Offline support requirements
  • Error recovery flows

Operational Readiness

  • Performance SLAs
  • Security posture
  • Compliance obligations
  • Monitoring/alerting needs

Question Limits

Maximum 5 questions per session. The command prioritizes high-impact decisions and defers low-impact details to the planning phase.

What Gets Asked

  • Scope boundaries affecting architecture
  • Security/privacy decisions with legal implications
  • Performance targets driving technology choices
  • User experience tradeoffs with competing solutions

What Gets Deferred

  • Implementation details (framework selection)
  • Code organization preferences
  • Logging format specifics
  • Variable naming conventions

Validation After Clarification

After clarification, the command validates the quality of its updates:
1. Session Completeness

Exactly one bullet per accepted answer, no duplicates
2. Question Quota

Total asked questions ≤ 5
3. No Lingering Vagueness

Updated sections contain no unresolved placeholders that answers addressed
4. No Contradictions

Earlier ambiguous statements removed or replaced
5. Structural Validity

Markdown structure intact, only allowed new headings are Clarifications sections
6. Terminology Consistency

Same canonical term used across all updated sections
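The first two checks (one bullet per accepted answer, no duplicates, quota of 5) lend themselves to a mechanical sketch. The function and its return shape are illustrative assumptions, not Spec Kit's actual validator:

```python
import re

def validate_session(spec_text: str, accepted_answers: list) -> list:
    """Return a list of validation problems (empty means the session
    passed): bullet count must equal accepted answers, bullets must be
    unique, and at most 5 questions may be asked."""
    problems = []
    bullets = re.findall(r"^- Q: .+ → A: .+$", spec_text, re.MULTILINE)
    if len(bullets) != len(accepted_answers):
        problems.append("bullet count != accepted answers")
    if len(set(bullets)) != len(bullets):
        problems.append("duplicate Q/A bullets")
    if len(accepted_answers) > 5:
        problems.append("question quota exceeded (>5)")
    return problems
```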

Early Termination

You can stop clarification at any time:
User: "done"
User: "good"
User: "proceed"
User: "stop"
The command saves progress and reports:
  • Questions answered so far
  • Coverage status
  • Whether to proceed to planning or clarify again later

No Ambiguities Detected

If the spec is already clear:
✓ Loaded feature specification: specs/003-task-api/spec.md
✓ Performed coverage analysis

Coverage Summary:
  ✓ Clear: All categories

No critical ambiguities detected worth formal clarification.

The specification is well-defined and ready for planning.

Next step: /speckit.plan I am building with Python and FastAPI

Best Practices

Run Before Planning

Clarification is expected before /speckit.plan. If skipped, downstream rework risk increases significantly.

Provide Context Hints

Help prioritize questions:
# Security-focused
/speckit.clarify Focus on security and compliance

# Performance-focused
/speckit.clarify Concerned about scalability and response times

# UX-focused
/speckit.clarify Want to nail down the user experience details

Accept Recommendations

The AI suggests options based on:
  • Industry best practices
  • Project type patterns
  • Risk reduction strategies
  • Alignment with existing spec context
Accept unless you have specific constraints.

Answer All Questions

Each question impacts implementation:
  • Skipping questions leaves ambiguity
  • Ambiguity leads to rework
  • Answer thoughtfully but don’t overthink

Handoffs

After clarification:

Build Technical Plan

Generate architecture with all ambiguities resolved

Run Another Session

If deferred items remain and you want to address them now

File Updates

specs/003-task-api/
└── spec.md                    # Updated with clarifications
No new files are created - all updates go into the existing spec.md.

Next Steps

Plan

Create technical architecture and design decisions
