What it does
Auto-detects formats
Automatically identifies the format of input documentation or accepts explicit format specification.
Parses and maps
Parses external documentation into a normalized representation and maps fields to SDD artifact fields.
Previews before generating
Shows mapping preview for user confirmation before generating any artifacts.
Quality checks
Verifies completeness, consistency, and traceability potential of generated artifacts.
The import skill works from exported files only. It does not access external services (Jira API, Notion API) directly. For real-time Notion sync, use `/sdd:sync-notion`.
Supported formats
- Jira
- OpenAPI/Swagger
- Markdown
- Notion
- CSV
- Excel
File extensions: `.json`, `.csv` (Jira export)
What it maps to:
- Epics → requirement groups
- Stories → use cases
- Bugs → defect tracking
- Tasks → implementation notes
Detection markers: a `projects` or `issues` array in JSON, or a CSV with Jira column headers (Summary, Issue Type, Status, Priority)
Invocation modes
| Mode | Behavior | Use Case |
|---|---|---|
| default | Auto-detect format, generate both requirements and specs | General import |
| `--format=TYPE` | Skip auto-detection, use specified format | When auto-detection is ambiguous |
| `--target=requirements` | Generate only requirements/ artifacts | When source is requirements-focused |
| `--target=specs` | Generate only spec/ artifacts | When source is spec-focused (e.g., OpenAPI) |
| `--target=both` | Generate both (default) | Full import |
| `--merge` | Merge with existing SDD artifacts instead of creating new | Adding to existing SDD project |
Seven phases
The import skill executes seven phases to transform external docs into SDD artifacts.
Phase 1: Format detection
Identifies the format of input files:
Check file extension and content
Examines the file extension and inspects content for format-specific markers.
Resolve ambiguities
If the file is JSON but the source is unclear, checks for Jira fields vs. OpenAPI fields. If CSV but unclear, checks the column headers.
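The detection order above can be sketched as follows. This is an illustrative assumption, not the skill's actual code; the marker names (`issues`/`projects` for Jira, `openapi`/`swagger` for OpenAPI) and the function itself are hypothetical:

```python
import json

def detect_format(filename: str, content: str) -> str:
    """Guess the source format from the file extension plus content markers.

    Hypothetical sketch of the detection order described above.
    """
    if filename.endswith(".json"):
        try:
            data = json.loads(content)
        except json.JSONDecodeError:
            return "unknown"
        if isinstance(data, dict):
            # Resolve the JSON ambiguity: OpenAPI vs. Jira export
            if "openapi" in data or "swagger" in data:
                return "openapi"
            if "issues" in data or "projects" in data:
                return "jira"
        return "unknown"
    if filename.endswith(".csv"):
        # Resolve the CSV ambiguity by checking for Jira column headers
        header = content.splitlines()[0] if content else ""
        jira_columns = {"Summary", "Issue Type", "Status", "Priority"}
        if jira_columns.issubset({c.strip() for c in header.split(",")}):
            return "jira"
        return "csv"
    if filename.endswith((".md", ".markdown")):
        return "markdown"
    return "unknown"
```

Extension alone is not enough for `.json` and `.csv`, which is why the content inspection step exists.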
Phase 2: Parse input
Parses files into a normalized intermediate representation. During parsing it handles:
- Encoding issues (UTF-8, Latin-1, etc.)
- Date format variations
- Empty or null fields
- Malformed entries (logged and skipped, with a reason)
Phase 3: Mapping preview
Maps intermediate items to SDD artifact fields and presents them for confirmation.
Duplicate detection (if `--merge`):
- Compares imported items against existing SDD artifacts
- Matches by: ID similarity, title similarity, description overlap
- Flags duplicates for user review
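The matching criteria might be combined roughly like this sketch. The threshold value and field names are assumptions; the skill's actual similarity logic is not specified:

```python
from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.85  # assumed cutoff for flagging a near-duplicate

def find_duplicates(imported: list[dict], existing: list[dict]) -> list[tuple[str, str]]:
    """Flag imported/existing pairs that match by ID or near-identical title."""
    flagged = []
    for item in imported:
        for artifact in existing:
            ratio = SequenceMatcher(
                None, item["title"].lower(), artifact["title"].lower()
            ).ratio()
            same_id = item.get("id") and item.get("id") == artifact.get("source_id")
            if same_id or ratio >= SIMILARITY_THRESHOLD:
                flagged.append((item["title"], artifact["title"]))
    return flagged
```

Flagged pairs are then shown to the user in Phase 4 rather than merged automatically.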
Phase 4: User confirmation
Gets user approval before generating artifacts:
Confirm mapping
“Proceed with these mappings?”
Handle duplicates (if `--merge`)
“These 2 items match existing artifacts. Options: Skip / Merge / Replace”
Resolve ambiguities
Items that couldn’t be auto-mapped are presented with options:
- “This item could be a requirement OR a use case. Which?”
- “This description doesn’t fit EARS syntax. Import as-is or convert?”
Confirm skipped items
“These 3 items were skipped because [reason]. Include anyway?”
Phase 5: Generate SDD artifacts
Generates SDD-format artifacts from confirmed mappings:
- Requirements
- Domain Model
- Use Cases
- API Contracts
- NFRs
- ADRs
File: `requirements/REQUIREMENTS.md`
Merge behavior (if `--merge`):
- New items: appended to existing files with an `[IMPORTED]` marker
- Duplicates (merge): update the existing entry with imported data, mark `[MERGED]`
- Duplicates (skip): leave the existing entry unchanged
- Duplicates (replace): overwrite the existing entry with imported data, mark `[IMPORTED-REPLACED]`
Phase 6: Quality check
Verifies the quality of generated artifacts:
Completeness check
- All imported items have SDD IDs
- All requirements have EARS syntax (or an `[UNCONVERTED]` tag)
- All use cases have actors, preconditions, and postconditions
- All API contracts have request/response schemas
Consistency check
- No duplicate IDs
- All cross-references resolve (REQ→UC links)
- Priority distribution is reasonable (not all CRITICAL)
- Group structure is coherent
Traceability readiness
- Requirements can link to use cases
- Use cases can link to API contracts
- Identifies gaps in the chain
Quality metrics
- Items imported: N/total
- EARS conversion rate: X%
- Traceability ready: X%
- Quality issues: N
- Manual review needed: N items
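Two of the consistency checks above, duplicate IDs and unresolved REQ→UC links, can be sketched as follows. The field names (`id`, `links_to`) are assumptions about the artifact shape:

```python
def consistency_issues(requirements: list[dict], use_cases: list[dict]) -> list[str]:
    """Report duplicate requirement IDs and REQ->UC links that do not resolve."""
    issues = []
    seen = set()
    for req in requirements:
        if req["id"] in seen:
            issues.append(f"duplicate ID: {req['id']}")
        seen.add(req["id"])
    uc_ids = {uc["id"] for uc in use_cases}
    for req in requirements:
        for uc_id in req.get("links_to", []):
            if uc_id not in uc_ids:
                issues.append(f"{req['id']} -> {uc_id} does not resolve")
    return issues
```

Each reported issue feeds the "Quality issues: N" metric and the manual-review list.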
Phase 7: Pipeline state update
Updates pipeline state to reflect imported artifacts:
- If `pipeline-state.json` does not exist → creates it with the imported stages marked `done`
- If it exists:
  - If `requirements-engineer` was `pending` and requirements were imported → set to `done`
  - If `specifications-engineer` was `pending` and specs were imported → set to `done`
  - Marks downstream stages as needing a run
- Generates an import report: `import/IMPORT-REPORT.md`
- Records import metadata for future reference (source files, mapping rules used)
Output format
The import skill generates an import report at `import/IMPORT-REPORT.md`:
Key sections:
- Source files and formats
- Import statistics (parsed, mapped, skipped, errors)
- Mapping summary (original → SDD)
- Quality assessment
- Items needing manual review
- Pipeline state impact
Format-specific tips
Jira exports
Best practices:
- Export as JSON for highest fidelity (preserves all fields and relationships)
- Include custom fields relevant to requirements (acceptance criteria, business value)
- Export from a JQL query to filter only relevant issues
What it maps to:
- Epic → Requirement group
- Story → Use case
- Task → Implementation note or sub-requirement
- Bug → Defect finding (can be imported for tracking)
OpenAPI/Swagger
Best practices:
- Use OpenAPI 3.x for best results (richer schema support)
- Include descriptions for all paths and schemas
- Use `summary` and `description` fields extensively
- Define security schemes explicitly
What it maps to:
- Paths → API contracts (highest fidelity)
- Schemas → Domain model entities
- Security schemes → Security NFRs
- Path descriptions → Functional requirements
Markdown files
Best practices:
- Use a consistent heading structure (H2 for groups, H3 for requirements)
- Use bullet lists for individual requirements
- Include acceptance criteria as nested lists
- Use code blocks for technical specifications
What it maps to:
- H1 → Document title (ignored)
- H2 → Requirement group
- H3 → Individual requirement (converted to EARS)
- Lists → Sub-requirements or acceptance criteria
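The heading-to-artifact mapping above reduces to a small state machine; here is a minimal sketch (the output dict shape is an assumption):

```python
def map_markdown(lines: list[str]) -> list[dict]:
    """Map H2 headings to groups, H3 headings to requirements, and
    top-level bullets to acceptance criteria, per the convention above."""
    items: list[dict] = []
    group = None
    for line in lines:
        if line.startswith("## "):
            group = line[3:].strip()          # H2 -> requirement group
        elif line.startswith("### "):
            items.append({"group": group,     # H3 -> individual requirement
                          "title": line[4:].strip(),
                          "criteria": []})
        elif line.startswith("- ") and items:
            items[-1]["criteria"].append(line[2:].strip())
    return items                              # H1 lines fall through (ignored)
```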
Notion exports
Best practices:
- Export as Markdown with subpages for hierarchical structure
- Export databases as CSV for tabular requirements
- Use consistent property names (Title, Description, Priority, Status)
- Include Relations for traceability
What it maps to:
- Database rows → Requirements or use cases
- Page content → Specification documents
- Properties → Requirement attributes (priority, status, assignee)
CSV files
Best practices:
- Include headers in the first row
- Use standard column names: ID, Title, Description, Priority, Status, Type
- Use consistent values (e.g., “High”, not “high” or “HIGH”)
- Escape commas and quotes in description fields
What it maps to:
- ID column → Preserved as source reference
- Title/Summary column → Requirement title
- Description column → Converted to EARS statement
- Type column → Determines artifact type (requirement, use case, NFR)
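The column mapping above might look like this sketch using the standard library's `csv` module. The normalization choices (capitalizing priority, defaulting unknown types to "requirement") are assumptions:

```python
import csv
import io

# Assumed mapping from the Type column to SDD artifact types
TYPE_MAP = {"requirement": "requirement", "use case": "use-case", "nfr": "nfr"}

def parse_requirements_csv(text: str) -> list[dict]:
    """Map the standard columns to SDD fields, preserving the source ID."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        rows.append({
            "source_id": row.get("ID", "").strip(),          # kept as source reference
            "title": row.get("Title", "").strip(),
            "description": row.get("Description", "").strip(),
            "priority": row.get("Priority", "").strip().capitalize(),  # "HIGH" -> "High"
            "artifact_type": TYPE_MAP.get(
                row.get("Type", "").strip().lower(), "requirement"),
        })
    return rows
```

`csv.DictReader` also handles the escaped commas and quotes the best practices call for.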
Excel spreadsheets
Best practices:
- Use one sheet per artifact type (Requirements, Use Cases, NFRs)
- Name sheets consistently
- Use the first row for headers
- Use data validation for priority and status columns
What it maps to:
- Sheet name → Artifact type
- Rows → Individual items
- Columns → Requirement fields
Relationship to other skills
Onboarding skill
Onboarding recommends import when external docs are detected (scenarios 5, 8).
Reverse engineer skill
Import may run before reverse-engineer to pre-populate requirements from docs.
Reconcile skill
Run after import + reverse-engineer to verify alignment between imported docs and code.
Spec auditor
Run after import to audit imported specs for defects and inconsistencies.
Constraints
- File-based only: Works from exported files, NOT direct API access to Jira/Notion/etc.
- No overwrites: Never overwrites existing SDD artifacts without explicit user confirmation via `--merge`.
- Preview first: Always shows a mapping preview before generating artifacts.
- EARS conversion: Attempts to convert all requirements to EARS syntax. Tags them as `[UNCONVERTED]` if automatic conversion fails.
- Source tracing: Every imported item must reference its source (file, line/row, original ID).
- Error tolerance: Parse errors on individual items do NOT abort the entire import. Logs errors, skips the items, and continues.
- Encoding safe: Handles UTF-8, Latin-1, and other common encodings gracefully.
- No secrets: Skips/redacts any fields that appear to contain secrets (API keys, tokens, passwords).
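Under the EARS-conversion constraint, a conversion attempt for one common pattern ("The system shall X when Y" rewritten as the event-driven EARS form "When Y, the system shall X") might look like this minimal sketch. The regex, the handled pattern, and the tagging style are assumptions; a real converter would cover more EARS templates:

```python
import re

def to_ears(text: str) -> tuple[str, bool]:
    """Try to rewrite 'The system shall X when Y' as 'When Y, the system shall X'.

    Returns (result, converted). Untranslatable text gets the [UNCONVERTED] tag.
    """
    m = re.match(r"(?i)^the system shall (.+?),? when (.+?)\.?$", text.strip())
    if m:
        action, trigger = m.group(1), m.group(2)
        return f"When {trigger}, the system shall {action}.", True
    return f"[UNCONVERTED] {text.strip()}", False
```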