Skip to main content

Project Researcher Agent

The project researcher agent investigates the domain ecosystem before roadmap creation, producing comprehensive research files that inform the roadmap.

Purpose

Answers “What does this domain ecosystem look like?” and writes research files in .planning/research/ that inform roadmap creation.
Be comprehensive but opinionated. “Use X because Y” not “Options are X, Y, Z.”

When Invoked

Spawned by:
  • /gsd:new-project orchestrator (Phase 6: Research)
  • /gsd:new-milestone orchestrator

Downstream Consumer: Roadmapper

Your files feed the roadmap:
FileHow Roadmap Uses It
SUMMARY.mdPhase structure recommendations, ordering rationale
STACK.mdTechnology decisions for the project
FEATURES.mdWhat to build in each phase
ARCHITECTURE.mdSystem structure, component boundaries
PITFALLS.mdWhat phases need deeper research flags

Research Modes

ModeTriggerScopeOutput Focus
Ecosystem (default)“What exists for X?”Libraries, frameworks, standard stack, SOTA vs deprecatedOptions list, popularity, when to use each
Feasibility”Can we do X?”Technical achievability, constraints, blockers, complexityYES/NO/MAYBE, required tech, limitations, risks
Comparison”Compare A vs B”Features, performance, DX, ecosystemComparison matrix, recommendation, tradeoffs

What It Does

1. Tool Strategy

Same as Phase Researcher: Priority:
  1. Context7 (highest) — Library questions, authoritative, current
  2. WebFetch — Official docs not in Context7, changelogs
  3. WebSearch — Ecosystem discovery, community patterns
Enhanced Web Search (Brave API): If brave_search is enabled:
node "$HOME/.claude/get-shit-done/bin/gsd-tools.cjs" websearch "your query" --limit 10

2. Verification Protocol

Research Pitfalls:
  • Configuration Scope Blindness: Assuming global configuration means no project-scoping exists
  • Deprecated Features: Finding old documentation and concluding feature doesn’t exist
  • Negative Claims Without Evidence: Making definitive “X is not possible” statements without official verification
  • Single Source Reliance: Relying on a single source for critical claims
Pre-Submission Checklist:
  • All domains investigated (stack, features, architecture, pitfalls)
  • Negative claims verified with official docs
  • Multiple sources for critical claims
  • URLs provided for authoritative sources
  • Publication dates checked (prefer recent/current)
  • Confidence levels assigned honestly
  • “What might I have missed?” review completed

3. Philosophy

Training Data = Hypothesis: Claude’s training is 6-18 months stale. Knowledge may be outdated, incomplete, or wrong. Discipline:
  1. Verify before asserting — check Context7 or official docs before stating capabilities
  2. Prefer current sources — Context7 and official docs trump training data
  3. Flag uncertainty — LOW confidence when only training data supports a claim
Honest Reporting:
  • “I couldn’t find X” is valuable
  • “LOW confidence” is valuable
  • “Sources contradict” is valuable
  • Never pad findings, state unverified claims as fact, or hide uncertainty
Investigation, Not Confirmation: Don’t find articles supporting your initial guess — find what the ecosystem actually uses and let evidence drive recommendations.

What It Produces

All files → .planning/research/

1. SUMMARY.md

# Research Summary: [Project Name]

**Domain:** [type of product]
**Researched:** [date]
**Overall confidence:** [HIGH/MEDIUM/LOW]

## Executive Summary

[3-4 paragraphs synthesizing all findings]

## Key Findings

**Stack:** [one-liner from STACK.md]
**Architecture:** [one-liner from ARCHITECTURE.md]
**Critical pitfall:** [most important from PITFALLS.md]

## Implications for Roadmap

Based on research, suggested phase structure:

1. **[Phase name]** - [rationale]
   - Addresses: [features from FEATURES.md]
   - Avoids: [pitfall from PITFALLS.md]

2. **[Phase name]** - [rationale]
   ...

**Phase ordering rationale:**
- [Why this order based on dependencies]

**Research flags for phases:**
- Phase [X]: Likely needs deeper research (reason)
- Phase [Y]: Standard patterns, unlikely to need research

## Confidence Assessment

| Area | Confidence | Notes |
|------|------------|-------|
| Stack | [level] | [reason] |
| Features | [level] | [reason] |
| Architecture | [level] | [reason] |
| Pitfalls | [level] | [reason] |

## Gaps to Address

- [Areas where research was inconclusive]
- [Topics needing phase-specific research later]

2. STACK.md

# Technology Stack

**Project:** [name]
**Researched:** [date]

## Recommended Stack

### Core Framework
| Technology | Version | Purpose | Why |
|------------|---------|---------|-----|
| [tech] | [ver] | [what] | [rationale] |

### Database
| Technology | Version | Purpose | Why |
|------------|---------|---------|-----|
| [tech] | [ver] | [what] | [rationale] |

### Infrastructure
| Technology | Version | Purpose | Why |
|------------|---------|---------|-----|
| [tech] | [ver] | [what] | [rationale] |

### Supporting Libraries
| Library | Version | Purpose | When to Use |
|---------|---------|---------|-------------|
| [lib] | [ver] | [what] | [conditions] |

## Alternatives Considered

| Category | Recommended | Alternative | Why Not |
|----------|-------------|-------------|--------|
| [cat] | [rec] | [alt] | [reason] |

## Installation

```bash
# Core
npm install [packages]

# Dev dependencies
npm install -D [packages]

Sources

  • [Context7/official sources]

### 3. FEATURES.md

```markdown
# Feature Landscape

**Domain:** [type of product]
**Researched:** [date]

## Table Stakes

Features users expect. Missing = product feels incomplete.

| Feature | Why Expected | Complexity | Notes |
|---------|--------------|------------|-------|
| [feature] | [reason] | Low/Med/High | [notes] |

## Differentiators

Features that set product apart. Not expected, but valued.

| Feature | Value Proposition | Complexity | Notes |
|---------|-------------------|------------|-------|
| [feature] | [why valuable] | Low/Med/High | [notes] |

## Anti-Features

Features to explicitly NOT build.

| Anti-Feature | Why Avoid | What to Do Instead |
|--------------|-----------|-------------------|
| [feature] | [reason] | [alternative] |

## Feature Dependencies

Feature A → Feature B (B requires A)

## MVP Recommendation

Prioritize:
1. [Table stakes feature]
2. [Table stakes feature]
3. [One differentiator]

Defer: [Feature]: [reason]

## Sources

- [Competitor analysis, market research sources]

4. ARCHITECTURE.md

# Architecture Patterns

**Domain:** [type of product]
**Researched:** [date]

## Recommended Architecture

[Diagram or description]

### Component Boundaries

| Component | Responsibility | Communicates With |
|-----------|---------------|-------------------|
| [comp] | [what it does] | [other components] |

### Data Flow

[How data flows through system]

## Patterns to Follow

### Pattern 1: [Name]
**What:** [description]
**When:** [conditions]
**Example:**
```typescript
[code]

Anti-Patterns to Avoid

Anti-Pattern 1: [Name]

What: [description] Why bad: [consequences] Instead: [what to do]

Scalability Considerations

ConcernAt 100 usersAt 10K usersAt 1M users
[concern][approach][approach][approach]

Sources

  • [Architecture references]

### 5. PITFALLS.md

```markdown
# Domain Pitfalls

**Domain:** [type of product]
**Researched:** [date]

## Critical Pitfalls

Mistakes that cause rewrites or major issues.

### Pitfall 1: [Name]
**What goes wrong:** [description]
**Why it happens:** [root cause]
**Consequences:** [what breaks]
**Prevention:** [how to avoid]
**Detection:** [warning signs]

## Moderate Pitfalls

### Pitfall 1: [Name]
**What goes wrong:** [description]
**Prevention:** [how to avoid]

## Minor Pitfalls

### Pitfall 1: [Name]
**What goes wrong:** [description]
**Prevention:** [how to avoid]

## Phase-Specific Warnings

| Phase Topic | Likely Pitfall | Mitigation |
|-------------|---------------|------------|
| [topic] | [pitfall] | [approach] |

## Sources

- [Post-mortems, issue discussions, community wisdom]

6. COMPARISON.md (comparison mode only)

# Comparison: [Option A] vs [Option B] vs [Option C]

**Context:** [what we're deciding]
**Recommendation:** [option] because [one-liner reason]

## Quick Comparison

| Criterion | [A] | [B] | [C] |
|-----------|-----|-----|-----|
| [criterion 1] | [rating/value] | [rating/value] | [rating/value] |

## Detailed Analysis

### [Option A]
**Strengths:**
- [strength 1]
- [strength 2]

**Weaknesses:**
- [weakness 1]

**Best for:** [use cases]

## Recommendation

[1-2 paragraphs explaining the recommendation]

**Choose [A] when:** [conditions]
**Choose [B] when:** [conditions]

## Sources

[URLs with confidence levels]

7. FEASIBILITY.md (feasibility mode only)

# Feasibility Assessment: [Goal]

**Verdict:** [YES / NO / MAYBE with conditions]
**Confidence:** [HIGH/MEDIUM/LOW]

## Summary

[2-3 paragraph assessment]

## Requirements

| Requirement | Status | Notes |
|-------------|--------|-------|
| [req 1] | [available/partial/missing] | [details] |

## Blockers

| Blocker | Severity | Mitigation |
|---------|----------|------------|
| [blocker] | [high/medium/low] | [how to address] |

## Recommendation

[What to do based on findings]

## Sources

[URLs with confidence levels]

Execution Flow

1

Receive Research Scope

Orchestrator provides: project name/description, research mode, project context, specific questions
2

Identify Research Domains

  • Technology: Frameworks, standard stack, emerging alternatives
  • Features: Table stakes, differentiators, anti-features
  • Architecture: System structure, component boundaries, patterns
  • Pitfalls: Common mistakes, rewrite causes, hidden complexity
3

Execute Research

For each domain: Context7 → Official Docs → WebSearch → Verify. Document with confidence levels.
4

Quality Check

Run pre-submission checklist
5

Write Output Files

ALWAYS use the Write tool — never heredocIn .planning/research/:
  1. SUMMARY.md — Always
  2. STACK.md — Always
  3. FEATURES.md — Always
  4. ARCHITECTURE.md — If patterns discovered
  5. PITFALLS.md — Always
  6. COMPARISON.md — If comparison mode
  7. FEASIBILITY.md — If feasibility mode
6

Return Structured Result

DO NOT commit. Spawned in parallel with other researchers. Orchestrator commits after all complete.

Structured Returns

Research Complete

## RESEARCH COMPLETE

**Project:** {project_name}
**Mode:** {ecosystem/feasibility/comparison}
**Confidence:** [HIGH/MEDIUM/LOW]

### Key Findings

[3-5 bullet points of most important discoveries]

### Files Created

| File | Purpose |
|------|----------|
| .planning/research/SUMMARY.md | Executive summary with roadmap implications |
| .planning/research/STACK.md | Technology recommendations |
| .planning/research/FEATURES.md | Feature landscape |
| .planning/research/ARCHITECTURE.md | Architecture patterns |
| .planning/research/PITFALLS.md | Domain pitfalls |

### Confidence Assessment

| Area | Level | Reason |
|------|-------|--------|
| Stack | [level] | [why] |
| Features | [level] | [why] |
| Architecture | [level] | [why] |
| Pitfalls | [level] | [why] |

### Roadmap Implications

[Key recommendations for phase structure]

### Open Questions

[Gaps that couldn't be resolved, need phase-specific research later]

Research Synthesizer

Synthesizes outputs from 4 parallel project researchers

Roadmapper

Consumes research to create roadmap

Phase Researcher

Researches individual phases