Generates genuinely novel ideas through five orthogonal forcing functions, adversarial novelty killing, and a structured mutation loop. The core insight: LLMs produce obvious ideas because they interpolate over existing content. This skill forces extrapolation by constraining the generation process structurally — each idea must derive from a specific mechanism (inversion, cross-domain transplant, edge-user design) rather than free association. Ideas that survive the novelty kill chain are presented with a full derivation chain anchored to real landscape data. Use this skill when you want non-obvious product concepts, research directions, startup ideas, or creative breakthroughs that don’t already exist.

Invocation

/deep-idea [domain or problem space]
Examples:
/deep-idea developer tools using LLMs
/deep-idea biotech for aging
/deep-idea games for blind players

How it differs from asking “give me ideas”

Standard generation: the model samples the mode of the distribution and produces what already exists. This skill: forces generation from mechanisms structurally unlikely to produce known ideas, then adversarially kills any that already exist, then mutates the generator when it gets stuck.

Model tier strategy

| Tier | Model | Used for |
|---|---|---|
| Scout | haiku | Landscape mapping, novelty killing |
| Generator | sonnet | Idea generation — Levels 0, 1, 2 |
| Deep Reframer | opus | Level 3+ only (reframe, inter-domain transplant) |
Opus is never used at Levels 0–2. max_opus_calls = 30 is a hard ceiling that cannot be bypassed.
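The routing rule above can be sketched as a small function. This is a hypothetical illustration, not the skill's actual implementation; the role names and the `MAX_OPUS_CALLS` constant are assumptions mirroring the tables in this document.

```python
MAX_OPUS_CALLS = 30  # hard ceiling; cannot be bypassed


def model_for(role: str, mutation_level: int, opus_calls: int) -> str:
    """Route an agent role to a model tier per the table above."""
    if role in ("scout", "killer"):
        return "haiku"
    if role == "generator" and mutation_level <= 2:
        return "sonnet"
    # Level 3+ reframes and transplants run on opus, subject to the ceiling
    if opus_calls >= MAX_OPUS_CALLS:
        raise RuntimeError("max_opus_calls ceiling hit — stop immediately")
    return "opus"
```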

Hard ceilings (cannot be overridden by --auto)

| Ceiling | Value |
|---|---|
| max_total_agent_calls | 200 |
| max_opus_calls | 30 |
| max_cycles_per_level | 5 |
| max_reframes | 3 |

Workflow

Phase 0: Input validation

  • Scope check — if the domain is too vague (“ideas” alone), ask for a more specific space.
  • Constraint extraction — extract stated constraints (solo builder, specific tech, geography, timeline) and any ideas the user wants excluded.
  • Pre-run declaration — show target survivors, suggested max_cycles, cost estimates, and hard ceilings, then wait for user confirmation before proceeding.
Phase 1: Landscape mapping

Spawns 3 parallel Scout agents (Haiku) to map what already exists before generating a single idea. This defines the novelty boundary.
  • Agent A — existing solutions, major players, well-known approaches
  • Agent B — recent launches (last 18 months) on ProductHunt, GitHub, arXiv, news
  • Agent C — known failed attempts and why they failed (graveyard research)
At least 2 of 3 Scouts must complete successfully before proceeding. The coordinator produces a landscape summary with five required fields: existing_solutions, core_assumptions, recent_enablers, failure_modes, unexplored_edges.
Ideas are never generated before the landscape map is complete. Skipping this step would remove the novelty boundary that the kill chain depends on.
After every 3 cycles, the landscape’s recent_enablers field is refreshed — new products may kill previously novel ideas.
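The coordinator's required output can be sketched as a simple record. The five field names come from the spec above; the dataclass shape and the helper are illustrative assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class LandscapeSummary:
    existing_solutions: list = field(default_factory=list)
    core_assumptions: list = field(default_factory=list)
    recent_enablers: list = field(default_factory=list)  # refreshed every 3 cycles
    failure_modes: list = field(default_factory=list)
    unexplored_edges: list = field(default_factory=list)


def scouts_sufficient(completed: int) -> bool:
    """At least 2 of the 3 Scouts must complete before generation begins."""
    return completed >= 2
```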
Phase 2: Idea generation cycle

Each cycle spawns Generator agents in parallel using the five orthogonal forcing functions. The number of agents and active functions varies by mutation level:
| Mutation level | Agents | Active generators |
|---|---|---|
| Level 0 | 5 | One per forcing function |
| Level 1 | 6 | Top-2 functions × 2 + NEGATION STACKER + MICRO-NICHER |
| Level 2 | 5 | Cross-function synthesis pairs |
| Level 3 | 3 | Opus reframe types |
| Level 4 | 5 | Opus inter-domain transplants |
Each Generator must:
  • Produce exactly 1 idea per run (or report FORCING FUNCTION EXHAUSTED)
  • Provide a derivation chain with ≥3 causally-connected steps, each anchored to landscape data
  • Operate in isolation — generators do not share ideas during a cycle
A prospective gate fires before each cycle:
Cycle N | Mutation level: {level} | Survivors so far: M/target
Forcing functions: {list}
Agent calls so far: X/max_total | Opus calls: Y/max_opus
Continue? [y/N]
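The gate template above could be rendered like this — a hedged sketch; the exact formatting function is an assumption, only the field layout comes from the template.

```python
def gate_prompt(cycle, level, survivors, target, functions,
                agent_calls, max_total, opus_calls, max_opus):
    """Render the prospective gate shown before each cycle."""
    return (
        f"Cycle {cycle} | Mutation level: {level} | "
        f"Survivors so far: {survivors}/{target}\n"
        f"Forcing functions: {', '.join(functions)}\n"
        f"Agent calls so far: {agent_calls}/{max_total} | "
        f"Opus calls: {opus_calls}/{max_opus}\n"
        "Continue? [y/N]"
    )
```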
Phase 3: Novelty kill chain

For each idea from Phase 2, one Haiku killer agent runs in parallel. The killer performs four checks in order, stopping at the first failure:
  • N4 — Derivation chain evaluation (two passes): Pass 1 assesses the idea blind (before reading the chain). Pass 2 checks that the chain has ≥3 explicit steps anchored to landscape data and that only the specified forcing function could produce this idea.
  • N1 — Exact existence (4–5 searches): Searches for actively maintained products with the same mechanism and user type.
  • N2 — Structural clone (no new searches): Abstracts the idea’s structure and checks for domain-shifted copies.
  • N3 — Recency test (0–1 searches): Checks whether this could have been built 3+ years ago without a structural reason it wasn’t.
The killer’s response must begin with VERDICT: NOVEL|KILLED|FLAGGED. Unparseable responses are treated as KILLED PARSE_ERROR — never as NOVEL. FLAGGED ideas are presented as near-misses with the differentiation they would need to survive.
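The fail-safe parsing rule can be sketched as follows. Only the `VERDICT:` prefix and the KILLED PARSE_ERROR fallback come from the spec; the regex and return shape are assumptions.

```python
import re

VERDICT_RE = re.compile(r"^VERDICT:\s*(NOVEL|KILLED|FLAGGED)\b")


def parse_verdict(response: str) -> str:
    """Extract the killer's verdict; unparseable output is never NOVEL."""
    match = VERDICT_RE.match(response.strip())
    if match is None:
        return "KILLED PARSE_ERROR"
    return match.group(1)
```

Defaulting the failure path to a kill keeps the pipeline conservative: a malformed killer response can suppress a genuinely novel idea, but it can never let an unvetted one through.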
Phase 4: Mutation loop

After each cycle, the coordinator checks whether to escalate the mutation level:
  • Count survivors and FORCING FUNCTION EXHAUSTED signals
  • Track consecutive zero-survivor cycles at the current level
  • If stuck: escalate to the next mutation level per the escalation rules in LOOP.md
  • Hard ceilings are checked before every escalation — stop immediately if any ceiling is hit
When the target number of survivors is reached, the final report is written.
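The escalation decision above might look like this. The stuck threshold is an assumed value — the real triggers live in LOOP.md — and Level 5 is excluded because it requires explicit user input and is never auto-selected.

```python
MAX_AUTO_LEVEL = 4   # Level 5 requires explicit user input
STUCK_THRESHOLD = 2  # assumed: consecutive zero-survivor cycles before escalating


def next_level(level: int, zero_survivor_streak: int) -> int:
    """Escalate the mutation level only when stuck, never past Level 4."""
    if zero_survivor_streak >= STUCK_THRESHOLD and level < MAX_AUTO_LEVEL:
        return level + 1
    return level
```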
Phase 5: Final output

Writes deep-idea-report.md with all surviving ideas. Each idea includes: forcing function used, full derivation chain, core insight, concrete description, target user, structural reason it doesn’t exist yet, and a “why now” timing argument. The report also includes a cost summary by model tier.
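One report entry could be modeled as a dataclass mirroring the fields listed above — an illustrative assumption; the authoritative format lives in FORMAT.md.

```python
from dataclasses import dataclass


@dataclass
class SurvivingIdea:
    forcing_function: str
    derivation_chain: list   # >= 3 causally-connected steps
    core_insight: str
    description: str
    target_user: str
    why_not_yet: str         # structural reason it doesn't exist yet
    why_now: str             # timing argument
```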

Self-review checklist

Before presenting output, verify all of the following:
  • Landscape map completed before any generation cycle; at least 2 of 3 Scouts succeeded
  • Every surviving idea has a derivation chain with ≥3 explicit causally-connected steps anchored to landscape data
  • Every killed idea has a specific failed_check value (N1/N2/N3/N4/TIMEOUT/PARSE_ERROR) and specific evidence
  • N4 used two-pass evaluation (blind assessment before reading the chain)
  • No idea re-proposed after being killed — including the same mechanism under a new name
  • Mutation log accurate — escalation reason documented for each level change
  • Hard ceilings respected — no agent calls after any ceiling is hit
  • Level 3/4 Opus cycles had explicit prospective gates (not skipped by --auto)
  • Level 5 required user input (never auto-selected under --auto)
  • Final report includes cost summary (agent counts by tier)
  • Generator isolation maintained throughout each cycle

Golden rules

Never run a generation cycle without first completing the landscape map. The novelty kill chain depends on a concrete novelty boundary — without a landscape, killers have nothing to compare against.
Minimum 3 causally-connected steps, each anchored to specific landscape data. An idea without a chain is killed as lazy generation — not presented.
The killer’s job is to find this idea already existing. Its response must begin with VERDICT:. Anything else is treated as KILLED PARSE_ERROR. A killer that never kills anything has failed.
max_total_agent_calls, max_opus_calls, max_cycles_per_level, and max_reframes cannot be bypassed by --auto or any other mechanism.
“Try harder” is not mutation. Each level changes the structural mechanism of generation — the set of forcing functions, the synthesis strategy, or the framing entirely.
After every 3 cycles, refresh recent enablers. New products may kill previously novel ideas before they reach the final report.
FLAGGED ideas are valuable. Present them with the specific differentiation they would need to become genuinely novel.
Generators do not share ideas during a cycle. The only exception is Level 2, which receives coordinator-mediated exploration summaries as specified in LOOP.md.

Reference files

| File | Contents |
|---|---|
| FORCING.md | The 5 forcing functions, NEGATION STACKER, MICRO-NICHER — how to generate from each, derivation chain requirements |
| NOVELTY.md | The 4-check novelty kill chain, adversarial search protocol, structured output format, fail-safes |
| LOOP.md | Mutation levels, hard ceilings, anti-give-up logic, --auto rules, escalation triggers |
| FORMAT.md | Output format for surviving ideas |