Campaign Phases
A typical campaign has four phases:

Phase 1: Foundation
Build Company Context
Skill: context-building
Create the global context file that all other skills read from.
Example Prompt
- Claude reads your website to extract product info and value prop
- Asks for voice rules, ICP profiles, win cases
- Creates /claude-code-gtm/context/extruct_context.md
- Single context file with ICP, voice, win cases, proof library
- This file is referenced by every downstream skill
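As a sketch, the created context file might be organized like this; the section names follow the list above, but the context-building skill defines the actual schema:

```markdown
# Company Context

## ICP Profiles
Who you sell to: verticals, company size, buying roles.

## Voice Rules
Tone guidelines and banned words for all generated copy.

## Win Cases
Closed-won deals and the pain that drove each purchase.

## Proof Library
Metrics, quotes, and case studies safe to cite in outreach.
```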
Phase 2: Research
Option A: Fast Hypothesis Building (No API Required)
Skill: hypothesis-building
Generate pain hypotheses from your own knowledge + context file.
Example Prompt
- Reads context file for ICP and win cases
- Asks what you know about the vertical
- Drafts 3-7 hypotheses with search angles
- Saves to /claude-code-gtm/context/enterprise-saas/hypothesis_set.md
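Illustratively, one entry in hypothesis_set.md could look like this; the hypothesis and search angles below are invented examples, not skill output:

```markdown
## H1: Manual pipeline maintenance drains data engineering time

- Basis: two win cases cited pipeline toil before buying
- Search angles: companies hiring multiple data engineers;
  recent warehouse or stack migrations
```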
Option B: Deep Market Research (API Required)
Skill: market-research
Validate hypotheses with external research APIs (e.g., Perplexity).
Example Prompt
- Reads context file and runs research queries
- Extracts data points, tool comparisons, pain mechanisms
- Saves research to /claude-code-gtm/context/enterprise-saas/sourcing_research.md
- Saves hypothesis set to /claude-code-gtm/context/enterprise-saas/hypothesis_set.md
hypothesis-building and market-research produce the same output format (hypothesis_set.md). Use whichever fits your workflow.

Phase 3: List Building & Enrichment
Step 1: Find Companies
Skill: list-building
Build prospect lists using the Extruct API.
Example Prompt
- Reads hypothesis set for search angles
- Runs lookalike/semantic/discovery searches
- Deduplicates and removes DNC domains
- Uploads to Extruct table for enrichment
- Extruct table with company profiles
- Optional local CSV at /claude-code-gtm/csv/input/{campaign-slug}/companies.csv
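The dedupe and DNC-removal step can be sketched as follows; the domain normalization details and field names are assumptions, since the list-building skill handles this internally:

```python
from urllib.parse import urlparse

def normalize_domain(url_or_domain: str) -> str:
    """Lowercase and strip scheme/www so duplicates collapse to one key."""
    host = urlparse(url_or_domain).netloc or url_or_domain
    host = host.lower().strip()
    return host[4:] if host.startswith("www.") else host

def filter_companies(companies: list[dict], dnc_domains: set[str]) -> list[dict]:
    """Drop duplicate domains and any domain on the do-not-contact list."""
    seen, kept = set(), []
    for company in companies:
        domain = normalize_domain(company["domain"])
        if domain in seen or domain in dnc_domains:
            continue
        seen.add(domain)
        kept.append({**company, "domain": domain})
    return kept
```

Duplicates collapse on the normalized domain, so https://www.acme.com and acme.com count as one company.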
Step 2: Design Enrichment Columns
Skill: enrichment-design
Define what to research about each company.
Example Prompt
- Reads hypothesis set
- Proposes 3-5 columns (segmentation + personalization)
- Refines with you interactively
- Outputs column_configs JSON
- Segmentation: Columns that score hypothesis fit (e.g., “Data Infrastructure Maturity”)
- Personalization: Columns for email hooks (e.g., “Recent Product Launch”)
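An illustrative column_configs payload combining both column kinds; the field names and structure here are assumptions, not the actual Extruct schema:

```json
{
  "columns": [
    {
      "name": "Data Infrastructure Maturity",
      "kind": "segmentation",
      "prompt": "Rate this company's data stack maturity (low/medium/high) and cite evidence."
    },
    {
      "name": "Recent Product Launch",
      "kind": "personalization",
      "prompt": "Find the company's most recent product launch and summarize it in one sentence."
    }
  ]
}
```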
Step 3: Run Enrichment
Skill: list-enrichment
Add research columns to the table.
Example Prompt
- Creates agent columns in Extruct table
- Triggers enrichment run (research agents execute per row)
- Polls for progress and shows % complete
- Spot-checks quality with sample rows
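The progress polling amounts to a loop like this; the status-call shape is an assumption, so a simulated status source stands in for the Extruct API:

```python
import time

def wait_for_enrichment(get_status, poll_interval=10):
    """Poll an enrichment run until every row is processed, printing
    percent complete along the way."""
    while True:
        status = get_status()  # assumed shape: {"done": 40, "total": 100}
        pct = 100 * status["done"] // status["total"]
        print(f"{pct}% complete")
        if status["done"] >= status["total"]:
            return status
        time.sleep(poll_interval)

# Simulated status source standing in for the real API call:
updates = iter([{"done": 40, "total": 100}, {"done": 100, "total": 100}])
final = wait_for_enrichment(lambda: next(updates), poll_interval=0)
```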
Step 4: Segment by Fit
Skill: list-segmentation
Tier companies by hypothesis fit and data richness.
Example Prompt
- Reads hypothesis set and enrichment columns
- Scores each company on hypothesis match + data quality
- Assigns tier (1 = best fit, 3 = no match)
- Exports segmented CSV to /claude-code-gtm/csv/input/{campaign-slug}/companies_segmented.csv
- Tier 1: Strong hypothesis match + rich enrichment data → personalized emails
- Tier 2: Moderate match → hypothesis-based templates
- Tier 3: Weak match → exclude or re-research
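The tier assignment reduces to a small scoring rule; the 0-1 scores and thresholds below are illustrative, with the real scoring coming from the enrichment columns:

```python
def assign_tier(hypothesis_score: float, data_quality: float) -> int:
    """Map hypothesis match and enrichment richness (both 0-1) to a tier."""
    if hypothesis_score >= 0.7 and data_quality >= 0.7:
        return 1  # strong match + rich data -> personalized emails
    if hypothesis_score >= 0.4:
        return 2  # moderate match -> hypothesis-based templates
    return 3      # weak match -> exclude or re-research
```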
Phase 4: Email Generation & Sending
Step 1: Find People
Skill: people-search
Find decision makers at target companies.
Example Prompt
- Reads company CSV
- Runs LinkedIn search via Extruct API
- Returns people with titles, LinkedIn URLs
Step 2: Get Contact Info
Skill: email-search
Enrich with verified emails and phone numbers.
Example Prompt
- Reads people CSV
- Calls a contact enrichment provider (Prospeo or FullEnrich)
- Returns verified emails, phones, social profiles
Step 3: Build Email Prompt Template
Skill: email-prompt-building
Create a self-contained prompt for email generation.
Example Prompt
- Reads context file (voice, proof library, value prop)
- Reads research + hypothesis set
- Synthesizes everything into a self-contained prompt
- Saves to /claude-code-gtm/prompts/enterprise-saas/en_first_email.md
- Voice rules and banned words (from context file)
- Hypothesis-based P1 angles (from research)
- P2 value angles (from context → What We Do)
- P4 proof points (from context → Proof Library)
- Structural variants (by role and seniority)
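Assembled, the saved prompt file might be laid out like this (headings are illustrative; the skill determines the actual structure):

```markdown
# First Email Prompt: enterprise-saas (EN)

## Voice Rules
Copied from the context file, including banned words.

## P1: Opening Angles
One hypothesis-based angle per segment, keyed to the enrichment columns.

## P2: Value Angles
From the context file's What We Do section.

## P4: Proof Points
From the context file's Proof Library.

## Structural Variants
One variant per role and seniority.
```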
Step 4: Generate Emails
Skill: email-generation
Run the prompt template against each contact.
Example Prompt
- Reads prompt template + contact CSV
- Applies prompt per row with enrichment data
- Outputs email JSON per contact
- Saves to /claude-code-gtm/csv/output/{campaign-slug}/emails.csv
- Tier 1: Individual attention, routed to simulation for review
- Tier 2: Batched by hypothesis group
- Tier 3: Skipped
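The per-contact plumbing can be sketched like this; in the real skill each filled prompt is sent to the model to draft the email, and the field names here are assumptions:

```python
import json

def generate_emails(template: str, contacts: list[dict]) -> list[dict]:
    """Fill the prompt template per contact and emit one JSON payload
    per row; Tier 3 contacts are skipped, matching the routing above."""
    out = []
    for row in contacts:
        if int(row.get("tier", 3)) == 3:
            continue  # Tier 3: skipped
        prompt = template.format(**row)  # per-row personalization
        out.append({
            "email": row["email"],
            "tier": int(row["tier"]),
            "payload": json.dumps({"prompt": prompt}),
        })
    return out
```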
Step 5: Review Tier 1 Emails (Optional)
Skill: email-response-simulation
Simulate how prospects will read your emails.
Example Prompt
- Reads generated emails
- Simulates a persona reading the email
- Flags issues: buzzwords, weak hooks, generic copy
- Suggests rewrites constrained by voice rules
Step 6: Upload to Sequencer
Skill: campaign-sending
Upload leads to an email sequencer (e.g., Instantly).
Example Prompt
- Reads context file for DNC list
- Filters contacts against DNC
- Uploads to sequencer via API
- Returns upload summary
Feedback Loop
After the campaign runs, update your context file with learnings.
Example Prompt
- Imports campaign metrics (reply rate, positive replies)
- Updates Campaign History table in context file
- Promotes/retires hypotheses based on performance
- Adds new proof points from positive replies
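The promote/retire logic might look like the following; the thresholds and metric field names are invented for illustration:

```python
def update_hypotheses(hypotheses: list[dict], metrics: dict) -> list[dict]:
    """Promote or retire each hypothesis based on campaign performance."""
    updated = []
    for h in hypotheses:
        m = metrics.get(h["id"], {})
        reply_rate = m.get("reply_rate", 0.0)
        if reply_rate >= 0.05 and m.get("positive_replies", 0) > 0:
            status = "promoted"   # strong signal: keep and scale
        elif m.get("sent", 0) >= 100 and reply_rate < 0.01:
            status = "retired"    # enough volume, no traction
        else:
            status = h.get("status", "active")
        updated.append({**h, "status": status})
    return updated
```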
Example: Full Campaign in One Prompt
Plan mode can run the entire workflow:
- Build context file from extruct.ai
- Generate hypothesis set for “enterprise SaaS platforms”
- Run lookalike search from salesforce.com
- Design + run enrichment columns
- Segment into tiers
- Find VPs of Product
- Get emails
- Build prompt template
- Generate emails
- Export for sending
What Gets Created
After a full campaign, your file structure follows the layout described in Campaign Artifacts below.

Next Steps
Context Files
Deep dive on the global context file schema
Campaign Artifacts
Understand directory structure and file organization
Browse Skills
Explore all 13 skills and their capabilities
Quick Start
Try your first campaign