This example shows how to start from a single successful customer, analyze what makes them a good fit, and build a targeted outbound campaign to similar companies.

Scenario

You’re building a company intelligence API (like Extruct), and you have one customer — a procurement automation company that uses your API to score potential suppliers. You want to find more companies like them.

Starting Prompt

I'm building www.extruct.ai.
One of my customers is www.customer-example.com,
they use us to score suppliers in their procurement workflow.
Find similar companies and plan a campaign.
Step 1: Context Building

Claude loads the context-building skill and starts gathering information about your product and ICP.

Claude asks:

I’ll help you build a campaign starting from this win case. First, let me capture some context.
  1. What’s your product’s core value proposition?
  2. Who typically sends these emails (name/role)?
  3. What tone do you prefer for outreach?
  4. Any words or phrases you never want to use?

You provide:
  • Product: API-first company search and lookalike engine
  • Sender: Danny from Extruct
  • Tone: Builder-to-builder, analytical, no hype
  • Banned words: “agents”, “revolutionize”, “game-changing”
Claude creates:

claude-code-gtm/
└── context/
    └── extruct_context.md

The context file includes your win case, product info, voice rules, and ICP profile derived from the successful customer.
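The exact contents depend on your answers, but a context file along these lines is a reasonable mental model (illustrative only; the skill's actual output format may differ):

```markdown
# Extruct — GTM Context

## Win Case
- Customer: customer-example.com (procurement automation)
- Use case: scoring suppliers in their procurement workflow

## Product
- API-first company search and lookalike engine

## Voice
- Sender: Danny from Extruct
- Tone: builder-to-builder, analytical, no hype
- Banned words: agents, revolutionize, game-changing

## ICP (derived from win case)
- Procurement / vendor-management software companies that need company data
```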
Step 2: Hypothesis Building

Claude uses the hypothesis-building skill to identify why this customer is a good fit and what pain points your product solves for them.

Claude analyzes:
    Based on this win case, I see a few hypothesis angles:
    1. Procurement data gap — companies scoring vendors need company intelligence that’s not in their CRM
    2. Build vs buy for APIs — teams building features that need company data
    3. Legacy data quality — companies frustrated with stale or incomplete vendor databases
    Does this match your understanding? Any angles to add or adjust?
You confirm, and Claude saves:
    claude-code-gtm/
    └── context/
        ├── extruct_context.md
        └── procurement-automation/
            └── hypothesis_set.md
    
Step 3: List Building

Claude uses the list-building skill in lookalike mode, starting from your seed customer’s domain.

Claude runs:
    # Lookalike search from seed company
    curl -X GET "https://api.extruct.ai/v1/companies/customer-example.com/similar?limit=100" \
      -H "Authorization: Bearer $EXTRUCT_API_TOKEN"
    
Results: 94 similar companies found

Claude also runs semantic search for broader coverage:
# Query 1: Procurement workflow automation
curl -X GET "https://api.extruct.ai/v1/companies/search" \
  -H "Authorization: Bearer $EXTRUCT_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "q": "procurement automation software that helps companies evaluate and score suppliers",
    "limit": 100
  }'

# Query 2: Vendor management platforms
curl -X GET "https://api.extruct.ai/v1/companies/search" \
  -H "Authorization: Bearer $EXTRUCT_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "q": "vendor management and supplier qualification platforms",
    "limit": 100
  }'
    
    Combined results: 267 unique companies (after deduplication)
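Merging the lookalike and semantic-search results into one deduplicated domain list is a few lines of shell. This sketch assumes the domains have already been extracted from the JSON responses into plain text files, one domain per line (file names are illustrative):

```shell
# Example result files, one domain per line (stand-ins for the API output)
cat > similar.txt <<'EOF'
vendormatch.io
supplycore.com
EOF
cat > search_results.txt <<'EOF'
supplycore.com
procureiq.com
EOF

# sort -u merges the lists and drops duplicate domains
sort -u similar.txt search_results.txt > combined.txt

# Count of unique companies across both sources
wc -l < combined.txt
```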
Claude creates an Extruct table:
# Create table
curl -X POST "https://api.extruct.ai/v1/tables" \
  -H "Authorization: Bearer $EXTRUCT_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Procurement Automation - Seed Campaign",
    "kind": "company"
  }'

# Upload domains in batches
curl -X POST "https://api.extruct.ai/v1/tables/tbl_abc123/rows" \
  -H "Authorization: Bearer $EXTRUCT_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "rows": [
      {"data": {"input": "domain1.com"}},
      {"data": {"input": "domain2.com"}}
    ]
  }'
    
Output:
    Created table tbl_abc123 with 267 companies. View at: https://app.extruct.ai/tables/tbl_abc123
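The rows payload above can be generated from a plain domain list rather than written by hand. A minimal sketch, assuming one domain per line; for 267 domains you would additionally chunk the file (e.g. with `split -l 50`) and send one request per chunk:

```shell
# Domain list (stand-in for the 267 deduplicated domains)
cat > domains.txt <<'EOF'
domain1.com
domain2.com
EOF

# Build the {"rows": [...]} JSON body expected by the rows endpoint
rows=""
while IFS= read -r domain; do
  rows="${rows:+$rows,}{\"data\":{\"input\":\"$domain\"}}"
done < domains.txt
payload="{\"rows\":[$rows]}"

echo "$payload"
```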
Step 4: Enrichment Design

Claude uses enrichment-design to propose research columns aligned with your hypotheses.

Claude proposes:
I’ll design enrichment columns to validate hypothesis fit:

Column 1: Vendor Evaluation Workflow
Type: select
Options: Manual process, Basic tools, Automated platform, No formal process
Why: Confirms procurement sophistication

Column 2: Data Integration Signals
Type: text
Prompt: Check whether the company mentions API integrations, data enrichment, or third-party data in its product description
Why: Shows if they’re already comfortable with API-based solutions

Column 3: Recent Procurement News
Type: text
Prompt: Find recent news or announcements about procurement, vendor management, or supplier qualification features
Why: Timing signal for outreach
You approve, and Claude generates the column configs.

Step 5: List Enrichment

Claude uses list-enrichment to add the designed columns to your table.

Claude runs:
# Add enrichment columns
curl -X POST "https://api.extruct.ai/v1/tables/tbl_abc123/columns" \
  -H "Authorization: Bearer $EXTRUCT_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "kind": "agent",
    "name": "Vendor Evaluation Workflow",
    "key": "vendor_workflow",
    "value": {
      "agent_type": "research_pro",
      "prompt": "Research {input} and categorize their vendor evaluation process...",
      "output_format": "select",
      "labels": ["Manual process", "Basic tools", "Automated platform", "No formal process"]
    }
  }'

# Trigger enrichment
curl -X POST "https://api.extruct.ai/v1/tables/tbl_abc123/run" \
  -H "Authorization: Bearer $EXTRUCT_API_TOKEN"
    
Progress:

Enriching 267 companies across 3 columns…
Estimated time: 8-10 minutes
Status: 89/267 complete…
Status: 178/267 complete…
Status: 267/267 complete ✓
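These status updates imply polling the run until it finishes. A hedged sketch of a generic poll loop — the real status endpoint and response shape are not documented in this example, so the check command is left abstract:

```shell
# poll_until_done <command> — re-runs <command> until it prints "done"
poll_until_done() {
  check_cmd="$1"
  while true; do
    status="$("$check_cmd")"
    echo "Status: $status"
    [ "$status" = "done" ] && break
    sleep 30
  done
}

# In the real workflow the check command would wrap something like:
#   curl -s -H "Authorization: Bearer $EXTRUCT_API_TOKEN" \
#     "https://api.extruct.ai/v1/tables/tbl_abc123" | jq -r '.status'
# (endpoint path and field name are assumptions, not shown in this example)
```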
Step 6: List Segmentation

Claude uses list-segmentation to tier companies by fit and data quality.

Segmentation results:
    ## Segmented List: Procurement Automation Campaign
    
    ### Tier 1 — 31 companies (11.6%)
    
    Strong hypothesis fit + rich enrichment data + clear hook
    
    | Company | Domain | Hypothesis | Hook Signal |
    |---------|--------|------------|-------------|
    | VendorMatch | vendormatch.io | #1 Procurement data gap | CEO quoted on "lack of real-time vendor intelligence" |
    | SupplyCore | supplycore.com | #1 Procurement data gap | Launched supplier scoring feature last month |
    | ProcureIQ | procureiq.com | #2 Build vs buy | Job posting for "API Integration Engineer" |
    
    ### Tier 2 — 164 companies (61.4%)
    
    Medium fit or data-rich without standout hook
    
    ### Tier 3 — 72 companies (27.0%)
    
    Weak fit or insufficient data
    
Claude saves:
    claude-code-gtm/csv/input/procurement-seed-campaign/segmented_list.csv
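Downstream steps treat the tiers differently, so splitting the segmented CSV by tier is a natural next move. A sketch with awk, assuming a tier column (the real segmented_list.csv layout may differ):

```shell
# Miniature stand-in for segmented_list.csv
cat > segmented_list.csv <<'EOF'
company,domain,tier
VendorMatch,vendormatch.io,1
SupplyCore,supplycore.com,1
AcmeProc,acmeproc.com,2
EOF

# Keep the header row plus every row whose tier column equals 1
awk -F',' 'NR == 1 || $3 == "1"' segmented_list.csv > tier1.csv
cat tier1.csv
```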
    
Step 7: Email Prompt Building

Claude uses email-prompt-building to create a self-contained prompt template.

Claude reads:
  • Context file (voice, value prop, proof points)
  • Hypothesis set
  • Enrichment column definitions
Claude builds:
    # Email Prompt: Procurement Automation Campaign
    
    ## Voice Rules
    - Sender: Danny from Extruct
    - Tone: Builder-to-builder, analytical
    - Language: B2 English, simple sentences
    - Banned: agents, revolutionize, game-changing
    
    ## Structure
    P1: Personalized opener referencing {{vendor_workflow}} or {{procurement_news}}
    P2: Value prop + example query relevant to procurement use case
    P3: Social proof from similar company
    P4: Proof point (choose based on hypothesis)
    
    ## Hypothesis Mapping
    - If vendor_workflow = "Automated platform" → use #1 Procurement data gap angle
    - If mentions API in description → use #2 Build vs buy angle
    
    ...
    
Saved to:
    claude-code-gtm/prompts/procurement-automation/email_template.md
    
Step 8: Email Generation

Claude uses email-generation to create personalized emails for Tier 1 and templated emails for Tier 2.

For Tier 1 company (VendorMatch):
    Subject: Company intelligence for supplier scoring
    
    Hi [First Name],
    
    I saw your recent comments about the lack of real-time vendor intelligence 
    in procurement workflows. We've seen similar challenges with teams building 
    supplier scoring features.
    
    Extruct is an API for company search and data. You can query companies by 
    description and get structured profiles. Example:
    
    "Find B2B software vendors in supply chain with 50-200 employees founded 
    after 2015"
    
    A procurement platform similar to VendorMatch uses this to auto-score vendors 
    their customers are evaluating.
    
    Worth a look?
    
    Danny
    Extruct
    
    PS. The API is free for search (unlimited), $59/mo for 1K enrichment credits.
    
Output file:
    claude-code-gtm/csv/output/procurement-seed-campaign/emails.csv
    
    Columns: first_name, last_name, email, company_name, subject, email_body, tier, hypothesis
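Before uploading, it is cheap to verify that the generated CSV actually has the columns the sequencer mapping expects. A small guard using the column list above:

```shell
# Miniature stand-in for the generated emails.csv
cat > emails.csv <<'EOF'
first_name,last_name,email,company_name,subject,email_body,tier,hypothesis
Ana,Lee,ana@example.com,VendorMatch,Company intelligence for supplier scoring,Hi Ana...,1,1
EOF

expected="first_name,last_name,email,company_name,subject,email_body,tier,hypothesis"
if [ "$(head -1 emails.csv)" = "$expected" ]; then
  echo "header ok"
else
  echo "header mismatch" >&2
  exit 1
fi
```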
Step 9: Review & Send

Tier 1 Review (optional):

For your 31 Tier 1 emails, Claude can use email-response-simulation to simulate how the recipient might read the email.

Campaign Upload:

Claude uses the campaign-sending skill to upload to your sequencer (e.g., Instantly).
# Upload to Instantly via API
# Note: double quotes around the body so $INSTANTLY_API_KEY expands
# (inside single quotes the literal string "$INSTANTLY_API_KEY" would be sent)
curl -X POST "https://api.instantly.ai/api/v1/lead/add" \
  -H "Content-Type: application/json" \
  -d "{
    \"api_key\": \"$INSTANTLY_API_KEY\",
    \"campaign_id\": \"camp_xyz\",
    \"email\": \"[email protected]\",
    \"first_name\": \"...\",
    \"variables\": {...}
  }"
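Uploading one lead per CSV row can be scripted as a loop over the output file. A sketch only: naive comma splitting breaks on fields with embedded commas (email bodies usually have them), so a real pipeline should build the payload with a proper CSV/JSON tool. Campaign id and API key come from the environment:

```shell
# Miniature stand-in for the output CSV (comma-free fields for the naive split)
cat > leads.csv <<'EOF'
first_name,email
Ana,ana@example.com
EOF

# Skip the header, then build and print one lead payload per row
tail -n +2 leads.csv | while IFS=',' read -r first email; do
  payload="{\"api_key\":\"$INSTANTLY_API_KEY\",\"campaign_id\":\"$CAMPAIGN_ID\",\"email\":\"$email\",\"first_name\":\"$first\"}"
  echo "$payload"
  # curl -X POST "https://api.instantly.ai/api/v1/lead/add" \
  #   -H "Content-Type: application/json" -d "$payload"
done
```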
    
Result:
    Uploaded 195 contacts to Instantly campaign “Procurement Seed Campaign”
    • Tier 1: 31 contacts (Sequence A - personalized)
    • Tier 2: 164 contacts (Sequence B - templated)
    • Tier 3: Held for re-enrichment

    Final Artifacts

    claude-code-gtm/
    ├── context/
    │   ├── extruct_context.md
    │   └── procurement-automation/
    │       └── hypothesis_set.md
    ├── prompts/
    │   └── procurement-automation/
    │       └── email_template.md
    └── csv/
        ├── input/procurement-seed-campaign/
        │   └── segmented_list.csv
        └── output/procurement-seed-campaign/
            └── emails.csv
    

    Campaign Results

    After running the campaign for 2 weeks:
    • Tier 1: 31 sent, 8 replies (25.8% reply rate), 4 positive
    • Tier 2: 164 sent, 12 replies (7.3% reply rate), 3 positive
    • Top hypothesis: #1 Procurement data gap (9 of 11 positive replies)
    Next step: Update context file with campaign results using the context-building skill in feedback loop mode.
