## Trigger Phrases

“find companies”, “build a list”, “company search”, “prospect list”, “target accounts”, “outbound list”, “discover companies”, “ICP search”, “lookalike search”, “seed company”

## Official API Reference

Live docs: https://www.extruct.ai/docs
## Decision Tree

Before running any queries, determine the right approach:

- Seed company available (win case, existing customer, or user-named)? Use Method 1: Lookalike Search.
- Need a fast, broad first pass from an ICP description? Use Method 2: Semantic Search.
- Need a deep, auto-qualified list graded against criteria? Use Method 3: Discovery API.

## Before You Start

Read the company context file if it exists:

- ICP profiles - for query design and filters
- Win cases - for seed companies in lookalike mode
- DNC list - domains to exclude from results

Also read `claude-code-gtm/context/{vertical-slug}/hypothesis_set.md` if it exists. Use the Search angle field from each hypothesis to design search queries - these are pre-defined query suggestions tailored to each pain point.
## Environment

| Variable | Service |
|---|---|
| `EXTRUCT_API_TOKEN` | Extruct API |

Base URL: `https://api.extruct.ai/v1`
## Method 1: Lookalike Search

Use when you have a seed company (from win cases, existing customers, or user input).

Endpoint: `GET /companies/{identifier}/similar`, where `identifier` is a domain or company UUID.
### Key Parameters

- Filters: JSON with `include` (size, country) and `range` (founded)
- `limit`: max results (up to 200)
- `offset`: for pagination
### Response Fields

`name`, `domain`, `short_description`, `founding_year`, `employee_count`, `hq_country`, `hq_city`, `relevance_score`
### When to Use Lookalike
- You have a happy customer and want more like them
- Context file has win cases with domains
- User says “find companies similar to X”
### Tips
- Run multiple similar-company searches with different seed companies for broader coverage
- Combine with filters to constrain geography or size
- Deduplicate across runs by domain
- Default to `limit=100`; increase up to `200` when broader coverage is needed
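The tips above (multiple seed runs, deduplication by domain) can be sketched as follows. `similar_url` and `dedupe_by_domain` are hypothetical helper names, and the exact query-string parameters should be checked against the API reference.

```python
import urllib.parse

BASE_URL = "https://api.extruct.ai/v1"

def similar_url(identifier: str, limit: int = 100, offset: int = 0) -> str:
    """Build the lookalike-search URL for one seed company (domain or UUID)."""
    qs = urllib.parse.urlencode({"limit": limit, "offset": offset})
    return f"{BASE_URL}/companies/{urllib.parse.quote(identifier)}/similar?{qs}"

def dedupe_by_domain(results: list[dict]) -> list[dict]:
    """Merge results from several seed runs, keeping the first hit per domain."""
    seen: set[str] = set()
    merged: list[dict] = []
    for company in results:
        domain = (company.get("domain") or "").lower()
        if domain and domain not in seen:
            seen.add(domain)
            merged.append(company)
    return merged
```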
## Method 2: Semantic Search - Fast, Broad

Endpoint: `GET /companies/search`
### Key Parameters

- `query`: natural language description of the target companies
- Filters: JSON with `include` (size, country) and `range` (founded)
- `limit`: max results (up to 200)
### Response Fields

`name`, `domain`, `short_description`, `founding_year`, `employee_count`, `hq_country`, `hq_city`, `relevance_score`
### Query Strategy
- Write 3-5 queries per campaign, each from a different angle on the same ICP
- Describe the product/use case, not the company type
- Deduplicate across queries by domain - overlap is expected
- Default to `limit=100` per query; increase up to `200` when needed
- Target 200-800 companies total across all queries
## Method 3: Discovery API - Deep, Qualified

Endpoint: `POST /discovery_tasks`
### Key Parameters

- `query`: 2-3 sentence description of the ideal company (like a job description)
- `desired_num_results`: target result count
- `criteria`: list of `{ key, name, criterion }` objects for auto-grading (up to 5)

### Polling
Poll `GET /discovery_tasks/{task_id}` - status: `created | in_progress | done | failed`. Poll every 60 seconds.

Fetch results via `GET /discovery_tasks/{task_id}/results` with `limit` and `offset` params.
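The polling loop above can be sketched like this. `fetch_status` is an injected callable (e.g. wrapping an HTTP GET on `/discovery_tasks/{task_id}` and reading the `status` field) so the loop itself stays network-free and testable.

```python
import time

def poll_discovery(fetch_status, task_id: str,
                   interval: float = 60.0, max_polls: int = 120) -> str:
    """Poll a discovery task until it reaches a terminal status.

    fetch_status(task_id) should return one of:
    "created" | "in_progress" | "done" | "failed".
    Checks every `interval` seconds (60s per the doc's guidance).
    """
    for _ in range(max_polls):
        status = fetch_status(task_id)
        if status in ("done", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError(f"discovery task {task_id} did not finish in time")
```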
### Response Fields

`company_name`, `company_website`, `company_description`, `relevance` (0-100), `scores` (per-criterion grade 1-5 with explanation), `founding_year`
### Query Strategy
- Write queries like a job description - 2-3 sentences describing the ideal company
- Use criteria to auto-qualify - each company gets graded 1-5 per criterion
- Default to `desired_num_results=50` for the first pass; expand after quality review
- Use up to 5 criteria per task; keep criteria focused and non-overlapping
- Run separate tasks for different ICP segments
- Scans many candidates to find qualified matches - runtime depends on query scope
- Up to 250 results per task
## Upload to Table

Create a `company` kind table via `POST /tables` with a single input column (`kind: "input"`, `key: "input"`). Extruct auto-enriches each domain with a Company Profile.

Upload domains in batches of 50 via `POST /tables/{table_id}/rows`. Each row: `{ "data": { "input": "domain.com" } }`. Add a 0.5s delay between batches.

Pass `"run": true` in the rows payload to trigger agent columns on upload.
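The upload steps can be sketched as below. `post` is an injected callable so the batching and 0.5s pacing are visible without a live API call; the row shape and `run` flag come from this doc, while the `"rows"` wrapper key in the body is an assumption.

```python
import time

def make_batches(domains: list[str], batch_size: int = 50) -> list[list[dict]]:
    """Wrap each domain in the row shape Extruct expects, in batches of 50."""
    rows = [{"data": {"input": d}} for d in domains]
    return [rows[i:i + batch_size] for i in range(0, len(rows), batch_size)]

def upload_rows(post, table_id: str, domains: list[str],
                delay: float = 0.5) -> int:
    """POST each batch to /tables/{table_id}/rows; returns rows uploaded.

    post(path, body) is expected to perform the authenticated request.
    """
    uploaded = 0
    for batch in make_batches(domains):
        # "run": true triggers agent columns on upload (per the doc);
        # the "rows" key wrapping the batch is an assumed body shape.
        post(f"/tables/{table_id}/rows", {"rows": batch, "run": True})
        uploaded += len(batch)
        time.sleep(delay)  # 0.5s pause between batches
    return uploaded
```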
## Re-run After Enrichment

After the `list-enrichment` skill adds data points to this list, consider re-running list building using enrichment insights as Discovery criteria. For example:

- If enrichment reveals that “companies using legacy ERP” are the best fit, create a Discovery task with that as a criterion
- If enrichment shows a geographic cluster, run a Search with tighter geo filters
## Result Size Guidance
| Campaign stage | Target list size | Method |
|---|---|---|
| Exploration | 50-100 | Search (2-3 queries) |
| First campaign | 200-500 | Search (5 queries) + Discovery |
| Scaling | 500-2000 | Discovery (high desired_num_results) + multiple Search |
## Workflow

1. Verify API reference
   - Read local references for the Discovery API and search filters
   - Fetch live docs: https://www.extruct.ai/docs
   - Compare endpoints, params, and response fields
   - If discrepancies are found, update the local reference files and flag the changes to the user
2. Read context and decide method
   - Read the context file for ICP, seed companies, and DNC list
   - Follow the decision tree to pick the right method
3. Upload to table
   - Upload to an Extruct company table for auto-enrichment
   - Add agent columns if the user needs custom research