Campaign Feedback Loops
The best campaigns get better over time. This guide shows you how to import campaign results back into your context file to refine hypotheses, update proof points, and improve future campaigns.

The Feedback Loop
Each campaign tells you:
- Which pain points resonate (hypothesis validation)
- Which proof points drive replies (proof library tuning)
- Which roles engage (ICP refinement)
- Which email structures work (voice evolution)
When to Run the Feedback Loop
Import results after your campaign has run for 1-2 weeks with at least 50 sends. Any earlier and there isn't enough signal. You need:
- Open rate data
- Reply data (positive, negative, neutral)
- Bounces and unsubscribes
- Which hypothesis/tier each email used
Data Sources
You can import from:
- Email sequencer exports (Instantly, Smartlead, Lemlist)
- Manual tracking (spreadsheet with opens/replies)
- Pasted reply threads (for qualitative analysis)
Instantly Export
Go to your campaign → Export leads → Download CSV with:
- Email address
- Campaign status (Completed, Active, Paused)
- Lead status (Interested, Not Interested, No Reply)
- Email opened (Yes/No)
- Replied (Yes/No)
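Once you have the CSV, the Yes/No flags need normalizing before you can compute rates. A minimal sketch, assuming the column names above match your export header exactly (verify against your actual file, since sequencers rename columns between versions):

```python
import csv
import io

def load_instantly_export(csv_text):
    """Parse an Instantly lead export, turning Yes/No flags into booleans.

    Column names ("Email address", "Lead status", "Email opened", "Replied")
    are assumptions based on this guide; check them against your header row.
    """
    rows = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        rows.append({
            "email": row["Email address"],
            "lead_status": row["Lead status"],
            "opened": row["Email opened"].strip().lower() == "yes",
            "replied": row["Replied"].strip().lower() == "yes",
        })
    return rows

sample = """Email address,Lead status,Email opened,Replied
jane@acme.com,Interested,Yes,Yes
bob@globex.com,No Reply,Yes,No
"""
leads = load_instantly_export(sample)
print(sum(1 for l in leads if l["replied"]))  # 1
```

The same shape works for Smartlead exports; only the column names change.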
Smartlead Export
Campaigns → Select campaign → Export → Include:
- Lead details
- Sequence stats
- Reply status
Workflow: Import Results
Trigger Feedback Loop Mode
Ask Claude to run context-building in feedback loop mode, or paste your exported results directly into the conversation.
Claude Extracts Metrics
Claude reads your data and extracts:
- Campaign name, vertical, list size
- Open rate, reply rate, positive reply rate
- Which hypotheses got replies (from tier/hypothesis columns in your emails CSV)
- Patterns in positive vs negative replies
- Any new pain points mentioned in replies
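The headline metrics in that list reduce to simple ratios over the normalized rows. A sketch, assuming each row carries boolean `opened`/`replied` flags and a `positive` flag (e.g. lead status is "Interested"); the field names are illustrative:

```python
def campaign_metrics(rows):
    """Compute open rate, reply rate, and positive reply rate.

    Each row is a dict with boolean "opened", "replied", and "positive"
    fields; these names are assumptions, not a fixed schema.
    """
    sends = len(rows)
    opens = sum(r["opened"] for r in rows)
    replies = sum(r["replied"] for r in rows)
    positives = sum(r["replied"] and r["positive"] for r in rows)
    return {
        "sends": sends,
        "open_rate": opens / sends if sends else 0.0,
        "reply_rate": replies / sends if sends else 0.0,
        "positive_reply_rate": positives / sends if sends else 0.0,
    }

rows = [
    {"opened": True, "replied": True, "positive": True},
    {"opened": True, "replied": False, "positive": False},
    {"opened": False, "replied": False, "positive": False},
    {"opened": True, "replied": True, "positive": False},
]
m = campaign_metrics(rows)
print(m["reply_rate"])  # 0.5
```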
Validate or Retire Hypotheses
Claude updates `## Active Hypotheses` based on performance:
- Promote to Validated: hypotheses whose emails drew replies at or above target.
- Demote to Retired: hypotheses that underperformed despite enough sends.

Update Proof Library
If campaign results surface new proof points, Claude adds them, such as a new win case pulled from a reply. It also removes proof points that didn't resonate.
Extract New Hypotheses from Replies
If replies mention pain points you didn't predict, Claude surfaces them.

Example reply:

> "Interesting timing—we're actually struggling with duplicate suppliers in our database. Same company, 3 different profiles because of name variations. Do you handle entity resolution?"

Claude suggests adding a new hypothesis around duplicate-record/entity-resolution pain to your Active Hypotheses list.
What Gets Updated
Feedback loop mode touches these sections in your context file:

| Section | Update Logic |
|---|---|
| Campaign History | Add new row with metrics + learnings |
| Active Hypotheses | Promote (validated), demote (retired), or add (new from replies) |
| Proof Library | Add new proof points from wins, remove non-performers |
| ICP | Refine role targeting based on reply patterns |
| Voice | Update hard constraints if replies indicate confusion |
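The promote/demote logic in the table can be sketched as a threshold rule. The thresholds and minimum-send floor here are illustrative assumptions, not rules from this guide:

```python
def hypothesis_action(sends, replies, min_sends=30,
                      promote_rate=0.05, retire_rate=0.02):
    """Suggest a lifecycle action for one hypothesis.

    Assumed thresholds: promote at a 5%+ reply rate, retire at 2% or
    below, and keep testing while sends are under min_sends either way.
    """
    if sends < min_sends:
        return "keep testing"  # not enough signal yet
    rate = replies / sends
    if rate >= promote_rate:
        return "promote to Validated"
    if rate <= retire_rate:
        return "demote to Retired"
    return "keep testing"

print(hypothesis_action(80, 8))  # promote to Validated
print(hypothesis_action(40, 0))  # demote to Retired
```

Tune the thresholds to your own baseline reply rates; a 3% reply rate may be excellent in one vertical and weak in another.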
Analyzing Replies: Positive vs Negative
Positive Reply Patterns
Look for:
- Pain confirmation: “Yes, we see this problem”
- Detail requests: “How does your API work?”
- Timing signals: “Interesting timing, we’re evaluating…”
- Forward/intro offers: “Let me connect you with our VP Product”

For each positive reply, capture:
- Which hypothesis they confirmed
- Language they used (add to hypothesis description)
- Any new pain points they mentioned
Negative Reply Patterns
Look for:
- Irrelevant: “We don’t have this problem”
- Wrong persona: “You should talk to [different role]”
- Timing: “Not a priority right now”
- Competitive: “We built this in-house”

For each negative reply, diagnose:
- Hypothesis mismatch (refine targeting)
- Role mismatch (update ICP)
- Positioning issues (update voice/value prop)
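A crude keyword triage can pre-sort replies before Claude reads them in depth. This is a rough first pass; the keyword lists are illustrative, and anything ambiguous falls through to "neutral" for manual or Claude review:

```python
POSITIVE_SIGNALS = ["interesting timing", "how does", "connect you", "yes, we"]
NEGATIVE_SIGNALS = ["not a priority", "don't have this problem",
                    "built this in-house", "you should talk to"]

def triage_reply(text):
    """Keyword-based first-pass classification of a reply.

    Negative signals are checked first so a polite brush-off that also
    sounds interested still lands in the negative bucket.
    """
    t = text.lower()
    if any(s in t for s in NEGATIVE_SIGNALS):
        return "negative"
    if any(s in t for s in POSITIVE_SIGNALS):
        return "positive"
    return "neutral"

print(triage_reply("Interesting timing, we're evaluating vendors now."))
print(triage_reply("Not a priority right now, sorry."))
```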
Example: Full Feedback Loop
Campaign Results
180 sends, 11 replies (6.1% reply rate), broken down by hypothesis and by role below.
Breakdown by Hypothesis
| Hypothesis | Sends | Replies | Reply Rate |
|---|---|---|---|
| #1 Vendor coverage gaps (APAC/LATAM) | 80 | 8 | 10% |
| #2 Vendor onboarding efficiency | 60 | 2 | 3% |
| #3 Compliance data gaps | 40 | 1 | 2.5% |
Breakdown by Role
| Role | Sends | Replies | Reply Rate |
|---|---|---|---|
| VP Product | 70 | 6 | 8.6% |
| Chief Product Officer | 50 | 4 | 8% |
| Head of Data | 35 | 1 | 2.9% |
| VP Engineering | 25 | 0 | 0% |
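Both breakdown tables follow the same aggregation: group sends by a tag and compute reply rates. A minimal sketch, assuming each send is a row tagged with its hypothesis and role (field names are illustrative):

```python
from collections import defaultdict

def breakdown(rows, key):
    """Aggregate sends and replies by a tag column (hypothesis, role, ...)."""
    agg = defaultdict(lambda: {"sends": 0, "replies": 0})
    for r in rows:
        bucket = agg[r[key]]
        bucket["sends"] += 1
        bucket["replies"] += int(r["replied"])
    return {k: {**v, "reply_rate": v["replies"] / v["sends"]}
            for k, v in agg.items()}

rows = (
    [{"role": "VP Product", "replied": True}] * 3
    + [{"role": "VP Product", "replied": False}] * 7
    + [{"role": "Head of Data", "replied": False}] * 5
)
by_role = breakdown(rows, "role")
print(by_role["VP Product"]["reply_rate"])  # 0.3
```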
Context File Updates
Campaign History: Claude appends a new row with the metrics and learnings above.

Multi-Campaign Analysis
After 3-5 campaigns, patterns emerge across verticals.

Cross-Vertical Learnings
- Founder/CEO/CPO titles > VP titles across all campaigns
- Specific keywords (“emerging”, “APAC”, “decay”) drive engagement
- Engineering roles don’t reply to GTM pitches (even for technical products)
- Larger companies (500+ employees) have higher reply rates (better data, clearer pain)

Resulting actions:
- Update ICP to prioritize Founder/CEO/CPO
- Build a “high-signal keywords” library
- Remove engineering roles from people search
- Increase list-building employee filters (100+ → 200+)
Hypothesis Library Evolution
Hypothesis Lifecycle
A hypothesis moves through three states: Active (being tested) → Validated (promoted on strong replies) or Retired (demoted after repeated underperformance).
After 10 Campaigns
You’ll have:
- 5-7 validated hypotheses (work across verticals)
- 10-15 vertical-specific hypotheses (validated per segment)
- 20+ retired hypotheses (learned what doesn’t work)
Call Recording Integration
If a reply turns into a call, import the transcript to extract deeper signals:
- ICP signals: What Jane cares about, her workflow, team structure
- Win case data: If they become a customer, capture their use case
- Proof point candidates: Specific results or quotes
- Hypothesis validation: Which pain points Jane confirmed
- Voice feedback: Reaction to positioning or language
Automation: Export → Import
For high-volume campaigns, automate the export → import step.

Instantly Webhook → Claude Code
- Set up Instantly webhook for campaign completion
- Trigger a script that downloads the CSV export
- Call Claude Code with the downloaded CSV path and a feedback-loop prompt
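The last step can be a one-line non-interactive invocation built by the webhook handler. A sketch, assuming the Claude Code CLI's print mode (`claude -p "<prompt>"`); the prompt wording and the `CONTEXT.md` filename are illustrative, not fixed conventions:

```python
import shlex

def build_feedback_command(csv_path, context_file="CONTEXT.md"):
    """Build a non-interactive Claude Code invocation for the feedback step.

    Assumes the CLI's -p/--print flag; the prompt text and context-file
    name are assumptions for this sketch.
    """
    prompt = (
        f"Run the feedback loop: read campaign results from {csv_path} "
        f"and update {context_file} (Campaign History, Active Hypotheses, "
        f"Proof Library)."
    )
    return ["claude", "-p", prompt]

cmd = build_feedback_command("exports/q3-campaign.csv")
print(shlex.join(cmd))
# In the webhook handler: subprocess.run(cmd, check=True)
```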
Best Practices
- Wait for enough signal (1-2 weeks, 50+ sends) before importing results
- Keep retired hypotheses in your context file; knowing what doesn't work is signal too
- Re-run multi-campaign analysis every 3-5 campaigns to catch cross-vertical patterns
Next Steps
- Your First Campaign - Run your first campaign to generate feedback
- Managing Multiple Verticals - Apply learnings across segments
- Environment Setup - Configure API credentials for exports