Available Prompting Strategies
CheckThat AI implements three core prompting approaches:
Zero-Shot
Direct claim extraction without examples
Few-Shot
Learning from provided examples
Chain-of-Thought
Step-by-step reasoning process
Zero-Shot Prompting
Zero-shot prompting provides direct instructions without examples. It is best suited to simple, straightforward claims.
How It Works
The system uses the base instruction prompt defined in prompts.py:55:
- Sentence Splitting: Break down the post into individual sentences
- Selection: Identify verifiable information
- Disambiguation: Resolve referential and structural ambiguity
- Decomposition: Extract self-contained propositions
Example API Call
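A minimal sketch of a zero-shot request, assuming a chat-style message format. The prompt wording and function name below are illustrative assumptions based on the four steps above, not the literal contents of prompts.py:

```python
# Illustrative sketch only: the prompt text and helper name are assumptions,
# not copied from prompts.py.
ZERO_SHOT_SYSTEM_PROMPT = (
    "You extract claims from social media posts. "
    "1) Split the post into individual sentences. "
    "2) Select sentences that contain verifiable information. "
    "3) Resolve referential and structural ambiguity. "
    "4) Decompose each sentence into self-contained propositions."
)

def build_zero_shot_messages(post: str) -> list[dict]:
    """Build a chat-style message list with no examples (zero-shot)."""
    return [
        {"role": "system", "content": ZERO_SHOT_SYSTEM_PROMPT},
        {"role": "user", "content": post},
    ]

messages = build_zero_shot_messages(
    "NASA confirmed the presence of water on the Moon in 2020."
)
print([m["role"] for m in messages])  # ['system', 'user']
```

The message list can then be sent to any chat-completion endpoint; no examples are included, which keeps token usage low.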
When to Use Zero-Shot
Best For
- Simple, clear statements
- News headlines
- Direct factual claims
- Quick processing needs
Avoid When
- Complex context required
- Ambiguous language
- Domain-specific jargon
- Subtle misinformation
Few-Shot Prompting
Few-shot prompting provides examples to guide the model's behavior. The system includes pre-configured examples in prompts.py:59-118.
Example Structure
CheckThat AI includes 5 carefully curated examples covering:
- Medical/institutional claims
- Celebrity/viral content
- Health misinformation
- Scientific false claims
- COVID-19 related content
Implementation Details
The few-shot prompt presents each example in a consistent structure:
Example API Call with Few-Shot
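A sketch of how few-shot examples can be interleaved into the message list. The example pair below is invented for illustration and is NOT one of the five curated examples in prompts.py:59-118:

```python
# Hypothetical sketch: the example pair is invented, and the helper name is
# an assumption, not part of the project's documented API.
FEW_SHOT_EXAMPLES = [
    {
        "post": "Dr. Smith from City Hospital says the new vaccine is 95% effective!",
        "claim": "A City Hospital doctor stated the new vaccine is 95% effective.",
    },
]

def build_few_shot_messages(post: str, system_prompt: str) -> list[dict]:
    """Interleave example posts (user) and extracted claims (assistant)."""
    messages = [{"role": "system", "content": system_prompt}]
    for ex in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": ex["post"]})
        messages.append({"role": "assistant", "content": ex["claim"]})
    messages.append({"role": "user", "content": post})
    return messages

messages = build_few_shot_messages("Celebrity X died yesterday!", "Extract claims.")
print([m["role"] for m in messages])  # ['system', 'user', 'assistant', 'user']
```

Each user/assistant pair shows the model one complete input-output demonstration before it sees the real post.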
When to Use Few-Shot
Use few-shot prompting when you need a consistent output format or have domain-specific examples:
- Establishing output format consistency
- Domain-specific terminology
- Complex extraction patterns
- Training on edge cases
Chain-of-Thought (CoT) Reasoning
Chain-of-Thought prompting adds explicit reasoning steps before the final claim. It is implemented with the trigger phrase defined in prompts.py:57.
How CoT Works
The enhanced few-shot CoT prompt (prompts.py:120-215) includes detailed reasoning for each example.
Example API Call with CoT
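A sketch of a CoT request. The actual trigger phrase lives in prompts.py:57; the classic "Let's think step by step." is used here only as a stand-in assumption:

```python
# Sketch only: COT_TRIGGER is a placeholder for the real trigger phrase in
# prompts.py:57, and the helper name is an assumption.
COT_TRIGGER = "Let's think step by step."

def build_cot_messages(post: str, system_prompt: str) -> list[dict]:
    """Append a reasoning trigger so the model explains before extracting."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"{post}\n\n{COT_TRIGGER}"},
    ]

messages = build_cot_messages(
    "They said it works, but the study was retracted.",
    "Extract verifiable claims.",
)
print(COT_TRIGGER in messages[1]["content"])  # True
```

The extra reasoning text the model produces is what drives CoT's higher token usage and cost.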
When to Use Chain-of-Thought
Best for:
- Complex, ambiguous statements
- Multi-part claims requiring decomposition
- Cases needing explicit context resolution
- High-stakes fact-checking scenarios
Strategy Comparison
| Strategy | Speed | Accuracy | Token Usage | Cost |
|---|---|---|---|---|
| Zero-Shot | Fast | Good | Low | $ |
| Few-Shot | Medium | Better | Medium | $$ |
| Chain-of-Thought | Slow | Best | High | $$$ |
System Prompt Details
The core system prompt (ClaimNorm) from prompts.py:3-53 includes:
View Full System Prompt Structure
Identity:
- ClaimNorm AI assistant
- Expert in claim detection, extraction, and normalization
- Sentence Splitting and Context Creation
  - Split the post into sentences
  - Create context using the 2 preceding and 2 following sentences
- Selection
  - Determine if the sentence contains verifiable information
  - Rewrite to retain only verifiable parts
- Disambiguation
  - Resolve referential ambiguity (unclear references)
  - Resolve structural ambiguity (multiple interpretations)
  - Discard if ambiguity cannot be resolved
- Decomposition
  - Identify specific, verifiable propositions
  - Ensure decontextualization (self-contained)
  - Create the simplest discrete units of information
- Fallback
  - If no verifiable claims are found, return an extractive summary
  - Maximum 25 words
  - Preserve named entities
  - Return in news-headline style
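The context-creation step above (2 preceding and 2 following sentences) can be sketched as follows. The function and variable names are illustrative, not taken from the codebase:

```python
# Minimal sketch of context creation: each sentence is paired with up to
# 2 preceding and 2 following sentences from the same post.
def sentence_contexts(sentences: list[str], before: int = 2, after: int = 2):
    """Yield (sentence, context) pairs for each sentence in the post."""
    for i, sentence in enumerate(sentences):
        context = sentences[max(0, i - before):i] + sentences[i + 1:i + 1 + after]
        yield sentence, context

post = ["A.", "B.", "C.", "D.", "E.", "F."]
for sentence, context in sentence_contexts(post):
    print(sentence, context)
# e.g. the context of "C." is ['A.', 'B.', 'D.', 'E.']
```

Sentences near the start or end of a post simply get a shorter context window.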
Advanced Techniques
Combining Strategies
You can combine strategies for optimal results.
Custom System Prompts
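As an illustration of both ideas, the sketch below assembles a combined few-shot plus chain-of-thought prompt that can then be supplied as a custom system prompt. All names and wording here are assumptions, not copied from prompts.py:

```python
from typing import Sequence

# Hypothetical sketch: builds a custom system prompt by layering optional
# few-shot examples and a chain-of-thought instruction onto a base prompt.
def build_custom_prompt(base: str, examples: Sequence[str] = (), cot: bool = False) -> str:
    """Join base instructions, examples, and an optional CoT trigger."""
    parts = [base, *examples]
    if cot:
        parts.append("Reason step by step before stating the final claim.")
    return "\n\n".join(parts)

prompt = build_custom_prompt(
    "Extract verifiable, self-contained claims from the post.",
    examples=["Post: Dr. Smith says the vaccine is 95% effective!\n"
              "Claim: A doctor stated the vaccine is 95% effective."],
    cot=True,
)
print(prompt)
```

The resulting string replaces the default ClaimNorm system prompt in the request.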
Override the default system prompt for specialized use cases.
Best Practices
Start Simple
Begin with zero-shot, upgrade to few-shot if needed
Test Examples
Validate few-shot examples match your domain
Monitor Costs
CoT uses 2-3x more tokens than zero-shot
Measure Quality
Use G-Eval metrics to compare strategies
Next Steps
Self-Refine
Iteratively improve claim quality
Custom Evaluation
Define your own quality metrics