Overview
The fact checking guardrail validates bot responses against retrieved evidence (relevant chunks from your knowledge base). It:

- Checks if bot responses are accurate relative to the evidence
- Prevents the bot from making unsupported claims
- Works with RAG (Retrieval-Augmented Generation) systems
- Requires explicit activation per response
Quick Start
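A minimal setup (a sketch following the standard NeMo Guardrails configuration pattern; the model entry is illustrative, use your own provider and model) registers the rail in config.yml:

```yaml
# config.yml - minimal setup with the fact checking output rail
models:
  - type: main
    engine: openai        # illustrative; use your own provider
    model: gpt-3.5-turbo  # illustrative model name

rails:
  output:
    flows:
      - self check facts
```

A flow must then set $check_facts to True before the bot responds (see Activating Fact Checking below).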
How It Works
The fact checking rail:

- Checks if $check_facts is set to True
- Retrieves the bot's response and relevant evidence chunks
- Prompts the LLM to assess factual accuracy
- Returns a score from 0.0 (inaccurate) to 1.0 (accurate)
- Blocks responses with accuracy below 0.5
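The steps above can be sketched as a Colang subflow (an illustrative reconstruction of the rail's logic, not the exact library source):

```colang
define subflow self check facts
  if $check_facts == True
    # Reset the flag so the check only runs once per activation
    $check_facts = False
    $accuracy = execute self_check_facts
    if $accuracy < 0.5
      bot refuse to respond
      stop
```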
Configuration
Basic Configuration
config.yml
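Register the rail under the output rails (standard NeMo Guardrails syntax):

```yaml
rails:
  output:
    flows:
      - self check facts
```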
Activating Fact Checking
Fact checking must be explicitly activated from within a flow:
flows.co
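For example (flow and message names are illustrative):

```colang
define flow answer report question
  user ask about report
  # Enable the fact check for this response only
  $check_facts = True
  bot provide report answer
```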
If $check_facts is not set to True, the fact checking rail does nothing. This allows you to selectively enable fact checking only when needed.
Evidence Requirements
The fact checker requires $relevant_chunks to be populated. If no evidence is available, the check returns True (allows the response).
Accuracy Threshold
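The gating logic itself is a simple comparison; a plain-Python sketch (illustrative only, not the library's actual code):

```python
def passes_fact_check(accuracy: float, threshold: float = 0.5) -> bool:
    """Return True when the accuracy score meets the blocking threshold."""
    # Scores below the threshold mean the response is not sufficiently
    # supported by the evidence, so the rail blocks it.
    return accuracy >= threshold

print(passes_fact_check(0.8))  # True: response allowed
print(passes_fact_check(0.3))  # False: response blocked
```

Note that the boundary is inclusive: a score of exactly 0.5 passes, since only scores strictly below the threshold are blocked.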
The default threshold is 0.5. Responses with accuracy below this are blocked.
Behavior
When a fact check fails (accuracy < 0.5), the behavior depends on your configuration.
With Rails Exceptions
config.yml
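Exceptions are enabled with the top-level enable_rails_exceptions option (assuming a recent NeMo Guardrails version):

```yaml
enable_rails_exceptions: True
```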
The rail returns a FactCheckRailException with a message about the failed check.
Without Rails Exceptions
The bot refuses to respond and aborts the conversation.
Context Variables
The fact checking rail uses:

Input:
- $relevant_chunks - Evidence from the knowledge base (list of strings)
- $bot_message - The bot's generated response
- $check_facts - Boolean flag to enable checking

Output:
- $accuracy - Float from 0.0 to 1.0 representing factual accuracy
- $check_facts - Reset to False after checking
Custom Flows
Create custom fact checking flows:
flows.co
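For example, a custom flow that warns the user instead of blocking the response (an illustrative sketch; the flow and bot message names are hypothetical):

```colang
define subflow check facts and warn
  if $check_facts == True
    $check_facts = False
    $accuracy = execute self_check_facts
    if $accuracy < 0.5
      # Warn instead of refusing and stopping
      bot inform answer may be inaccurate
```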
Integration with RAG
Typical RAG flow with fact checking:
flows.co
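An illustrative sketch (flow and message names are hypothetical; retrieve_relevant_chunks is NeMo Guardrails' built-in retrieval action, and in many setups the runtime populates $relevant_chunks automatically):

```colang
define flow answer question from knowledge base
  user ask question
  # Retrieve evidence, then enable the fact check for this response
  $relevant_chunks = execute retrieve_relevant_chunks
  $check_facts = True
  bot respond to question
```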
Temperature Settings
Fact checking uses the lowest possible temperature for consistency.
Implementation Details
The fact checking flows and actions are defined in:

- /nemoguardrails/library/self_check/facts/flows.co
- /nemoguardrails/library/self_check/facts/actions.py

SelfCheckFactsAction performs the fact check using the LLM. It uses the self_check_facts task prompt, which you can customize in prompts.yml.
Custom Task Prompts
Customize the fact checking prompt:
prompts.yml
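For example (a sketch; the default template exposes the evidence and response variables, but verify the variable names against your installed version):

```yaml
prompts:
  - task: self_check_facts
    content: |-
      You are given a task to identify if the hypothesis is grounded in the evidence.
      "evidence": {{ evidence }}
      "hypothesis": {{ response }}
      Answer with "yes" if the hypothesis is grounded in the evidence, "no" otherwise.
```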
Best Practices
- Enable selectively - Only use fact checking for knowledge-based responses
- Provide good evidence - Ensure $relevant_chunks contains relevant, high-quality information
- Handle failures gracefully - Consider warning users instead of always blocking
- Test threshold - The default 0.5 may need adjustment for your use case
- Monitor performance - Fact checking adds latency; consider caching strategies
Limitations
- Requires evidence chunks to be available
- Adds latency due to additional LLM call
- May have false positives/negatives depending on evidence quality
- Works best with clear, factual content (not opinions or creative responses)