
Detect hallucinations in AI responses with confidence

PAS2 (Paraphrase-based Approach for Scrutinizing Systems) is a powerful hallucination detection system that uses semantic paraphrasing and multi-model verification to identify factual inconsistencies in LLM responses.

How it works

PAS2 sends semantically equivalent variations of your query to an LLM, then uses a judge model to analyze the responses for factual inconsistencies. When an AI hallucinates, it often gives different answers to the same question asked in different ways.
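
In outline, the loop looks like the sketch below. The helper names are illustrative stand-ins, not PAS2's actual internals:

# Sketch of the paraphrase-based detection loop; helper names are
# illustrative stand-ins, not PAS2's actual internals.

def generate_paraphrases(query: str, n: int = 3) -> list[str]:
    # Stand-in: PAS2 does this with Mistral's JSON mode.
    return [f"(rewording {i} of) {query}" for i in range(1, n + 1)]

def ask_llm(query: str) -> str:
    # Stand-in for a Mistral Large completion call.
    return f"answer to: {query}"

def judge_consistency(responses: list[str]) -> dict:
    # Stand-in: PAS2 sends all responses to an o3-mini judge.
    consistent = len(set(responses)) == 1
    return {"hallucination_detected": not consistent, "responses": responses}

def detect(query: str) -> dict:
    paraphrases = generate_paraphrases(query)                 # 1. reword the query
    responses = [ask_llm(q) for q in [query, *paraphrases]]   # 2. answer each variant
    return judge_consistency(responses)                       # 3. compare the answers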

Quick start

Get up and running with PAS2 in under 5 minutes

How it works

Understand the paraphrase-based detection system

API reference

Explore the complete API documentation

Configuration

Customize detection parameters and models

Key features

Multi-model architecture

PAS2 uses Mistral Large to generate responses and OpenAI's o3-mini as an independent judge to detect hallucinations. Because the judge comes from a different vendor and model family, the generator never evaluates its own output.
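
A minimal sketch of that split, assuming the current mistralai and openai Python SDKs (the model identifiers and prompts are placeholders, not PAS2's exact configuration):

import os
from mistralai import Mistral
from openai import OpenAI

# Responder: Mistral Large answers the (paraphrased) queries.
responder = Mistral(api_key=os.environ["MISTRAL_API_KEY"])
answer = responder.chat.complete(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "Who was the first person on the moon?"}],
).choices[0].message.content

# Judge: a separate vendor's model compares the collected answers.
judge = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
verdict = judge.chat.completions.create(
    model="o3-mini",
    messages=[{"role": "user", "content": f"Is this answer internally consistent?\n{answer}"}],
).choices[0].message.content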

Paraphrase generation

Automatically generates semantically equivalent variations of queries using Mistral’s JSON mode. Each paraphrase preserves the original meaning while varying structure and wording.
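
For illustration, a JSON-mode paraphrase request might look like the following; the prompt wording is an assumption, while response_format is Mistral's documented JSON mode flag:

import json
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Request the rewordings as structured JSON so they can be parsed reliably.
resp = client.chat.complete(
    model="mistral-large-latest",
    messages=[{
        "role": "user",
        "content": 'Rewrite "Who was the first person on the moon?" three ways '
                   'without changing its meaning. Reply as JSON: '
                   '{"paraphrases": ["...", "...", "..."]}',
    }],
    response_format={"type": "json_object"},  # Mistral JSON mode
)
paraphrases = json.loads(resp.choices[0].message.content)["paraphrases"]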

Real-time progress tracking

Visual feedback during analysis with detailed progress updates for paraphrase generation, response collection, and judgment phases.
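
In a Gradio app, staged feedback like this is typically wired up with gr.Progress; a sketch with illustrative phase names and stubbed work:

import time
import gradio as gr

def analyze(query: str, progress=gr.Progress()):
    # Report each phase of the pipeline as it runs (phases are illustrative).
    progress(0.1, desc="Generating paraphrases")
    time.sleep(1)  # stand-in for the paraphrase call
    progress(0.5, desc="Collecting responses")
    time.sleep(1)  # stand-in for the responder calls
    progress(0.9, desc="Judging consistency")
    time.sleep(1)  # stand-in for the judge call
    return "No hallucination detected"

demo = gr.Interface(fn=analyze, inputs="text", outputs="text")
demo.launch()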

Detailed analysis output

{
  "hallucination_detected": true,
  "confidence_score": 0.87,
  "conflicting_facts": [
    {
      "fact": "Moon landing date",
      "variation_1": "July 20, 1969",
      "variation_2": "July 21, 1969"
    }
  ],
  "reasoning": "Responses show inconsistent dates for the moon landing...",
  "summary": "Factual inconsistency detected in temporal information"
}
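
Since pydantic is among the dependencies, a result like this maps naturally onto a pair of models. The class names below are hypothetical, chosen only to mirror the fields above:

from pydantic import BaseModel

class ConflictingFact(BaseModel):  # hypothetical name, mirrors the fields above
    fact: str
    variation_1: str
    variation_2: str

class DetectionResult(BaseModel):  # hypothetical name
    hallucination_detected: bool
    confidence_score: float
    conflicting_facts: list[ConflictingFact]
    reasoning: str
    summary: str

raw = '{"hallucination_detected": true, "confidence_score": 0.87, "conflicting_facts": [], "reasoning": "...", "summary": "..."}'
result = DetectionResult.model_validate_json(raw)
print(result.confidence_score)  # 0.87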

Persistent feedback storage

Built-in SQLite database stores detection results and user feedback with support for Hugging Face Spaces persistent storage.
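
As a sketch, the storage layer can be as simple as the schema below. The table layout is illustrative, not PAS2's actual schema; on Hugging Face Spaces, persistent storage is mounted at /data, so the database path would point there:

import sqlite3

# Illustrative schema; on Spaces use e.g. "/data/pas2_feedback.db"
# so the database survives restarts.
conn = sqlite3.connect("pas2_feedback.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS detections (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        query TEXT NOT NULL,
        hallucination_detected INTEGER NOT NULL,
        confidence_score REAL,
        user_feedback TEXT,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.execute(
    "INSERT INTO detections (query, hallucination_detected, confidence_score) "
    "VALUES (?, ?, ?)",
    ("Who was the first person on the moon?", 1, 0.87),
)
conn.commit()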

Use cases

QA systems

Validate factual accuracy in customer support bots

Content generation

Verify consistency in AI-generated articles

Research tools

Ensure reliability in AI research assistants

Educational apps

Detect misinformation in tutoring systems

Data extraction

Validate AI-extracted facts from documents

Fact-checking

Cross-verify AI claims automatically

Get started

1. Install dependencies

pip install mistralai openai gradio pydantic

2. Set API keys

export MISTRAL_API_KEY="your_mistral_key"
export OPENAI_API_KEY="your_openai_key"

3. Run your first detection

from pas2 import PAS2

detector = PAS2()
results = detector.detect_hallucination("Who was the first person on the moon?")
print(results["hallucination_detected"])  # True or False

PAS2 requires API keys for both Mistral AI and OpenAI. The system issues its API calls in parallel for efficient processing.
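
One common way to parallelize the per-paraphrase calls is a thread pool, sketched here with a stubbed responder:

from concurrent.futures import ThreadPoolExecutor

def ask_llm(query: str) -> str:
    # Stand-in for the real Mistral Large call.
    return f"answer to: {query}"

queries = [
    "Who was the first person on the moon?",
    "Which astronaut first set foot on the lunar surface?",
    "Name the first human to walk on the moon.",
]
# Fan the variants out over threads so they are answered concurrently.
with ThreadPoolExecutor(max_workers=len(queries)) as pool:
    responses = list(pool.map(ask_llm, queries))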

Next steps

Ready to dive deeper? Check out the Quick start guide to run your first hallucination detection, or explore How it works to understand the technical details.
