
Introduction

Welcome to the CheckThat AI API - an advanced platform for claim normalization, fact-checking, and LLM output evaluation. Our API provides OpenAI-compatible endpoints with enhanced features for improving the accuracy and reliability of AI-generated claims.

Base URL

https://api.checkthat-ai.com
For local development:
http://localhost:8000

API Endpoints

CheckThat AI provides several categories of endpoints:

Chat Endpoints

  • POST /chat - Real-time chat interface for claim normalization
  • POST /v1/chat/completions - OpenAI-compatible chat completions with CheckThat AI enhancements

Model Endpoints

  • GET /v1/models - List all available LLM models across providers

Health Endpoints

  • GET / - Root endpoint with API information
  • GET /health - Health check endpoint

Supported LLM Providers

CheckThat AI supports multiple LLM providers:

OpenAI

GPT-5, GPT-5 nano, o3, o4-mini

Anthropic

Claude Sonnet 4, Claude Opus 4.1

Google

Gemini 2.5 Pro, Gemini 2.5 Flash

xAI

Grok 3, Grok 4, Grok 3 Mini

Together AI

Llama 3.3 70B, DeepSeek R1

Rate Limiting

To ensure fair usage and service availability, CheckThat AI implements rate limiting:

Rate Limit Headers

All responses include rate limit information:
X-RateLimit-Limit: 10
X-RateLimit-Remaining: 9
X-RateLimit-Reset: 1709567890

Rate Limit Exceeded Response

When the rate limit is exceeded, you’ll receive a 429 status code:
{
  "error": "Rate limit exceeded",
  "message": "You've exceeded the rate limit for Chat endpoints. Please wait 45 seconds before trying again.",
  "details": {
    "limit": "10 requests per 60 seconds",
    "retry_after": 45,
    "endpoint": "/chat",
    "client_ip": "192.168.1.1"
  },
  "help": "This helps us keep the service available for everyone."
}
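Client-side, the `X-RateLimit-*` headers and the `retry_after` field are enough to implement polite backoff. A minimal sketch in Python (the header names follow the examples above; `seconds_until_reset` is a hypothetical helper, not part of any SDK):

```python
import time

def seconds_until_reset(headers, now=None):
    """Return how long to wait before retrying, based on the
    X-RateLimit-* headers shown above (0 when quota remains)."""
    if int(headers.get("X-RateLimit-Remaining", 1)) > 0:
        return 0
    reset_at = int(headers.get("X-RateLimit-Reset", 0))  # Unix timestamp
    now = time.time() if now is None else now
    return max(0, reset_at - now)

# Quota exhausted; reset is 30 seconds after the (injected) current time.
wait = seconds_until_reset(
    {"X-RateLimit-Limit": "10",
     "X-RateLimit-Remaining": "0",
     "X-RateLimit-Reset": "1709567890"},
    now=1709567860,
)
# wait == 30
```

Sleeping for `wait` seconds (or for `retry_after` from a 429 body) before retrying keeps clients within the published limits.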

Response Format

All API responses follow standard formats:

Success Response

Successful requests return appropriate data based on the endpoint, typically following OpenAI’s response structure for compatibility.

Error Response

Error responses include detailed information:
{
  "error": "Error Type",
  "message": "Human-readable error message",
  "details": {
    "field": "Additional context"
  },
  "type": "error_category"
}
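For logging or surfacing failures, the envelope above can be flattened into a single line. A minimal sketch (`format_api_error` is a hypothetical helper, not part of the API):

```python
def format_api_error(payload):
    """Render the error envelope shown above as one log-friendly line."""
    details = ", ".join(f"{k}={v}" for k, v in payload.get("details", {}).items())
    line = f"{payload['error']}: {payload['message']}"
    return f"{line} ({details})" if details else line

msg = format_api_error({
    "error": "Rate limit exceeded",
    "message": "Please wait 45 seconds before trying again.",
    "details": {"retry_after": 45},
    "type": "rate_limit",
})
# msg == "Rate limit exceeded: Please wait 45 seconds before trying again. (retry_after=45)"
```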

HTTP Status Codes

  • 200 OK - Request successful
  • 400 Bad Request - Invalid request parameters or malformed JSON
  • 401 Unauthorized - Missing or invalid API key/authentication token
  • 403 Forbidden - Authentication valid but insufficient permissions
  • 422 Unprocessable Entity - Request validation failed
  • 429 Too Many Requests - Rate limit exceeded
  • 500 Internal Server Error - Unexpected server error occurred

Getting Started

1. Choose Your Authentication Method

CheckThat AI supports two authentication approaches:
  • API Key Authentication - Use your OpenAI/Anthropic/etc. API key directly
  • Bearer Token Authentication - For /v1/chat/completions endpoint
See the Authentication page for details.

2. Make Your First Request

Simple chat request:
curl -X POST https://api.checkthat-ai.com/chat \
  -H "Content-Type: application/json" \
  -d '{
    "user_query": "The Earth is flat and the moon landing was faked.",
    "model": "gpt-4o",
    "api_key": "sk-..."
  }'
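The same request in Python, using only the standard library (a sketch; actually sending it requires network access and a valid key):

```python
import json
import urllib.request

payload = {
    "user_query": "The Earth is flat and the moon landing was faked.",
    "model": "gpt-4o",
    "api_key": "sk-...",  # your provider API key
}
req = urllib.request.Request(
    "https://api.checkthat-ai.com/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# with urllib.request.urlopen(req) as resp:  # uncomment to send
#     print(json.load(resp))
```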

3. Explore Advanced Features

CheckThat AI offers enhanced features beyond standard LLM APIs:
  • Claim Refinement - Automatically improve claim quality through iterative evaluation
  • Post-Normalization Evaluation - Assess output quality with custom metrics
  • Multi-Provider Support - Seamlessly switch between LLM providers
  • Streaming Support - Real-time response streaming
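Streamed responses arrive as incremental chunks. Assuming they follow OpenAI's streaming format (each chunk carrying a `delta` with optional `content`), a client can assemble the full text like this (`collect_stream` is a hypothetical helper, shown here against simulated chunks):

```python
def collect_stream(chunks):
    """Concatenate the text deltas from a streamed chat completion."""
    parts = []
    for chunk in chunks:
        delta = chunk.get("choices", [{}])[0].get("delta", {})
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)

# Simulated stream: a role-only chunk followed by two text deltas.
fake_stream = [
    {"choices": [{"delta": {"role": "assistant"}}]},
    {"choices": [{"delta": {"content": "Normalized "}}]},
    {"choices": [{"delta": {"content": "claim."}}]},
]
text = collect_stream(fake_stream)  # "Normalized claim."
```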

CORS Configuration

The API implements endpoint-specific CORS policies:

Public API Endpoints (/v1/*)

  • Accepts requests from all origins (*)
  • Suitable for client-side applications

Restricted Endpoints (/chat)

  • Limited to specific domains:
    • https://www.checkthat-ai.com
    • https://checkthat-ai.com
    • https://nikhil-kadapala.github.io

API Versioning

The current API version is v1.0.0. Version information is included in all responses:
curl https://api.checkthat-ai.com/
{
  "message": "This is the CheckThat AI backend root API endpoint...",
  "version": "1.0.0"
}

SDKs and Libraries

CheckThat AI is compatible with OpenAI SDKs:
from openai import OpenAI

client = OpenAI(
    api_key="your-api-key",
    base_url="https://api.checkthat-ai.com/v1"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Normalize this claim..."}]
)

Next Steps

Authentication

Learn how to authenticate your API requests

Chat Completions

Create chat completions with CheckThat AI features

Health Checks

Monitor API health and availability

Batch Processing

Process multiple claims efficiently
