
Get started in 5 minutes

This quickstart guide will help you get CheckThat AI running locally as quickly as possible. You’ll set up both the frontend and backend, configure your API keys, and make your first claim normalization request.
This guide uses the automated setup scripts. For manual installation or troubleshooting, see the Installation guide.

Prerequisites

Before you begin, ensure you have Git, Node.js (with npm), and Python installed — the setup script uses all three. You'll also need an API key for at least one AI provider.
Keep your API keys secure and never commit them to version control. The setup scripts use environment variables to protect your credentials.

Installation steps

Step 1: Clone the repository

Clone the CheckThat AI repository to your local machine:
git clone https://github.com/nikhil-kadapala/clef2025-checkthat-lab-task2.git
cd clef2025-checkthat-lab-task2
Step 2: Configure API keys

Set your API keys as environment variables. Choose the appropriate commands for your operating system:

Linux/macOS:
export OPENAI_API_KEY="your-openai-key-here"
export ANTHROPIC_API_KEY="your-anthropic-key-here"
export GEMINI_API_KEY="your-gemini-key-here"
export GROK_API_KEY="your-grok-key-here"
Windows (PowerShell):
$env:OPENAI_API_KEY="your-openai-key-here"
$env:ANTHROPIC_API_KEY="your-anthropic-key-here"
$env:GEMINI_API_KEY="your-gemini-key-here"
$env:GROK_API_KEY="your-grok-key-here"
You only need to set API keys for the providers you plan to use. The free Llama 3.3 70B model from Together AI works without an API key.
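Before moving on, you can confirm which keys are actually visible to new processes. This is a small optional sketch, not part of the project's tooling; the variable names match the exports above:

```python
import os

PROVIDER_KEYS = ["OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GEMINI_API_KEY", "GROK_API_KEY"]

def check_keys(keys):
    """Map each environment variable name to True if it is set and non-empty."""
    return {k: bool(os.environ.get(k)) for k in keys}

for name, is_set in check_keys(PROVIDER_KEYS).items():
    print(f"{name}: {'set' if is_set else 'not set'}")
```

Run this in the same terminal session where you exported the keys; a fresh terminal will not inherit them unless you add the exports to your shell profile.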
Step 3: Run the setup script

Execute the automated setup script. This only needs to be done once:
chmod +x setup-project.sh  # Make script executable (Linux/macOS only)
./setup-project.sh
The script will:
  • Detect your operating system
  • Terminate any conflicting processes on port 5173
  • Install Node.js dependencies for the frontend
  • Fix npm vulnerabilities automatically
  • Create a Python virtual environment
  • Install Python dependencies
  • Handle cross-platform compatibility
This process typically takes 2-3 minutes depending on your internet connection.
Step 4: Start the application

Launch both the frontend and backend servers:
./run-project.sh
The script will start:
  • Frontend: http://localhost:5173
  • Backend API: http://localhost:8000
You’ll see output indicating both servers are running:
Frontend running on http://localhost:5173
Backend running on http://localhost:8000
Press Ctrl+C to stop...
Step 5: Open the web interface

Open your browser and navigate to:
http://localhost:5173
You should see the CheckThat AI web interface, with a chat panel ready to normalize claims.

Make your first API call

Now that CheckThat AI is running, let’s normalize a claim using the API.

Using the web interface

The easiest way to get started is through the web interface:
  1. Open http://localhost:5173 in your browser
  2. Select a model from the dropdown (try “Llama 3.3 70B” - it’s free!)
  3. Enter a claim in the chat input, for example:
    The government is hiding alien technology!
    
  4. Press Enter and watch the normalized claim stream in real-time

Using curl

You can also call the API directly using curl:
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key-here" \
  -d '{
    "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo-Free",
    "messages": [
      {
        "role": "system",
        "content": "You are a claim normalization assistant. Transform informal claims into clear, verifiable statements."
      },
      {
        "role": "user",
        "content": "The government is hiding alien technology!"
      }
    ],
    "stream": false
  }'
Replace your-api-key-here with your actual API key for the chosen model provider. For Llama 3.3 70B via Together AI, no API key is required in local development.

Using Python

You can also use Python with the requests library:
import requests
import json

url = "http://localhost:8000/v1/chat/completions"

headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer your-api-key-here"
}

payload = {
    "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo-Free",
    "messages": [
        {
            "role": "system",
            "content": "You are a claim normalization assistant."
        },
        {
            "role": "user",
            "content": "The government is hiding alien technology!"
        }
    ],
    "stream": False
}

response = requests.post(url, headers=headers, json=payload)
result = response.json()

print("Normalized claim:")
print(result["choices"][0]["message"]["content"])

Expected output

When you submit the example claim, you should receive a normalized response similar to:
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1234567890,
  "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo-Free",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Allegations have been made that government entities are concealing information about extraterrestrial technology."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 45,
    "completion_tokens": 23,
    "total_tokens": 68
  }
}
Notice how the informal claim has been transformed into a neutral, verifiable statement suitable for fact-checking.
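Because the response follows the familiar OpenAI chat-completion shape, pulling fields out of it is a matter of indexing into the parsed JSON. Here is a minimal sketch using an abbreviated copy of the example payload above:

```python
import json

# Abbreviated copy of the example response shown above, as a JSON string
response_text = """
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo-Free",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Allegations have been made that government entities are concealing information about extraterrestrial technology."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 45, "completion_tokens": 23, "total_tokens": 68}
}
"""

result = json.loads(response_text)
normalized = result["choices"][0]["message"]["content"]  # the normalized claim
total_tokens = result["usage"]["total_tokens"]           # 68

print(normalized)
print(total_tokens)
```

The same indexing works on the `result` dict in the Python example above, since `response.json()` returns the identical structure.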

Try different models

CheckThat AI supports multiple AI models. Here’s how to use different providers:

OpenAI GPT models

curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-5-2025-08-07",
    "messages": [{"role": "user", "content": "COVID vaccines contain microchips"}]
  }'

Anthropic Claude

curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $ANTHROPIC_API_KEY" \
  -d '{
    "model": "claude-opus-4-1-20250805",
    "messages": [{"role": "user", "content": "Climate change is a hoax"}]
  }'

Google Gemini

curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $GEMINI_API_KEY" \
  -d '{
    "model": "gemini-2.5-pro",
    "messages": [{"role": "user", "content": "5G towers cause coronavirus"}]
  }'

Streaming responses

For real-time claim normalization with streaming, set "stream": true:
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key-here" \
  -d '{
    "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo-Free",
    "messages": [{"role": "user", "content": "The moon landing was fake"}],
    "stream": true
  }'
The response will stream chunks in Server-Sent Events (SSE) format:
data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"delta":{"content":"Claims"}}]}

data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"delta":{"content":" have"}}]}

data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"delta":{"content":" been"}}]}

data: [DONE]
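To consume the stream programmatically, you can parse each SSE data line and concatenate the content deltas. The sketch below assumes you already have an iterator of decoded lines from your HTTP client (for example, `requests` with `stream=True` and `iter_lines()`); the chunk shape matches the data lines above:

```python
import json

def parse_sse_line(line: str):
    """Return the text delta carried by one SSE data line, or None.

    Skips non-data lines, the [DONE] sentinel, and chunks without content.
    """
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):]
    if payload == "[DONE]":
        return None
    chunk = json.loads(payload)
    return chunk["choices"][0]["delta"].get("content")

# Reassembling the example stream shown above:
stream = [
    'data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"delta":{"content":"Claims"}}]}',
    'data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"delta":{"content":" have"}}]}',
    'data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"delta":{"content":" been"}}]}',
    "data: [DONE]",
]
text = "".join(t for t in (parse_sse_line(l) for l in stream) if t)
print(text)  # Claims have been
```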

Batch evaluation

For processing multiple claims at once, use the web interface’s batch evaluation feature:
  1. Navigate to the “Batch Evaluation” tab at http://localhost:5173
  2. Upload a CSV file with claims in the format:
    claim,reference
    "Claim text 1","Reference normalization 1"
    "Claim text 2","Reference normalization 2"
    
  3. Select your evaluation strategy (Zero-shot, Few-shot, Chain-of-Thought, etc.)
  4. Choose one or more models
  5. Click “Start Evaluation”
  6. Watch real-time progress via WebSocket updates
  7. Download results with METEOR scores
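If you would rather generate the upload file from a script than by hand, the standard library `csv` module produces the documented `claim,reference` layout. This sketch uses made-up reference normalizations purely as placeholders:

```python
import csv
import os
import tempfile

# Illustrative rows only — supply your own claims and reference normalizations
rows = [
    ("The government is hiding alien technology!",
     "Allegations have been made that government entities are concealing information about extraterrestrial technology."),
    ("The moon landing was fake",
     "Claims have been made that the 1969 Apollo moon landing was staged."),
]

path = os.path.join(tempfile.gettempdir(), "claims.csv")
with open(path, "w", newline="", encoding="utf-8") as f:
    f.write("claim,reference\n")                    # header row
    writer = csv.writer(f, quoting=csv.QUOTE_ALL)   # quote values, matching the example format
    writer.writerows(rows)

print(f"Wrote {len(rows)} claims to {path}")
```

Upload the resulting file through the "Batch Evaluation" tab as described above.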

Next steps

Now that you have CheckThat AI running, explore these resources:

API Reference

Explore all available endpoints and parameters

Installation Guide

Learn about manual installation and advanced configuration

Evaluation Methods

Understand different normalization strategies

Model Guide

Compare available AI models and choose the best for your use case

Troubleshooting

If you encounter issues during quickstart:

Port already in use

The setup script automatically terminates processes on port 5173, but if you still have conflicts:
# Linux/macOS
lsof -ti:5173 | xargs kill -9

# Windows
netstat -ano | findstr :5173
taskkill /PID <PID> /F

API key not working

  • Verify your API key is correct and has sufficient quota
  • Ensure environment variables are set before running scripts
  • Check for typos in variable names (they’re case-sensitive)

Module not found errors

# Reinstall dependencies
cd src/app
npm install

# Reactivate virtual environment and reinstall Python packages
source .venv/bin/activate  # Linux/macOS
# .venv\Scripts\activate  # Windows
pip install -r requirements.txt
For more detailed troubleshooting, see the Installation guide.

Stop the application

To stop both servers, press Ctrl+C in the terminal where run-project.sh is running. The script will gracefully shut down both the frontend and backend.
