Get started in 5 minutes
This quickstart guide helps you get CheckThat AI running locally as quickly as possible. You'll set up both the frontend and backend, configure your API keys, and make your first claim normalization request. This guide uses the automated setup scripts; for manual installation or troubleshooting, see the Installation guide.
Prerequisites
Before you begin, ensure you have the following installed:
- Node.js v18 or higher (download)
- Python v3.8 or higher (download)
- Git for cloning the repository (download)
- OpenAI API key (for GPT models)
- Anthropic API key (for Claude models)
- Google AI API key (for Gemini models)
- xAI API key (for Grok models)
Installation steps
Configure API keys
Set your API keys as environment variables, using the commands for your operating system (Linux/macOS uses `export`; Windows PowerShell uses `$env:`).
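For example, on Linux/macOS the exports might look like the sketch below. The variable names here are assumptions, not confirmed configuration; check the Installation guide for the names the backend actually reads.

```shell
# Linux/macOS (bash/zsh). Variable names are assumptions -- confirm them
# in the Installation guide before relying on them.
export OPENAI_API_KEY="sk-your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
export GOOGLE_API_KEY="your-google-ai-key"
export XAI_API_KEY="your-xai-key"

# Windows (PowerShell) equivalents use $env:, e.g.:
#   $env:OPENAI_API_KEY = "sk-your-openai-key"
```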
You only need to set API keys for the providers you plan to use. The free Llama 3.3 70B model from Together AI works without an API key.
Run the setup script
Execute the automated setup script from the repository root (for example ./setup.sh - check the repo for the exact name); this only needs to be done once. The script will:
- Detect your operating system
- Terminate any conflicting processes on port 5173
- Install Node.js dependencies for the frontend
- Fix npm vulnerabilities automatically
- Create a Python virtual environment
- Install Python dependencies
- Handle cross-platform compatibility
Start the application
Launch both the frontend and backend servers with the run script (run-project.sh). The script will start:
- Frontend: http://localhost:5173
- Backend API: http://localhost:8000
Make your first API call
Now that CheckThat AI is running, let's normalize a claim using the API.
Using the web interface
The easiest way to get started is through the web interface:
- Open http://localhost:5173 in your browser
- Select a model from the dropdown (try "Llama 3.3 70B" - it's free!)
- Enter a claim in the chat input, for example:
- Press Enter and watch the normalized claim stream in real-time
Using curl
You can also call the API directly with curl by sending a POST request to the backend at http://localhost:8000 (see the API Reference for the exact endpoint and payload). Replace your-api-key-here with your actual API key for the chosen model provider; for Llama 3.3 70B via Together AI, no API key is required in local development.
Using Python
You can also use Python with the requests library:
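A minimal sketch with requests, assuming a hypothetical POST /api/normalize endpoint and a normalized_claim response field - neither is confirmed by this page, so check the API Reference for the real schema:

```python
from typing import Optional

import requests  # third-party: pip install requests

# Hypothetical endpoint -- check the API Reference for the real one.
API_URL = "http://localhost:8000/api/normalize"


def build_payload(claim: str, model: str = "llama-3.3-70b") -> dict:
    """Build the request body (field names are assumptions)."""
    return {"claim": claim, "model": model}


def normalize_claim(claim: str, model: str = "llama-3.3-70b",
                    api_key: Optional[str] = None) -> str:
    """POST a claim to the local backend and return the normalized text."""
    headers = {}
    if api_key:  # not needed for the free Llama 3.3 70B model
        headers["Authorization"] = f"Bearer {api_key}"
    resp = requests.post(API_URL, json=build_payload(claim, model),
                         headers=headers, timeout=60)
    resp.raise_for_status()
    return resp.json()["normalized_claim"]  # response field name is an assumption


# Usage (requires the backend to be running):
#   print(normalize_claim("BREAKING!!! Scientists say coffee CURES cancer!!!"))
```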
Expected output
When you submit the example claim, you should receive a normalized version of it: a concise, self-contained, declarative restatement of the original statement.
Try different models
CheckThat AI supports multiple AI models. Here's how to use different providers:
OpenAI GPT models
Anthropic Claude
Google Gemini
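Switching providers is a matter of changing the model field in the request body. The identifiers and endpoint below are illustrative assumptions, not confirmed names; see the Model Guide (or the web interface's dropdown) for the exact values.

```shell
# Model identifiers are examples -- confirm the exact names in the
# model dropdown or the Model Guide.
MODEL="gpt-4o"                         # OpenAI GPT
# MODEL="claude-3-5-sonnet-20241022"   # Anthropic Claude
# MODEL="gemini-1.5-pro"               # Google Gemini

PAYLOAD="{\"model\": \"$MODEL\", \"claim\": \"Coffee cures cancer, scientists say!\"}"

# POST it to the backend (hypothetical endpoint -- see the API Reference):
#   curl -X POST http://localhost:8000/api/normalize \
#     -H "Content-Type: application/json" -d "$PAYLOAD"
echo "$PAYLOAD"
```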
Streaming responses
For real-time claim normalization with streaming, set "stream": true:
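A sketch of a streaming request, assuming the same hypothetical endpoint as above; the curl line is commented out because it needs the backend running, and -N disables curl's output buffering so streamed tokens print as they arrive:

```shell
# "stream": true asks the backend to stream the response token by token.
PAYLOAD='{"model": "llama-3.3-70b", "claim": "Coffee cures cancer!", "stream": true}'

# Hypothetical endpoint -- see the API Reference for the real one:
#   curl -N -X POST http://localhost:8000/api/normalize \
#     -H "Content-Type: application/json" -d "$PAYLOAD"
echo "$PAYLOAD"
```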
Batch evaluation
For processing multiple claims at once, use the web interface's batch evaluation feature:
- Navigate to the "Batch Evaluation" tab at http://localhost:5173
- Upload a CSV file with claims in the format:
- Select your evaluation strategy (Zero-shot, Few-shot, Chain-of-Thought, etc.)
- Choose one or more models
- Click “Start Evaluation”
- Watch real-time progress via WebSocket updates
- Download results with METEOR scores
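The exact CSV format was not shown on this page; a plausible minimal file, assuming a single claim column (check the Batch Evaluation tab for the headers it actually expects):

```csv
claim
"BREAKING!!! Scientists say coffee CURES cancer - share before it's deleted!"
"5G towers caused the outbreak, my neighbor confirmed it"
```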
Next steps
Now that you have CheckThat AI running, explore these resources:
- API Reference: Explore all available endpoints and parameters
- Installation Guide: Learn about manual installation and advanced configuration
- Evaluation Methods: Understand different normalization strategies
- Model Guide: Compare available AI models and choose the best for your use case
Troubleshooting
If you encounter issues during quickstart:
Port already in use
The setup script automatically terminates processes on port 5173, but if you still have conflicts, free the port manually (on Linux/macOS, for example, `lsof -ti:5173 | xargs kill`) before re-running the script.
API key not working
- Verify your API key is correct and has sufficient quota
- Ensure environment variables are set before running scripts
- Check for typos in variable names (they’re case-sensitive)
Module not found errors
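A common cause is running the backend outside its virtual environment or with stale dependencies. A hedged fix, assuming the default layout the setup script creates (a venv directory for the backend; adjust paths if your checkout differs):

```shell
# Reinstall dependencies inside the project. Paths are assumptions based on
# the typical layout created by the setup script -- adjust if yours differs.
source venv/bin/activate          # or venv\Scripts\activate on Windows
pip install -r requirements.txt   # backend (Python) dependencies
npm install                       # frontend (Node.js) dependencies
```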
Stop the application
To stop both servers, press Ctrl+C in the terminal where run-project.sh is running. The script will gracefully shut down both the frontend and backend.