Overview
CheckThat supports OpenAI’s latest language models, including GPT-5, o3, and o4-mini. These models provide state-of-the-art natural language understanding and generation capabilities, with support for structured outputs, streaming responses, and conversation history.
Available Models
The following OpenAI models are available through CheckThat:
- GPT-5 - OpenAI’s flagship model with advanced reasoning capabilities
- GPT-5 nano - Lightweight version optimized for speed and efficiency
- o3 - Advanced reasoning model optimized for complex problem-solving
- o4-mini - Compact reasoning model balancing performance and cost
Configuration
API Key Setup
- API key - Your OpenAI API key. Get your key from the OpenAI Platform.
- Model - The model identifier from the available models list above.
Request Parameters
OpenAI models support all standard OpenAI API parameters:
- messages - Array of message objects with role and content fields.
- temperature - Controls randomness in responses. Range: 0.0 to 2.0.
- max_tokens - Maximum number of tokens to generate in the response.
- stream - Enable streaming responses for real-time output.
- response_format - Structured output format specification (JSON schema).
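Putting these together, a request body exercising the parameters above might look like the following sketch (the model identifier comes from the supported-models list; the surrounding CheckThat call is omitted):

```python
# Illustrative request body; the exact shape CheckThat expects may differ.
request_body = {
    "model": "gpt-5-2025-08-07",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain TCP slow start in one paragraph."},
    ],
    "temperature": 0.7,   # 0.0 = most deterministic, 2.0 = most random
    "max_tokens": 300,    # cap on generated length
    "stream": False,      # True enables incremental output
}
```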
Usage Examples
Basic Chat Completion
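Assuming CheckThat exposes an OpenAI-compatible chat-completions endpoint (the URL and key below are placeholders), a basic request can be made with the standard library:

```python
import json
import urllib.request

# Placeholder values: substitute your real CheckThat endpoint and API key.
CHECKTHAT_URL = "https://checkthat.example/v1/chat/completions"
API_KEY = "your-api-key"

payload = {
    "model": "gpt-5-2025-08-07",
    "messages": [
        {"role": "user", "content": "Summarize the water cycle in two sentences."}
    ],
    "max_tokens": 200,
}

def chat_completion(body: dict) -> dict:
    """POST a chat-completion request and return the parsed JSON response."""
    req = urllib.request.Request(
        CHECKTHAT_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# result = chat_completion(payload)
# print(result["choices"][0]["message"]["content"])
```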
Streaming Response
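With stream set to true, an OpenAI-compatible endpoint returns server-sent events whose `data:` lines carry chat-completion chunk deltas. A sketch of the client-side parsing (the transport is the same as in the basic example and omitted here):

```python
import json

payload = {
    "model": "gpt-5-nano-2025-08-07",
    "messages": [{"role": "user", "content": "Write a haiku about rivers."}],
    "stream": True,  # request incremental chunks instead of one final message
}

def parse_sse_line(line: str):
    """Return the text delta from one 'data: {...}' SSE line, or None."""
    if not line.startswith("data: "):
        return None          # blank lines, keep-alives, event names
    data = line[len("data: "):]
    if data == "[DONE]":
        return None          # end-of-stream sentinel
    chunk = json.loads(data)
    return chunk["choices"][0]["delta"].get("content")

# Given a response object yielding SSE lines, print tokens as they arrive:
# for raw in resp:
#     piece = parse_sse_line(raw.decode("utf-8").strip())
#     if piece:
#         print(piece, end="", flush=True)
```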
Structured Output
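Structured outputs use OpenAI’s json_schema response format. A sketch of a request body that forces the reply into a fixed object shape (the schema name and fields are illustrative):

```python
payload = {
    "model": "o4-mini-2025-04-16",
    "messages": [
        {"role": "user",
         "content": "Extract the city and country from: 'I flew to Lyon, France.'"}
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "location",   # illustrative schema name
            "strict": True,       # reject outputs that deviate from the schema
            "schema": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "country": {"type": "string"},
                },
                "required": ["city", "country"],
                "additionalProperties": False,
            },
        },
    },
}

# The reply is then guaranteed to parse as {"city": ..., "country": ...}.
```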
With Conversation History
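Multi-turn context is simply the messages array grown over time; each request resends the prior turns. For example:

```python
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "What is the capital of Australia?"},
    {"role": "assistant", "content": "Canberra."},
    # This follow-up only makes sense because the earlier turns are included:
    {"role": "user", "content": "Roughly how many people live there?"},
]

payload = {"model": "gpt-5-2025-08-07", "messages": messages}
```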
Features and Capabilities
Structured Outputs
All OpenAI models in CheckThat support structured outputs using JSON schema. This ensures responses match your specified format exactly.
Supported Models:
- gpt-5-2025-08-07
- gpt-5-nano-2025-08-07
- o3-2025-04-16
- o4-mini-2025-04-16
Conversation History
Maintain context across multiple turns by including previous messages in your request. The API automatically formats conversation history for optimal model performance.
Streaming
Get real-time responses as they’re generated using streaming mode. Perfect for chat applications and long-form content generation.
Implementation Details
CheckThat’s OpenAI integration (openai.py:18-110) provides:
- Direct parameter pass-through: Send any OpenAI-compatible parameters
- Response format support: Full JSON schema and structured output support
- Streaming: Real-time response generation with Stream[ChatCompletionChunk]
- Legacy methods: Backward-compatible prompt-based methods
Rate Limits and Pricing
Rate limits and pricing are determined by your OpenAI API key tier. CheckThat does not impose additional rate limits on OpenAI models. Refer to OpenAI’s pricing page for current rates:
- GPT-5: Premium tier pricing
- GPT-5 nano: Optimized pricing for high-volume use
- o3/o4-mini: Reasoning model pricing
Error Handling
The OpenAI integration includes comprehensive error handling:
- 401: Invalid API key
- 429: Rate limit exceeded
- 500: OpenAI service error
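A client can branch on these codes; a minimal sketch (the actual exception types raised by openai.py are not shown in this document):

```python
def describe_error(status: int) -> str:
    """Map the HTTP status codes above to a short diagnosis."""
    if status == 401:
        return "invalid API key: check your OpenAI credentials"
    if status == 429:
        return "rate limit exceeded: back off and retry"
    if status >= 500:
        return "OpenAI service error: retry later"
    return f"unexpected status {status}"
```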
Best Practices
- Use appropriate models: Choose GPT-5 nano for speed, GPT-5 for quality, o-series for reasoning
- Set max_tokens: Prevent runaway costs by limiting response length
- Implement retries: Handle transient failures with exponential backoff
- Stream for UX: Use streaming for better user experience in chat applications
- Cache responses: Reduce API calls by caching common queries
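The retry advice above can be sketched as a small wrapper with exponential backoff and jitter (the retryable status set and delays are illustrative choices, not CheckThat defaults):

```python
import random
import time
import urllib.error

RETRYABLE = {429, 500, 502, 503}  # rate limits and transient server errors

def with_retries(call, max_attempts=5, base_delay=1.0):
    """Run `call`, retrying retryable HTTP errors with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return call()
        except urllib.error.HTTPError as err:
            if err.code not in RETRYABLE or attempt == max_attempts - 1:
                raise
            # Delays of 1s, 2s, 4s, ... plus jitter to avoid synchronized retries.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))
```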