Overview
The `openai()` function creates a `ModelProvider` instance configured to use OpenAI's Chat Completions API. It supports all GPT models, including GPT-4, GPT-4 Turbo, GPT-4o, and the GPT-3.5 family.
Usage
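A minimal sketch of typical usage, assuming the package exports an `openai` factory function (the import path and the `apiKey` option name are assumptions, not confirmed by this document):

```typescript
// Hypothetical import path -- substitute your actual package name.
import { openai } from "your-package-name";

// `apiKey` is an assumed option name; see Configuration below.
const provider = openai({
  apiKey: process.env.OPENAI_API_KEY,
});

const response = await provider.complete({
  messages: [{ role: "user", content: "Hello!" }],
});
```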
Configuration
Your OpenAI API key. Get one from platform.openai.com/api-keys
The OpenAI model to use. Common options:
- `gpt-4o` - Latest GPT-4 Optimized model
- `gpt-4o-mini` - Smaller, faster GPT-4 variant
- `gpt-4-turbo` - GPT-4 Turbo with 128k context
- `gpt-3.5-turbo` - Faster, more economical option
- `o1` - Reasoning model (note: system prompts converted to user messages)
- `o3` - Advanced reasoning model
Custom API endpoint URL. Use this for:
- Azure OpenAI deployments
- OpenAI-compatible APIs
- Proxy services
Example: `https://your-resource.openai.azure.com`

OpenAI organization ID for usage tracking and billing isolation
Controls randomness in responses. Range: 0.0 to 2.0
- Lower values (e.g., 0.2) = more focused, deterministic
- Higher values (e.g., 1.5) = more creative, varied
Maximum tokens in the response. Note:
- Total context = prompt + completion tokens
- Model limits vary (e.g., gpt-4o has 128k context)
- Setting too low may truncate responses
Examples
Basic Setup
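A minimal setup sketch, assuming a `your-package-name` import and an `apiKey` option (both hypothetical names):

```typescript
import { openai } from "your-package-name"; // hypothetical import path

// Reads the API key from the environment rather than hard-coding it.
const provider = openai({
  apiKey: process.env.OPENAI_API_KEY,
});
```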
Custom Model and Temperature
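A sketch combining the model and temperature options described in Configuration (option names assumed):

```typescript
import { openai } from "your-package-name"; // hypothetical import path

const provider = openai({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-4-turbo",  // 128k context, per the model list above
  temperature: 0.2,      // low value -> more focused, deterministic output
});
```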
Azure OpenAI
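A sketch of pointing the provider at an Azure OpenAI deployment via the custom endpoint option (the `baseURL` option name is an assumption; the placeholder URL comes from the Configuration notes above):

```typescript
import { openai } from "your-package-name"; // hypothetical import path

const provider = openai({
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  baseURL: "https://your-resource.openai.azure.com", // your Azure resource endpoint
  model: "gpt-4o",
});
```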
Organization Scoping
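A sketch of scoping requests to an organization for usage tracking and billing isolation (the `organization` option name is assumed):

```typescript
import { openai } from "your-package-name"; // hypothetical import path

const provider = openai({
  apiKey: process.env.OPENAI_API_KEY,
  organization: "org-xxxxxxxx", // placeholder -- your OpenAI organization ID
});
```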
Token Limit Control
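A sketch of capping completion length (the `maxTokens` option name is assumed). Recall from Configuration that total context = prompt + completion tokens, and setting this too low may truncate responses:

```typescript
import { openai } from "your-package-name"; // hypothetical import path

const provider = openai({
  apiKey: process.env.OPENAI_API_KEY,
  maxTokens: 1024, // upper bound on completion tokens per response
});
```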
Special Model Handling
Reasoning Models (o1, o3)
When using OpenAI’s reasoning models (o1-* or o3-*), the provider automatically:
- Converts `system` role messages to `user` role (required by these models)
- Extracts reasoning content from responses when available
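The role conversion can be illustrated with a small self-contained sketch. This is an illustration of the behavior described above, not the provider's actual internals, and the helper name is hypothetical:

```typescript
type Role = "system" | "user" | "assistant";

interface Message {
  role: Role;
  content: string;
}

// Hypothetical helper: reasoning models reject `system` messages,
// so rewrite them as `user` messages before sending the request.
function convertForReasoningModel(messages: Message[]): Message[] {
  return messages.map((m) =>
    m.role === "system" ? { ...m, role: "user" as Role } : m
  );
}
```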
Tool Calling
The provider automatically handles tool/function calling.

Return Value
Returns an `OpenAIProvider` instance that implements the `ModelProvider` interface with:
- `name: 'openai'`
- `complete(request)` - Send completion requests
- `stream(request)` - Stream completion responses
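A sketch of exercising the returned interface (the request and stream-chunk shapes are assumptions, since this document does not define them):

```typescript
import { openai } from "your-package-name"; // hypothetical import path

const provider = openai({ apiKey: process.env.OPENAI_API_KEY });

console.log(provider.name); // 'openai'

// One-shot completion.
const response = await provider.complete({
  messages: [{ role: "user", content: "Hello!" }],
});

// Streaming completion; chunk shape is assumed.
for await (const chunk of provider.stream({
  messages: [{ role: "user", content: "Hello!" }],
})) {
  process.stdout.write(chunk.text ?? "");
}
```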