## Overview

The OpenAI provider enables access to OpenAI's language models, including the GPT-4 and GPT-5 families, as well as other OpenAI-compatible models. PicoClaw uses the OpenAI-compatible HTTP protocol for communication.

## Configuration

Add OpenAI models to your `model_list` configuration:
```json
{
  "model_list": [
    {
      "model_name": "gpt4",
      "model": "openai/gpt-5.2",
      "api_key": "sk-your-openai-key",
      "api_base": "https://api.openai.com/v1",
      "request_timeout": 300
    }
  ],
  "agents": {
    "defaults": {
      "model_name": "gpt4"
    }
  }
}
```
### Configuration Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `model_name` | string | Yes | - | Alias for this model configuration |
| `model` | string | Yes | - | Model identifier with `openai/` prefix |
| `api_key` | string | Yes | - | Your OpenAI API key |
| `api_base` | string | No | `https://api.openai.com/v1` | API endpoint URL |
| `request_timeout` | integer | No | `120` | Request timeout in seconds |
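The required/optional split in the table above can be illustrated with a small validation sketch. This is not PicoClaw's actual code; `normalize_entry` and the constant names are hypothetical, but the required fields and default values match the table:

```python
# Hypothetical sketch: validate a model_list entry and fill in defaults.
REQUIRED = {"model_name", "model", "api_key"}
DEFAULTS = {
    "api_base": "https://api.openai.com/v1",
    "request_timeout": 120,
}

def normalize_entry(entry: dict) -> dict:
    """Reject entries missing a required field, then fill in the
    documented defaults for any optional field the entry omits."""
    missing = REQUIRED - entry.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    return {**DEFAULTS, **entry}

entry = normalize_entry({
    "model_name": "gpt4",
    "model": "openai/gpt-5.2",
    "api_key": "sk-your-openai-key",
})
```

An entry that only supplies the three required fields comes back with `api_base` and `request_timeout` set to the defaults from the table.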
## Available Models

OpenAI provides several model families:

### GPT-5 Series

- `openai/gpt-5.2` - Latest GPT-5 model
- `openai/gpt-5` - GPT-5 base model

### GPT-4 Series

- `openai/gpt-4o` - GPT-4 Optimized
- `openai/gpt-4-turbo` - GPT-4 Turbo
- `openai/gpt-4` - GPT-4 base model

### Reasoning Models

- `openai/o1` - Reasoning-focused model
- `openai/o1-mini` - Compact reasoning model

Reasoning models (o1 series) use `max_completion_tokens` instead of `max_tokens`. PicoClaw handles this automatically.
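The parameter substitution for reasoning models can be sketched as follows. This is an illustrative sketch, not PicoClaw's implementation; `build_token_params` is a hypothetical name:

```python
def build_token_params(model: str, max_tokens: int) -> dict:
    """Map the generic max_tokens setting onto the parameter name the
    API expects: reasoning models (o1 series) reject max_tokens and
    require max_completion_tokens instead."""
    model_id = model.removeprefix("openai/")
    if model_id.startswith("o1"):
        return {"max_completion_tokens": max_tokens}
    return {"max_tokens": max_tokens}

params = build_token_params("openai/o1-mini", 4096)
# -> {"max_completion_tokens": 4096}
```

The same call with `openai/gpt-5.2` would return `{"max_tokens": 4096}`, so callers never need to know which parameter name a given model expects.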
## Setup Instructions

### 1. Get API Key

- Visit the OpenAI Platform
- Sign in or create an account
- Navigate to the API Keys section
- Click **Create new secret key**
- Copy your API key (it starts with `sk-`)
### 2. Configure the Model

Edit `~/.picoclaw/config.json`:
```json
{
  "model_list": [
    {
      "model_name": "gpt4",
      "model": "openai/gpt-5.2",
      "api_key": "sk-your-actual-key-here"
    }
  ],
  "agents": {
    "defaults": {
      "model_name": "gpt4",
      "max_tokens": 8192,
      "temperature": 0.7
    }
  }
}
```
### 3. Test Connection

```bash
picoclaw agent -m "Hello, test my OpenAI connection"
```
## Advanced Configuration

### Custom API Endpoint

Use a custom OpenAI-compatible endpoint:
```json
{
  "model_name": "custom-gpt",
  "model": "openai/custom-model",
  "api_base": "https://my-proxy.com/v1",
  "api_key": "sk-...",
  "request_timeout": 300
}
```
### Load Balancing

Configure multiple endpoints for automatic load balancing:
```json
{
  "model_list": [
    {
      "model_name": "gpt4",
      "model": "openai/gpt-5.2",
      "api_key": "sk-key1",
      "api_base": "https://api1.example.com/v1"
    },
    {
      "model_name": "gpt4",
      "model": "openai/gpt-5.2",
      "api_key": "sk-key2",
      "api_base": "https://api2.example.com/v1"
    }
  ]
}
```
PicoClaw automatically round-robins between endpoints that share the same `model_name`.
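Round-robin selection over entries sharing a `model_name` can be sketched like this. The function name is hypothetical and this is not PicoClaw's actual scheduler, just a minimal illustration of the technique:

```python
import itertools

def make_endpoint_picker(model_list, model_name):
    """Return a callable that cycles round-robin through every
    model_list entry whose model_name matches."""
    candidates = [m for m in model_list if m["model_name"] == model_name]
    if not candidates:
        raise ValueError(f"no endpoints configured for {model_name!r}")
    cycle = itertools.cycle(candidates)
    return lambda: next(cycle)

model_list = [
    {"model_name": "gpt4", "api_base": "https://api1.example.com/v1"},
    {"model_name": "gpt4", "api_base": "https://api2.example.com/v1"},
]
pick = make_endpoint_picker(model_list, "gpt4")
# Successive calls alternate: api1, api2, api1, ...
```

Because `itertools.cycle` keeps its own position, each request simply calls `pick()` and the load spreads evenly across the configured endpoints.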
### Web Search (GPT Models)

Enable web search capabilities for OpenAI models:
```json
{
  "providers": {
    "openai": {
      "api_key": "sk-...",
      "web_search": true
    }
  }
}
```
### Prompt Caching

OpenAI supports prompt caching to reduce costs and latency. PicoClaw enables this automatically with a stable cache key per agent:
```json
{
  "model_name": "gpt4",
  "model": "openai/gpt-5.2",
  "api_key": "sk-..."
}
```
No additional configuration is needed - PicoClaw passes `prompt_cache_key` automatically.
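One way to derive a stable per-agent cache key (the property the text describes) is to hash the agent's identifier. This is an illustrative sketch, not PicoClaw's actual derivation; the function name is hypothetical:

```python
import hashlib

def prompt_cache_key(agent_id: str) -> str:
    """Derive a stable, opaque cache key from an agent identifier.
    The same agent always yields the same key, so repeated requests
    from that agent can hit the provider's prompt cache."""
    return hashlib.sha256(agent_id.encode("utf-8")).hexdigest()[:32]

key = prompt_cache_key("main-agent")
```

Stability is the point: as long as the agent id doesn't change, every request carries the same key, which is what lets cached prefixes be reused across calls.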
## Authentication Methods

### API Key (Recommended)

Standard API key authentication:
```json
{
  "model_list": [
    {
      "model_name": "gpt4",
      "model": "openai/gpt-5.2",
      "api_key": "sk-your-key"
    }
  ]
}
```
### OAuth / Token (Advanced)

For OAuth-based authentication:
```json
{
  "providers": {
    "openai": {
      "auth_method": "oauth"
    }
  }
}
```
Then authenticate:

```bash
picoclaw auth login --provider openai
```
## Troubleshooting

### Rate Limiting

If you encounter rate limits:
- Upgrade your OpenAI plan
- Configure load balancing across multiple API keys
- Add retry logic with backoff
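The "retry logic with backoff" suggestion can be sketched in a few lines. This is illustrative Python, not part of PicoClaw; `RateLimitError` and `with_backoff` are hypothetical names standing in for whatever your client raises on HTTP 429:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the exception your client raises on HTTP 429."""

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` on rate-limit errors, doubling the delay each
    attempt and adding jitter so parallel clients don't retry in
    lockstep. Re-raises after the final attempt fails."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.5))
```

Wrapping the API call as `with_backoff(lambda: client.send(request))` turns transient 429s into short waits instead of hard failures.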
### Invalid API Key

If you see `401 Unauthorized` errors:

- Verify your API key is correct
- Check that the key hasn't been revoked
- Ensure your account has sufficient credits
### Timeout Errors

Increase the timeout for long-running requests:
```json
{
  "model_name": "gpt4",
  "model": "openai/gpt-5.2",
  "api_key": "sk-...",
  "request_timeout": 600
}
```
## Model Selection Guide

| Use Case | Recommended Model | Notes |
|---|---|---|
| General tasks | `gpt-5.2` | Best balance of speed and quality |
| Complex reasoning | `o1` | Specialized for step-by-step thinking |
| Fast responses | `gpt-4o` | Optimized for speed |
| Cost-sensitive | `gpt-4-turbo` | Good performance, lower cost |
## Cost Optimization

- **Use appropriate models**: don't use `o1` for simple tasks
- **Set `max_tokens`**: limit response length to reduce costs
- **Rely on caching**: prompt caching is enabled automatically for system prompts
- **Monitor usage**: check the OpenAI dashboard regularly