OneClaw's LLM providers are configured in the `[provider]` configuration section.
## Supported Providers
| Provider | ID | Default Model | API Key Env Var |
|---|---|---|---|
| Anthropic Claude | anthropic | claude-sonnet-4-20250514 | ANTHROPIC_API_KEY |
| OpenAI GPT | openai | gpt-4o | OPENAI_API_KEY |
| DeepSeek | deepseek | deepseek-chat | DEEPSEEK_API_KEY |
| Groq | groq | llama-3.3-70b-versatile | GROQ_API_KEY |
| Google Gemini | google or gemini | gemini-2.0-flash | GOOGLE_API_KEY |
| Ollama (local) | ollama | llama3.2:1b | (none) |
## Basic Configuration
Minimal provider configuration:
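A minimal sketch using only fields documented on this page (`primary`, `model`, `api_key`); the exact layout and the placeholder key are illustrative:

```toml
[provider]
primary = "anthropic"                  # provider ID from the table above
model   = "claude-sonnet-4-20250514"   # optional; each provider has a default
api_key = "sk-ant-..."                 # or set ANTHROPIC_API_KEY instead
```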
### Provider Selection

Set the `primary` field to choose your LLM provider:
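Switching providers should only require changing the ID (a sketch; the value shown is illustrative):

```toml
[provider]
primary = "groq"   # any ID from the table above: anthropic, openai, deepseek, groq, google, ollama
```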
## Model Configuration
### Available Models by Provider
**Anthropic Claude:**
- `claude-sonnet-4-20250514` - Best balance (default)
- `claude-haiku-4-5-20251001` - Fast, cheap, good for classification
- `claude-opus-4-5-20250918` - Max quality, expensive

**OpenAI GPT:**
- `gpt-4o` - Latest GPT-4 (default)
- `gpt-4o-mini` - Faster, cheaper

**DeepSeek:**
- `deepseek-chat` - General purpose (default)
- `deepseek-reasoner` - Advanced reasoning

**Groq:**
- `llama-3.3-70b-versatile` - Llama 3.3 70B (default)
- `mixtral-8x7b-32768` - Mixtral MoE

**Google Gemini:**
- `gemini-2.0-flash` - Fast, efficient (default)

**Ollama (local):**
- `llama3.2:1b` - Smallest, fastest (default)
- `llama3.2:3b` - Better quality
- `qwen2.5:3b` - Good multilingual support
- (Any Ollama-supported model)
### Model Parameters
- `max_tokens`: Controls response length (default: 1024)
- `temperature`: Controls randomness (default: 0.3)
  - `0.0` - Deterministic, consistent
  - `0.3`-`0.5` - Balanced (recommended for agents)
  - `0.7`-`1.0` - Creative, varied
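A sketch showing where these parameters sit alongside the provider fields, using the documented defaults:

```toml
[provider]
primary     = "anthropic"
model       = "claude-sonnet-4-20250514"
max_tokens  = 1024    # response length cap (default)
temperature = 0.3     # 0.3-0.5 is the recommended range for agents
```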
## Fallback Chains
OneClaw supports automatic failover to backup providers.

### How Fallback Works
1. Try the primary provider (with `max_retries` attempts)
2. If the primary fails, try `fallback[0]` (with `max_retries`)
3. If `fallback[0]` fails, try `fallback[1]`, and so on
4. If all providers fail, return an error
### Example: Cloud with Local Fallback
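A sketch of a cloud-primary chain with a local fallback, assuming `fallback` accepts a list of provider IDs; the key value is a placeholder:

```toml
[provider]
primary      = "anthropic"
model        = "claude-sonnet-4-20250514"
max_retries  = 2                   # attempts per provider before moving on
fallback     = ["ollama"]          # local fallback, no API key required
ollama_model = "llama3.2:1b"       # model used when the Ollama fallback is active

[provider.keys]
anthropic = "sk-ant-..."
```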
If the cloud provider is unreachable, requests fall back to the local Ollama instance at http://localhost:11434 (no API key needed).
### Example: Multi-Cloud Fallback
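A sketch of a chain spanning several cloud providers; the provider IDs and the keys table come from the sections above, and the key values are placeholders:

```toml
[provider]
primary     = "anthropic"
fallback    = ["openai", "deepseek"]   # tried in order if the primary fails
max_retries = 3

[provider.keys]
anthropic = "sk-ant-..."
openai    = "sk-..."
deepseek  = "your-deepseek-key"
```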
## API Key Configuration
### Option 1: Per-Provider Keys (Recommended)
Use the `[provider.keys]` table for multiple providers:
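For example, one entry per provider ID (key values are placeholders):

```toml
[provider.keys]
anthropic = "sk-ant-..."         # used when primary or fallback is "anthropic"
openai    = "sk-..."             # used when primary or fallback is "openai"
deepseek  = "your-deepseek-key"
```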
### Option 2: Global API Key
Use `api_key` for single-provider setups:
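Roughly, assuming `api_key` sits directly in the `[provider]` section, as the resolution order below implies:

```toml
[provider]
primary = "openai"
api_key = "sk-..."   # applies to the single configured provider
```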
### Option 3: Environment Variables
Set the provider-specific environment variables listed in the table above (e.g. `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`).

### Key Resolution Priority
1. `[provider.keys].<provider>` in TOML
2. `api_key` field in the `[provider]` section
3. `ONECLAW_API_KEY` environment variable
4. Provider-specific env var (e.g. `ANTHROPIC_API_KEY`)
## Ollama Configuration
Ollama runs locally and requires no API key:
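A sketch of a local-only setup; whether a local-primary configuration reads its model from `model` or `ollama_model` is an assumption here:

```toml
[provider]
primary      = "ollama"        # local provider, no API key required
ollama_model = "llama3.2:1b"   # default; any Ollama-supported model works
```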
### Custom Ollama Endpoint

For remote Ollama servers, override the default endpoint (http://localhost:11434). Note that `ollama_model` is separate from `model`, allowing different models for the primary provider and the Ollama fallback.
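A sketch of a cloud-primary setup with a remote Ollama fallback; the endpoint field name `ollama_url` is hypothetical and only illustrates where such a setting would go:

```toml
[provider]
primary      = "anthropic"
model        = "claude-sonnet-4-20250514"      # model for the primary provider
fallback     = ["ollama"]
ollama_model = "qwen2.5:3b"                    # different model for the Ollama fallback
ollama_url   = "http://192.168.1.50:11434"     # hypothetical field name for a remote endpoint
```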
## Example Configurations
### Edge Device (Raspberry Pi)
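A plausible sketch for constrained hardware, favoring the smallest local model; the specific values are suggestions:

```toml
[provider]
primary      = "ollama"        # no cloud dependency, no API key
ollama_model = "llama3.2:1b"   # smallest/fastest of the listed models
max_tokens   = 512             # illustrative; the documented default is 1024
temperature  = 0.3
```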
### Development / Testing
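A sketch of a fast, inexpensive setup for iteration; the provider and model choices are suggestions, not the shipped example:

```toml
[provider]
primary     = "groq"                      # fast, inexpensive hosted inference
model       = "llama-3.3-70b-versatile"
temperature = 0.3
fallback    = ["ollama"]                  # keep working when offline
```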
### Production Multi-Cloud
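A sketch combining the pieces above (primary provider, fallback chain, per-provider keys); the exact values are placeholders:

```toml
[provider]
primary     = "anthropic"
model       = "claude-sonnet-4-20250514"
fallback    = ["openai", "deepseek"]
max_retries = 3
max_tokens  = 1024
temperature = 0.3

[provider.keys]
anthropic = "sk-ant-..."
openai    = "sk-..."
deepseek  = "your-deepseek-key"
```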
## Checking Provider Status
Use the `providers` command to see the active provider configuration.