`provider/model-name` syntax. You can mix providers across agents in a single config.
## How model references work
Models are identified using a `provider/model-name` format. You can use this inline on an agent, or define a named model in the `models` section for reuse and parameter control.
- Inline reference
- Named model
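Both forms can be sketched in a single YAML config (the agent names, instructions, and `temperature` value here are illustrative assumptions, not required fields):

```yaml
agents:
  # Inline reference: the provider/model-name string goes directly on the agent.
  root:
    model: anthropic/claude-sonnet-4-0
    instruction: Coordinate the team and delegate work.

  # Named model: this agent refers to the "fast" model defined below.
  summarizer:
    model: fast
    instruction: Summarize results concisely.

# Named models are defined once under models, for reuse and parameter control.
models:
  fast:
    provider: openai
    model: gpt-4o
    temperature: 0.2
```

Note that the two agents use different providers, which is allowed within one config.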
## Supported providers

- **OpenAI**: GPT-4o, GPT-5, GPT-5-mini. The most widely used AI models.
- **Anthropic**: Claude Sonnet 4, Claude Sonnet 4.5. Excellent for coding and analysis.
- **Google Gemini**: Gemini 2.5 Flash, Gemini 3 Pro. Fast and cost-effective.
- **AWS Bedrock**: Access Claude, Nova, Llama, and more through AWS infrastructure.
- **Docker Model Runner**: Run models locally with Docker. No API keys, no costs.
- **Custom providers**: Connect to any OpenAI-compatible API endpoint.
## Quick comparison
| Provider | Key | Auth env var | Example model reference | Local? |
|---|---|---|---|---|
| OpenAI | `openai` | `OPENAI_API_KEY` | `openai/gpt-4o` | No |
| Anthropic | `anthropic` | `ANTHROPIC_API_KEY` | `anthropic/claude-sonnet-4-0` | No |
| Google Gemini | `google` | `GOOGLE_API_KEY` | `google/gemini-2.5-flash` | No |
| AWS Bedrock | `amazon-bedrock` | AWS credentials | `amazon-bedrock/...` | No |
| Docker Model Runner | `dmr` | None | `dmr/ai/qwen3` | Yes |
| Mistral | `mistral` | `MISTRAL_API_KEY` | `mistral/mistral-large-latest` | No |
| xAI (Grok) | `xai` | `XAI_API_KEY` | `xai/grok-3` | No |
| Nebius | `nebius` | `NEBIUS_API_KEY` | `nebius/deepseek-ai/DeepSeek-V3` | No |
| MiniMax | `minimax` | `MINIMAX_API_KEY` | `minimax/MiniMax-M2.5` | No |
| Ollama | `ollama` | None | `ollama/llama3.2` | Yes |
| Custom | user-defined | user-defined | `my_provider/model` | Either |
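As the table shows, local providers need no auth env var. A minimal sketch using Docker Model Runner (the agent name and instruction are illustrative):

```yaml
agents:
  root:
    model: dmr/ai/qwen3   # runs locally; no API key or cloud account required
    instruction: Answer questions using the local model.
```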
## Custom provider configuration
Use the `providers` section to define a reusable provider with a custom `base_url`, `token_key`, and `api_type`. This is useful for self-hosted models, API proxies, and any OpenAI-compatible endpoint. Models from a custom provider are referenced with the same `provider/model` syntax:
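A sketch of such a definition, assuming a self-hosted OpenAI-compatible endpoint (the provider name, URL, env var, and model name are illustrative assumptions):

```yaml
providers:
  my_provider:
    base_url: https://llm.example.internal/v1   # any OpenAI-compatible endpoint
    token_key: MY_PROVIDER_API_KEY              # env var that holds the API token
    api_type: openai                            # speak the OpenAI wire format

agents:
  root:
    model: my_provider/llama-3.3-70b   # same provider/model reference syntax
    instruction: You are a helpful assistant.
```

The key under `providers` (here `my_provider`) becomes the provider prefix in model references.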