Select a provider, set an API key (if required), and reference models with provider/model-name syntax. You can mix providers across agents in a single config.

How model references work

Models are identified using a provider/model-name format. You can use this inline on an agent or define a named model in the models section for reuse and parameter control.
agents:
  root:
    model: openai/gpt-4o
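The same model can instead be declared once in the models section and referenced by name, which is useful for reuse and parameter control (the max_tokens value here is illustrative):

```yaml
models:
  gpt:
    provider: openai
    model: gpt-4o
    max_tokens: 16384   # illustrative parameter override

agents:
  root:
    model: gpt          # references the named model above
```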
See Model configuration for the full schema.

Supported providers

OpenAI

GPT-4o, GPT-5, GPT-5-mini. The most widely used AI models.

Anthropic

Claude Sonnet 4, Claude Sonnet 4.5. Excellent for coding and analysis.

Google Gemini

Gemini 2.5 Flash, Gemini 3 Pro. Fast and cost-effective.

AWS Bedrock

Access Claude, Nova, Llama, and more through AWS infrastructure.

Docker Model Runner

Run models locally with Docker. No API keys, no costs.

Custom providers

Connect to any OpenAI-compatible API endpoint.

Quick comparison

| Provider | Key | Auth env var | Example model reference | Local? |
| --- | --- | --- | --- | --- |
| OpenAI | openai | OPENAI_API_KEY | openai/gpt-4o | No |
| Anthropic | anthropic | ANTHROPIC_API_KEY | anthropic/claude-sonnet-4-0 | No |
| Google Gemini | google | GOOGLE_API_KEY | google/gemini-2.5-flash | No |
| AWS Bedrock | amazon-bedrock | AWS credentials | amazon-bedrock/... | No |
| Docker Model Runner | dmr | None | dmr/ai/qwen3 | Yes |
| Mistral | mistral | MISTRAL_API_KEY | mistral/mistral-large-latest | No |
| xAI (Grok) | xai | XAI_API_KEY | xai/grok-3 | No |
| Nebius | nebius | NEBIUS_API_KEY | nebius/deepseek-ai/DeepSeek-V3 | No |
| MiniMax | minimax | MINIMAX_API_KEY | minimax/MiniMax-M2.5 | No |
| Ollama | ollama | None | ollama/llama3.2 | Yes |
| Custom | user-defined | user-defined | my_provider/model | Either |
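For hosted providers, the auth variable from the table must be present in the environment before the agent runs. A minimal setup, assuming a POSIX shell and placeholder key values:

```shell
# Placeholder values; substitute your real keys.
export OPENAI_API_KEY="sk-your-key-here"
export ANTHROPIC_API_KEY="your-key-here"
```

Local providers such as Docker Model Runner and Ollama need no key at all.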

Custom provider configuration

Use the providers section to define a reusable provider with a custom base_url, token_key, and api_type. This is useful for self-hosted models, API proxies, and any OpenAI-compatible endpoint.
providers:
  my_gateway:
    api_type: openai_chatcompletions  # openai_chatcompletions | openai_responses
    base_url: https://api.example.com/v1
    token_key: MY_API_KEY             # name of the env var holding the token

models:
  my_model:
    provider: my_gateway
    model: gpt-4o
    max_tokens: 32768

agents:
  root:
    model: my_model
Once defined, you can also use shorthand provider/model syntax:
agents:
  root:
    model: my_gateway/gpt-4o-mini
See Custom providers for full details.

Using multiple providers

Different agents in the same configuration can use different providers. This lets you assign expensive models to complex tasks and cheaper or local models to routine work.
models:
  claude:
    provider: anthropic
    model: claude-sonnet-4-0
    max_tokens: 64000
  gpt:
    provider: openai
    model: gpt-4o
  local:
    provider: dmr
    model: ai/qwen3

agents:
  root:
    model: claude       # coordinator uses Claude
    sub_agents: [coder, helper]
  coder:
    model: gpt          # coder uses GPT-4o
  helper:
    model: local        # helper runs locally for free
See Multi-agent for orchestration patterns.

Switching providers without changing agent logic

Because agents reference models by name — not by hardcoded provider strings — you can swap the underlying provider without touching the agent’s instructions, tools, or structure. Just update the model definition.
models:
  main:
    provider: anthropic     # change this to openai, google, etc.
    model: claude-sonnet-4-0

agents:
  root:
    model: main             # agent logic unchanged
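For example, switching the root agent from Anthropic to OpenAI touches only the model entry (model names are illustrative):

```yaml
models:
  main:
    provider: openai        # was: anthropic
    model: gpt-4o           # was: claude-sonnet-4-0

agents:
  root:
    model: main             # unchanged
```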
