## Overview
PicoClaw uses a model-centric configuration approach that allows you to add new providers with zero code changes. Simply specify the `vendor/model` format (e.g., `zhipu/glm-4.7`) to configure any OpenAI-compatible provider.
This design enables:
- Multiple model providers in a single config
- Model fallbacks for resilience
- Load balancing across multiple endpoints
- Zero-code provider addition for OpenAI-compatible APIs
## Configuration Format
### Basic Structure
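The original example block is missing here, so the following is a minimal sketch assuming a JSON config file with a top-level `models` list and an `agents.defaults` block (field names per the definitions below; the model identifier and key are illustrative placeholders):

```json
{
  "models": [
    {
      "model_name": "my-model",
      "model": "openai/gpt-4o",
      "api_key": "sk-...",
      "api_base": "https://api.openai.com/v1",
      "request_timeout": 120
    }
  ],
  "agents": {
    "defaults": {
      "model_name": "my-model"
    }
  }
}
```

Each entry in `models` describes one provider endpoint; agents pick a model by its `model_name` alias.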
### Field Definitions
#### model_name (required)
Type: `string`

User-facing alias for the model. This is what you reference in `agents.defaults.model_name`.
#### model (required)
Type: `string` | Format: `protocol/model-identifier`

The protocol prefix and model identifier:
- `openai/` - OpenAI API
- `anthropic/` - Anthropic Claude API
- `zhipu/` - Zhipu GLM
- `deepseek/` - DeepSeek
- `gemini/` - Google Gemini
- `groq/` - Groq
- `qwen/` - Alibaba Qwen
- `moonshot/` - Moonshot AI
- `nvidia/` - NVIDIA
- `cerebras/` - Cerebras
- `ollama/` - Ollama (local)
- `openrouter/` - OpenRouter
- `litellm/` - LiteLLM Proxy
- `vllm/` - vLLM
- `volcengine/` - Volcengine (Doubao)
- `shengsuanyun/` - ShengsuanYun
- `mistral/` - Mistral AI
- `github-copilot/` - GitHub Copilot
- `antigravity/` - Google Cloud Code Assist
#### api_key
Type: `string`

API authentication key for the provider. For local models (Ollama, vLLM) or OAuth-based providers (GitHub Copilot, Antigravity), `api_key` may not be required.

#### api_base
Type: `string`

Base URL for the API endpoint. If omitted, the provider's default endpoint is used.
#### request_timeout
Type: `integer` (seconds) | Default: `120`

Timeout for API requests in seconds.
#### proxy
Type: `string`

HTTP proxy URL for the provider.
#### auth_method
Type: `string` | Values: `oauth`, `token`

Authentication method for special providers (GitHub Copilot, Antigravity).
#### connect_mode
Type: `string` | Values: `stdio`, `grpc`

Connection mode for GitHub Copilot.
## Provider Examples
### OpenAI
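A sketch of an OpenAI entry (model identifier and key are illustrative):

```json
{
  "model_name": "gpt",
  "model": "openai/gpt-4o",
  "api_key": "sk-..."
}
```

`api_base` can be omitted here; the default `https://api.openai.com/v1` is used.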
### Anthropic Claude
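A sketch of an Anthropic entry (substitute the Claude model identifier you use):

```json
{
  "model_name": "claude",
  "model": "anthropic/claude-sonnet-4-5",
  "api_key": "sk-ant-..."
}
```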
### Zhipu AI (智谱)
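A sketch of a Zhipu entry, reusing the `zhipu/glm-4.7` example from the overview (key is a placeholder):

```json
{
  "model_name": "glm",
  "model": "zhipu/glm-4.7",
  "api_key": "..."
}
```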
### DeepSeek
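A sketch of a DeepSeek entry (identifier illustrative):

```json
{
  "model_name": "deepseek",
  "model": "deepseek/deepseek-chat",
  "api_key": "sk-..."
}
```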
### Google Gemini
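A sketch of a Gemini entry (identifier illustrative):

```json
{
  "model_name": "gemini",
  "model": "gemini/gemini-2.5-flash",
  "api_key": "AIza..."
}
```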
### Qwen (通义千问)
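A sketch of a Qwen entry via DashScope's compatible-mode endpoint (identifier illustrative):

```json
{
  "model_name": "qwen",
  "model": "qwen/qwen-max",
  "api_key": "sk-..."
}
```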
### Groq
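A sketch of a Groq entry (identifier illustrative):

```json
{
  "model_name": "groq-llama",
  "model": "groq/llama-3.3-70b-versatile",
  "api_key": "gsk_..."
}
```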
### Ollama (Local)
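A sketch of a local Ollama entry (model name illustrative). As noted under `api_key`, local models do not need a key:

```json
{
  "model_name": "local-llama",
  "model": "ollama/llama3.2",
  "api_base": "http://localhost:11434/v1"
}
```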
### OpenRouter
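A sketch of an OpenRouter entry. OpenRouter model IDs themselves contain a vendor segment; this assumes PicoClaw strips only the outer `openrouter/` prefix (as it documents for LiteLLM) and forwards the rest:

```json
{
  "model_name": "or-deepseek",
  "model": "openrouter/deepseek/deepseek-chat",
  "api_key": "sk-or-..."
}
```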
### LiteLLM Proxy
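A sketch of a LiteLLM Proxy entry, using the `litellm/openai/gpt-5.2` example from the prefix-stripping note and the default proxy address from the provider table:

```json
{
  "model_name": "proxy-gpt",
  "model": "litellm/openai/gpt-5.2",
  "api_base": "http://localhost:4000/v1",
  "api_key": "sk-litellm-..."
}
```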
PicoClaw strips only the outer `litellm/` prefix: `litellm/lite-gpt4` sends `lite-gpt4` to your proxy, while `litellm/openai/gpt-5.2` sends `openai/gpt-5.2`.

### Custom OpenAI-Compatible API
## Load Balancing
Configure multiple endpoints with the same `model_name` for automatic round-robin load balancing:
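A sketch with two entries sharing one alias (assuming the JSON `models` list layout; keys are placeholders), so requests to `glm` rotate across both endpoints:

```json
{
  "models": [
    {
      "model_name": "glm",
      "model": "zhipu/glm-4.7",
      "api_key": "key-for-endpoint-a"
    },
    {
      "model_name": "glm",
      "model": "zhipu/glm-4.7",
      "api_key": "key-for-endpoint-b"
    }
  ]
}
```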
## Multiple Models
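A sketch defining two models under different aliases (assuming the JSON `models` list layout; identifiers and keys are illustrative):

```json
{
  "models": [
    {
      "model_name": "fast",
      "model": "groq/llama-3.3-70b-versatile",
      "api_key": "gsk_..."
    },
    {
      "model_name": "smart",
      "model": "anthropic/claude-sonnet-4-5",
      "api_key": "sk-ant-..."
    }
  ],
  "agents": {
    "defaults": {
      "model_name": "smart"
    }
  }
}
```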
Configure several models and select between them with `agents.defaults.model_name`.

## All Supported Providers
| Provider | Prefix | Default API Base | Get API Key |
|---|---|---|---|
| OpenAI | openai/ | https://api.openai.com/v1 | Get Key |
| Anthropic | anthropic/ | https://api.anthropic.com/v1 | Get Key |
| Zhipu (智谱) | zhipu/ | https://open.bigmodel.cn/api/paas/v4 | Get Key |
| DeepSeek | deepseek/ | https://api.deepseek.com/v1 | Get Key |
| Gemini | gemini/ | https://generativelanguage.googleapis.com/v1beta | Get Key |
| Groq | groq/ | https://api.groq.com/openai/v1 | Get Key |
| Qwen (千问) | qwen/ | https://dashscope.aliyuncs.com/compatible-mode/v1 | Get Key |
| Moonshot | moonshot/ | https://api.moonshot.cn/v1 | Get Key |
| NVIDIA | nvidia/ | https://integrate.api.nvidia.com/v1 | Get Key |
| Ollama | ollama/ | http://localhost:11434/v1 | Local (no key) |
| OpenRouter | openrouter/ | https://openrouter.ai/api/v1 | Get Key |
| LiteLLM | litellm/ | http://localhost:4000/v1 | Your proxy key |
| vLLM | vllm/ | http://localhost:8000/v1 | Local |
| Cerebras | cerebras/ | https://api.cerebras.ai/v1 | Get Key |
| Volcengine | volcengine/ | https://ark.cn-beijing.volces.com/api/v3 | Get Key |
| ShengsuanYun | shengsuanyun/ | https://api.shengsuanyun.com/v1 | - |
| Mistral | mistral/ | https://api.mistral.ai/v1 | Get Key |
| GitHub Copilot | github-copilot/ | localhost:4321 | OAuth |
| Antigravity | antigravity/ | Google Cloud | OAuth |
## Migration from Legacy Config
If you're using the old `providers` config format, see Legacy Providers Configuration for migration instructions.