Overview

PicoClaw uses a model-centric configuration approach that allows you to add new providers with zero code changes. Simply specify the vendor/model format (e.g., zhipu/glm-4.7) to configure any OpenAI-compatible provider. This design enables:
  • Multiple model providers in a single config
  • Model fallbacks for resilience
  • Load balancing across multiple endpoints
  • Zero-code provider addition for OpenAI-compatible APIs
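For example, the zhipu/glm-4.7 model mentioned above needs only a single config entry — no code changes (key value illustrative):
{
  "model_list": [
    {
      "model_name": "glm-4.7",
      "model": "zhipu/glm-4.7",
      "api_key": "your-zhipu-key"
    }
  ],
  "agents": {
    "defaults": {
      "model_name": "glm-4.7"
    }
  }
}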

Configuration Format

Basic Structure

{
  "model_list": [
    {
      "model_name": "gpt4",
      "model": "openai/gpt-5.2",
      "api_key": "sk-your-key",
      "api_base": "https://api.openai.com/v1",
      "request_timeout": 300
    }
  ],
  "agents": {
    "defaults": {
      "model_name": "gpt4"
    }
  }
}

Field Definitions

model_name (required)

Type: string
User-facing alias for the model. This is what you reference in agents.defaults.model_name.
{
  "model_name": "my-gpt4"
}

model (required)

Type: string
Format: protocol/model-identifier
The protocol prefix and model identifier:
{
  "model": "openai/gpt-5.2"
}
Supported protocols:
  • openai/ - OpenAI API
  • anthropic/ - Anthropic Claude API
  • zhipu/ - Zhipu GLM
  • deepseek/ - DeepSeek
  • gemini/ - Google Gemini
  • groq/ - Groq
  • qwen/ - Alibaba Qwen
  • moonshot/ - Moonshot AI
  • nvidia/ - NVIDIA
  • cerebras/ - Cerebras
  • ollama/ - Ollama (local)
  • openrouter/ - OpenRouter
  • litellm/ - LiteLLM Proxy
  • vllm/ - vLLM
  • volcengine/ - Volcengine (Doubao)
  • shengsuanyun/ - ShengsuanYun
  • mistral/ - Mistral AI
  • github-copilot/ - GitHub Copilot
  • antigravity/ - Google Cloud Code Assist

api_key

Type: string
API authentication key for the provider.
{
  "api_key": "sk-proj-..."
}
For local models (Ollama, vLLM) or OAuth-based providers (GitHub Copilot, Antigravity), api_key may not be required.

api_base

Type: string
Base URL for the API endpoint. If omitted, uses the provider’s default endpoint.
{
  "api_base": "https://api.openai.com/v1"
}

request_timeout

Type: integer (seconds)
Default: 120
Timeout for API requests in seconds.
{
  "request_timeout": 300
}

proxy

Type: string
HTTP proxy URL for the provider.
{
  "proxy": "http://proxy.example.com:8080"
}
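In a full entry, proxy sits alongside the other fields (values illustrative):
{
  "model_name": "gpt-5.2",
  "model": "openai/gpt-5.2",
  "api_key": "sk-...",
  "api_base": "https://api.openai.com/v1",
  "proxy": "http://proxy.example.com:8080"
}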

auth_method

Type: string
Values: oauth, token
Authentication method for special providers (GitHub Copilot, Antigravity).
{
  "auth_method": "oauth"
}

connect_mode

Type: string
Values: stdio, grpc
Connection mode for GitHub Copilot.
{
  "connect_mode": "grpc"
}
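Putting both fields together, a GitHub Copilot entry might look like the sketch below. The model identifier is illustrative — the models available depend on your Copilot plan:
{
  "model_name": "copilot",
  "model": "github-copilot/gpt-4o",
  "auth_method": "oauth",
  "connect_mode": "grpc"
}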

Provider Examples

OpenAI

{
  "model_name": "gpt-5.2",
  "model": "openai/gpt-5.2",
  "api_key": "sk-proj-...",
  "api_base": "https://api.openai.com/v1"
}

Anthropic Claude

{
  "model_name": "claude-sonnet-4.6",
  "model": "anthropic/claude-sonnet-4.6",
  "api_key": "sk-ant-...",
  "api_base": "https://api.anthropic.com/v1"
}

Zhipu AI (智谱)

{
  "model_name": "glm-4.7",
  "model": "zhipu/glm-4.7",
  "api_key": "your-zhipu-key",
  "api_base": "https://open.bigmodel.cn/api/paas/v4"
}

DeepSeek

{
  "model_name": "deepseek-chat",
  "model": "deepseek/deepseek-chat",
  "api_key": "sk-...",
  "api_base": "https://api.deepseek.com/v1"
}

Google Gemini

{
  "model_name": "gemini-2.0-flash",
  "model": "gemini/gemini-2.0-flash-exp",
  "api_key": "AIza...",
  "api_base": "https://generativelanguage.googleapis.com/v1beta"
}

Qwen (通义千问)

{
  "model_name": "qwen-plus",
  "model": "qwen/qwen-plus",
  "api_key": "sk-...",
  "api_base": "https://dashscope.aliyuncs.com/compatible-mode/v1"
}

Groq

{
  "model_name": "llama-3.3-70b",
  "model": "groq/llama-3.3-70b-versatile",
  "api_key": "gsk_...",
  "api_base": "https://api.groq.com/openai/v1"
}

Ollama (Local)

{
  "model_name": "llama3",
  "model": "ollama/llama3",
  "api_base": "http://localhost:11434/v1",
  "api_key": "ollama"
}
Ollama doesn’t require a real API key; the placeholder value "ollama" satisfies clients that expect a non-empty key.
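vLLM (also local) follows the same pattern, assuming the vllm/ prefix passes the rest of the identifier through to the server unchanged. The model identifier below is illustrative and should match the model your vLLM server is serving:
{
  "model_name": "local-llama",
  "model": "vllm/meta-llama/Llama-3.1-8B-Instruct",
  "api_base": "http://localhost:8000/v1",
  "api_key": "vllm"
}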

OpenRouter

{
  "model_name": "openrouter-auto",
  "model": "openrouter/auto",
  "api_key": "sk-or-v1-...",
  "api_base": "https://openrouter.ai/api/v1"
}

LiteLLM Proxy

{
  "model_name": "lite-gpt4",
  "model": "litellm/lite-gpt4",
  "api_key": "sk-...",
  "api_base": "http://localhost:4000/v1"
}
PicoClaw strips only the outer litellm/ prefix. So litellm/lite-gpt4 sends lite-gpt4, while litellm/openai/gpt-5.2 sends openai/gpt-5.2 to your proxy.
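The second case — routing through the proxy to an upstream provider — looks like this (values illustrative):
{
  "model_name": "proxy-gpt",
  "model": "litellm/openai/gpt-5.2",
  "api_key": "sk-...",
  "api_base": "http://localhost:4000/v1"
}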

Custom OpenAI-Compatible API

{
  "model_name": "my-custom-model",
  "model": "openai/custom-model-v1",
  "api_key": "custom-key",
  "api_base": "https://my-api.example.com/v1",
  "request_timeout": 300
}

Load Balancing

Configure multiple endpoints with the same model_name for automatic round-robin load balancing:
{
  "model_list": [
    {
      "model_name": "gpt-5.2",
      "model": "openai/gpt-5.2",
      "api_base": "https://api1.example.com/v1",
      "api_key": "sk-key1"
    },
    {
      "model_name": "gpt-5.2",
      "model": "openai/gpt-5.2",
      "api_base": "https://api2.example.com/v1",
      "api_key": "sk-key2"
    }
  ]
}
PicoClaw automatically distributes requests across both endpoints.

Multiple Models

Configure multiple models and choose between them. model_fallbacks lists aliases to try, in order, if the default model fails:
{
  "model_list": [
    {
      "model_name": "gpt-5.2",
      "model": "openai/gpt-5.2",
      "api_key": "sk-openai-key"
    },
    {
      "model_name": "claude-sonnet-4.6",
      "model": "anthropic/claude-sonnet-4.6",
      "api_key": "sk-ant-key"
    },
    {
      "model_name": "gemini-flash",
      "model": "gemini/gemini-2.0-flash-exp",
      "api_key": "AIza-key"
    }
  ],
  "agents": {
    "defaults": {
      "model_name": "claude-sonnet-4.6",
      "model_fallbacks": ["gpt-5.2", "gemini-flash"]
    }
  }
}

All Supported Providers

Provider | Prefix | Default API Base | API Key
OpenAI | openai/ | https://api.openai.com/v1 | Provider key
Anthropic | anthropic/ | https://api.anthropic.com/v1 | Provider key
Zhipu (智谱) | zhipu/ | https://open.bigmodel.cn/api/paas/v4 | Provider key
DeepSeek | deepseek/ | https://api.deepseek.com/v1 | Provider key
Gemini | gemini/ | https://generativelanguage.googleapis.com/v1beta | Provider key
Groq | groq/ | https://api.groq.com/openai/v1 | Provider key
Qwen (千问) | qwen/ | https://dashscope.aliyuncs.com/compatible-mode/v1 | Provider key
Moonshot | moonshot/ | https://api.moonshot.cn/v1 | Provider key
NVIDIA | nvidia/ | https://integrate.api.nvidia.com/v1 | Provider key
Ollama | ollama/ | http://localhost:11434/v1 | Local (no key)
OpenRouter | openrouter/ | https://openrouter.ai/api/v1 | Provider key
LiteLLM | litellm/ | http://localhost:4000/v1 | Your proxy key
vLLM | vllm/ | http://localhost:8000/v1 | Local
Cerebras | cerebras/ | https://api.cerebras.ai/v1 | Provider key
Volcengine | volcengine/ | https://ark.cn-beijing.volces.com/api/v3 | Provider key
ShengsuanYun | shengsuanyun/ | https://api.shengsuanyun.com/v1 | -
Mistral | mistral/ | https://api.mistral.ai/v1 | Provider key
GitHub Copilot | github-copilot/ | localhost:4321 | OAuth
Antigravity | antigravity/ | Google Cloud | OAuth

Migration from Legacy Config

If you’re using the old providers config format, see Legacy Providers Configuration for migration instructions.
