Overview
The legacy `providers` configuration was PicoClaw's original way to configure LLM providers. It's still supported for backward compatibility, but new configurations should use `model_list`.
Legacy Configuration Format
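As a rough illustration, the legacy format nested per-provider settings under a single `providers` key. The sketch below assumes a JSON config file; the field names (`api_key`, `model`) are illustrative and not confirmed against PicoClaw's actual schema:

```json
{
  "providers": {
    "openai": {
      "api_key": "sk-...",
      "model": "gpt-4o"
    }
  }
}
```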
Supported Legacy Providers
- `anthropic` - Anthropic Claude
- `openai` - OpenAI GPT
- `litellm` - LiteLLM Proxy
- `openrouter` - OpenRouter
- `groq` - Groq
- `zhipu` - Zhipu AI (智谱)
- `vllm` - vLLM
- `gemini` - Google Gemini
- `nvidia` - NVIDIA
- `ollama` - Ollama (local)
- `moonshot` - Moonshot AI
- `shengsuanyun` - ShengsuanYun
- `deepseek` - DeepSeek
- `cerebras` - Cerebras
- `volcengine` - Volcengine (火山引擎)
- `github_copilot` - GitHub Copilot
- `antigravity` - Google Cloud Code Assist
- `qwen` - Alibaba Qwen
- `mistral` - Mistral AI
Migration to model_list
Example 1: Single Provider
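A hedged sketch of what a single-provider migration might look like, assuming a JSON config; the field names (`model_name`, `provider`, `api_key`) are assumptions rather than PicoClaw's confirmed schema.

Old format:

```json
{
  "providers": {
    "anthropic": {
      "api_key": "sk-ant-...",
      "model": "claude-..."
    }
  }
}
```

New format:

```json
{
  "model_list": [
    {
      "model_name": "claude-...",
      "provider": "anthropic",
      "api_key": "sk-ant-..."
    }
  ]
}
```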
Example 2: Multiple Providers
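Likewise for multiple providers, a hedged sketch with the same caveats (all field names are assumptions).

Old format:

```json
{
  "providers": {
    "openai": { "api_key": "sk-..." },
    "groq": { "api_key": "gsk_..." }
  }
}
```

New format:

```json
{
  "model_list": [
    { "model_name": "gpt-4o", "provider": "openai", "api_key": "sk-..." },
    { "model_name": "llama-...", "provider": "groq", "api_key": "gsk_..." }
  ]
}
```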
Automatic Migration
PicoClaw automatically migrates legacy configurations at runtime:

- If `model_list` is empty and `providers` has configuration, PicoClaw converts `providers` to `model_list` internally
- Your config file is not modified
- You can manually update to the new format at any time
Environment Variables (Legacy)
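Providers typically read the vendors' conventional key variables; the names below follow those common vendor conventions and are assumed rather than confirmed against PicoClaw's source:

```shell
# Common vendor API-key variables (names are the usual vendor
# conventions; whether PicoClaw reads each one is an assumption).
export OPENAI_API_KEY="sk-your-key"
export ANTHROPIC_API_KEY="sk-ant-your-key"
export GROQ_API_KEY="gsk_your-key"
```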
Legacy provider environment variables still work.

Provider-Specific Options
Request Timeout
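A hedged sketch of a per-provider request timeout in seconds; the `timeout` key name and its placement are assumptions, not confirmed schema:

```json
{
  "providers": {
    "openai": {
      "api_key": "sk-...",
      "timeout": 120
    }
  }
}
```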
A custom timeout, in seconds, can be set for provider requests.

Proxy Configuration
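A per-provider HTTP proxy might look like the following; the `proxy` key name is an assumption:

```json
{
  "providers": {
    "openai": {
      "api_key": "sk-...",
      "proxy": "http://127.0.0.1:7890"
    }
  }
}
```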
An HTTP proxy can be configured per provider.

OpenAI Web Search
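Enabling web search augmentation for OpenAI could look like this; the `web_search` key name is an assumption:

```json
{
  "providers": {
    "openai": {
      "api_key": "sk-...",
      "web_search": true
    }
  }
}
```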
Web search augmentation can be enabled for OpenAI.

GitHub Copilot Connect Mode
The connect mode can be set to `stdio` or `grpc`.
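A hedged sketch of selecting the Copilot connect mode; the `connect_mode` key name is an assumption, though the `stdio`/`grpc` values come from this section:

```json
{
  "providers": {
    "github_copilot": {
      "connect_mode": "stdio"
    }
  }
}
```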
Why Migrate?
The new `model_list` format provides:
- Multiple models per provider - Configure GPT-5.2 and GPT-4o separately
- Load balancing - Multiple endpoints for the same model
- Better fallbacks - Explicit model fallback chains
- Zero-code providers - Add OpenAI-compatible APIs without code changes
- Cleaner configuration - Model-centric instead of provider-centric
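For instance, load balancing might be expressed by repeating a `model_name` across endpoints, letting PicoClaw distribute requests between them; every field name and URL here is illustrative rather than confirmed schema:

```json
{
  "model_list": [
    {
      "model_name": "gpt-4o",
      "provider": "openai",
      "api_base": "https://api.openai.com/v1",
      "api_key": "sk-..."
    },
    {
      "model_name": "gpt-4o",
      "provider": "openai",
      "api_base": "https://my-gateway.example.com/v1",
      "api_key": "sk-..."
    }
  ]
}
```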