## Supported Providers
Weaver integrates with the following LLM providers:

- OpenAI - GPT-4, GPT-5 models with OAuth and API key authentication
- Anthropic - Claude models with native SDK support
- Google Gemini - Gemini models with API key or GCP ADC authentication
- OpenRouter - Unified access to multiple model providers
- Local Models - Ollama, vLLM, and self-hosted deployments
## Configuration Methods

Weaver supports three configuration methods:

### 1. Configuration File
Add provider settings to `~/.weaver/config.json`:
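A minimal sketch of what such a file might contain (the schema below is illustrative; key names such as `provider`, `api_key`, and `base_url` are assumptions, not documented Weaver fields):

```json
{
  "provider": "openai",
  "api_key": "sk-...",
  "base_url": "https://api.openai.com/v1"
}
```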
### 2. Environment Variables
Set environment variables in `.env`:
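For example, a `.env` file might set per-provider keys (the variable names follow common provider conventions and are assumptions, not confirmed Weaver names):

```bash
# Assumed variable names; consult Weaver's documentation for the exact ones
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=your-key-here
```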
### 3. CLI Authentication
Use OAuth for supported providers:
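A hypothetical sign-in invocation, assuming Weaver exposes an `auth` subcommand (the command name is an assumption, not taken from this page):

```bash
# Hypothetical subcommand; the real CLI syntax may differ
weaver auth login openai
```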
## Provider Selection

Weaver automatically selects the provider based on:

- Explicit provider configuration in `config.json`
- Model name prefix (e.g., `openai/gpt-4`, `anthropic/claude-3`)
- Model family detection (e.g., models containing "gpt", "claude", "gemini")
- Fallback to OpenRouter if configured
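As an illustration of prefix-based selection, a prefixed model name alone would be enough to route a request to Anthropic without an explicit provider entry (the key names below are assumptions):

```json
{
  "agents": {
    "default": {
      "model": "anthropic/claude-3"
    }
  }
}
```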
## Common Configuration Options

All providers support these options:

- Provider API key for authentication
- Custom API endpoint URL (optional)
- HTTP/HTTPS proxy URL (optional)
- Authentication method: `api_key`, `oauth`, `token`, or provider-specific
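Combined, a provider block using these options might look like this (every key name here is illustrative, not a confirmed Weaver field):

```json
{
  "provider": "openai",
  "api_key": "sk-...",
  "base_url": "https://api.openai.com/v1",
  "proxy": "http://proxy.example.com:8080",
  "auth_method": "api_key"
}
```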
## Model Parameters

Configure default model behavior in the `agents` section:

- Model identifier (e.g., `gpt-4`, `claude-sonnet-4-5-20250929`, `gemini-3-flash-preview`)
- Maximum tokens in the model response
- Sampling temperature (0.0 to 2.0)
- Maximum tool-calling iterations per agent execution
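The parameters above might appear in the `agents` section like this (key names such as `max_tokens` and `max_iterations` are assumptions inferred from the descriptions, not documented field names):

```json
{
  "agents": {
    "default": {
      "model": "claude-sonnet-4-5-20250929",
      "max_tokens": 4096,
      "temperature": 0.7,
      "max_iterations": 10
    }
  }
}
```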
## Next Steps

- **OpenAI Setup** - Configure GPT models with OAuth or API keys
- **Anthropic Setup** - Set up Claude models with native SDK
- **Gemini Setup** - Use Google's Gemini models
- **Local Models** - Run models locally with Ollama or vLLM