Weaver supports multiple LLM providers through a unified interface. Each provider can be configured with API keys, custom endpoints, and specific authentication methods.

Supported Providers

Weaver integrates with the following LLM providers:
  • OpenAI - GPT-4, GPT-5 models with OAuth and API key authentication
  • Anthropic - Claude models with native SDK support
  • Google Gemini - Gemini models with API key or GCP ADC authentication
  • OpenRouter - Unified access to multiple model providers
  • Local Models - Ollama, vLLM, and self-hosted deployments

Configuration Methods

Weaver supports three configuration methods:

1. Configuration File

Add provider settings to ~/.weaver/config.json:
{
  "providers": {
    "openai": {
      "api_key": "sk-...",
      "api_base": "https://api.openai.com/v1"
    },
    "anthropic": {
      "api_key": "sk-ant-..."
    }
  },
  "agents": {
    "defaults": {
      "provider": "openai",
      "model": "gpt-4",
      "max_tokens": 8192,
      "temperature": 0.7
    }
  }
}

2. Environment Variables

Set environment variables in .env:
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=...
OPENROUTER_API_KEY=sk-or-v1-...

3. CLI Authentication

Use OAuth for supported providers:
weaver auth login --provider openai
weaver auth login --provider anthropic

Provider Selection

Weaver automatically selects the provider based on:
  1. Explicit provider configuration in config.json
  2. Model name prefix (e.g., openai/gpt-4, anthropic/claude-3)
  3. Model family detection (e.g., models containing “gpt”, “claude”, “gemini”)
  4. Fallback to OpenRouter if configured

Common Configuration Options

All providers support these options:
  • api_key (string) - Provider API key for authentication
  • api_base (string, optional) - Custom API endpoint URL
  • proxy (string, optional) - HTTP/HTTPS proxy URL
  • auth_method (string) - Authentication method: api_key, oauth, token, or provider-specific
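Putting these options together, a single provider entry might look like the fragment below (the proxy host is a placeholder, and the exact option names should be checked against your Weaver version):

```json
{
  "providers": {
    "openai": {
      "api_key": "sk-...",
      "api_base": "https://api.openai.com/v1",
      "proxy": "http://proxy.example.internal:3128",
      "auth_method": "api_key"
    }
  }
}
```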

Model Parameters

Configure default model behavior in the agents section:
{
  "agents": {
    "defaults": {
      "model": "gpt-4",
      "max_tokens": 8192,
      "temperature": 0.7,
      "max_tool_iterations": 20
    }
  }
}
  • model (string) - Model identifier (e.g., gpt-4, claude-sonnet-4-5-20250929, gemini-3-flash-preview)
  • max_tokens (integer, default: 8192) - Maximum tokens in the model response
  • temperature (float, default: 0.7) - Sampling temperature (0.0 to 2.0)
  • max_tool_iterations (integer, default: 20) - Maximum tool-calling iterations per agent execution

Next Steps

  • OpenAI Setup - Configure GPT models with OAuth or API keys
  • Anthropic Setup - Set up Claude models with native SDK support
  • Gemini Setup - Use Google’s Gemini models
  • Local Models - Run models locally with Ollama or vLLM
