Overview

Nanobot supports multiple LLM providers through a unified configuration system. The provider registry automatically detects the correct provider based on model names, API keys, or explicit configuration.

ProvidersConfig

The root configuration object for all LLM providers. Every field below has type ProviderConfig and defaults to an empty ProviderConfig().
  • custom: Custom OpenAI-compatible endpoint (direct, bypasses LiteLLM).
  • anthropic: Anthropic (Claude) provider configuration.
  • openai: OpenAI (GPT) provider configuration.
  • openrouter: OpenRouter gateway configuration.
  • deepseek: DeepSeek provider configuration.
  • groq: Groq provider configuration.
  • zhipu: Zhipu AI (GLM) provider configuration.
  • dashscope: DashScope (Alibaba Cloud Qwen) provider configuration.
  • vllm: vLLM local deployment configuration.
  • gemini: Google Gemini provider configuration.
  • moonshot: Moonshot (Kimi) provider configuration.
  • minimax: MiniMax provider configuration.
  • aihubmix: AiHubMix gateway configuration.
  • siliconflow: SiliconFlow gateway configuration.
  • volcengine: VolcEngine gateway configuration.
  • openaiCodex: OpenAI Codex (OAuth) provider configuration.
  • githubCopilot: GitHub Copilot (OAuth) provider configuration.

ProviderConfig

Individual provider configuration.
  • apiKey (str, default ""): API key for the provider. Can be set via environment variables.
  • apiBase (str | null, default null): Custom API base URL. If not specified, the provider's default endpoint is used.
  • extraHeaders (dict[str, str] | null, default null): Custom HTTP headers to include in requests (e.g., APP-Code for AiHubMix).

Provider Registry

The provider registry (PROVIDERS) defines metadata for each supported provider.

Provider Types

Standard Providers: Directly supported by LiteLLM
  • Anthropic (Claude)
  • OpenAI (GPT)
  • DeepSeek
  • Gemini
  • And more
Gateways: Route any model through a unified API
  • OpenRouter
  • AiHubMix
  • SiliconFlow
  • VolcEngine
Local Deployments: Self-hosted models
  • vLLM
  • Ollama (via vLLM config)
OAuth Providers: Use OAuth instead of API keys
  • OpenAI Codex
  • GitHub Copilot

ProviderSpec

Each provider is defined by a ProviderSpec with the following attributes:
  • name (str): Configuration field name (e.g., dashscope, anthropic).
  • keywords (tuple[str, ...]): Model-name keywords for auto-detection (lowercase).
  • envKey (str): Environment variable name for the API key (e.g., ANTHROPIC_API_KEY).
  • displayName (str): Human-readable name shown in nanobot status.
  • litellmPrefix (str): Prefix for LiteLLM routing (e.g., deepseek/ turns deepseek-chat into deepseek/deepseek-chat).
  • skipPrefixes (tuple[str, ...]): Don't add the prefix if the model already starts with one of these.
  • envExtras (tuple[tuple[str, str], ...]): Additional environment variables to set. Supports the placeholders {api_key} and {api_base}.
  • isGateway (bool): If true, the provider can route any model (OpenRouter, AiHubMix).
  • isLocal (bool): If true, the provider is a local deployment (vLLM).
  • detectByKeyPrefix (str): Auto-detect by API key prefix (e.g., sk-or- for OpenRouter).
  • detectByBaseKeyword (str): Auto-detect by substring in the api_base URL.
  • defaultApiBase (str): Default API base URL if none is specified in the config.
  • stripModelPrefix (bool): If true, strip the provider prefix before re-prefixing (AiHubMix behavior).
  • modelOverrides (tuple[tuple[str, dict], ...]): Per-model parameter overrides (e.g., Kimi K2.5 requires temperature: 1.0).
  • isOauth (bool): If true, uses OAuth instead of API keys (OpenAI Codex, GitHub Copilot).
  • isDirect (bool): If true, bypasses LiteLLM entirely (custom provider).
  • supportsPromptCaching (bool): If true, the provider supports prompt caching (Anthropic, OpenRouter).
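
A registry entry can be pictured as a frozen record. The sketch below uses a subset of the attributes above with snake_case names and illustrative values; the real entries in nanobot's registry may differ in detail:

```python
from dataclasses import dataclass

# Illustrative ProviderSpec-like record (subset of the documented attributes).
@dataclass(frozen=True)
class ProviderSpec:
    name: str
    keywords: tuple[str, ...] = ()
    env_key: str = ""
    display_name: str = ""
    litellm_prefix: str = ""
    is_gateway: bool = False
    detect_by_key_prefix: str = ""

OPENROUTER = ProviderSpec(
    name="openrouter",
    keywords=("openrouter",),
    env_key="OPENROUTER_API_KEY",
    display_name="OpenRouter",
    litellm_prefix="openrouter",
    is_gateway=True,
    detect_by_key_prefix="sk-or-",  # allows detection from the API key alone
)
print(OPENROUTER.display_name)  # OpenRouter
```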

Provider Matching

Nanobot automatically selects the correct provider using this priority:
  1. Explicit provider prefix: github-copilot/claude → GitHub Copilot
  2. Model name keywords: claude-opus → Anthropic
  3. API key prefix: sk-or-xxx → OpenRouter
  4. API base URL: https://openrouter.ai/... → OpenRouter
  5. Forced provider: provider: anthropic in config
  6. Fallback: First available gateway with API key

Configuration Examples

Multiple Providers

providers:
  anthropic:
    apiKey: ${ANTHROPIC_API_KEY}
  
  openai:
    apiKey: ${OPENAI_API_KEY}
  
  openrouter:
    apiKey: ${OPENROUTER_API_KEY}
  
  deepseek:
    apiKey: ${DEEPSEEK_API_KEY}

Custom API Base

providers:
  anthropic:
    apiKey: sk-ant-xxx
    apiBase: https://api.anthropic.com/v1  # Custom endpoint
  
  vllm:
    apiKey: dummy  # Optional for local
    apiBase: http://localhost:8000/v1

Gateway with Custom Headers

providers:
  aihubmix:
    apiKey: ${AIHUBMIX_KEY}
    apiBase: https://aihubmix.com/v1
    extraHeaders:
      APP-Code: your-app-code

OAuth Providers

providers:
  githubCopilot:
    # No apiKey needed - uses OAuth flow
    apiBase: ""  # Auto-detected
  
  openaiCodex:
    # No apiKey needed - uses OAuth flow
    apiBase: https://chatgpt.com/backend-api

Environment Variables

Providers can be configured via environment variables:
# Standard format
NANOBOT_PROVIDERS__ANTHROPIC__API_KEY=sk-ant-xxx
NANOBOT_PROVIDERS__OPENAI__API_KEY=sk-xxx

# With custom base
NANOBOT_PROVIDERS__VLLM__API_BASE=http://localhost:8000/v1

# Direct LiteLLM env vars (also supported)
ANTHROPIC_API_KEY=sk-ant-xxx
OPENAI_API_KEY=sk-xxx
DEEPSEEK_API_KEY=sk-xxx

Provider-Specific Notes

Anthropic

  • Supports prompt caching via cache_control
  • No prefix needed (claude-opus-4-5 works directly)
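
A cacheable content block carries a cache_control marker, roughly of this shape (per Anthropic's Messages API convention; the exact wiring through nanobot and LiteLLM may differ):

```python
# A system content block marked as cacheable via cache_control.
system_block = {
    "type": "text",
    "text": "Long, reusable system prompt...",
    "cache_control": {"type": "ephemeral"},
}
print(system_block["cache_control"]["type"])  # ephemeral
```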

OpenRouter

  • Auto-detected by sk-or- key prefix
  • Routes any model through unified API
  • Supports prompt caching

DeepSeek

  • Requires deepseek/ prefix for LiteLLM
  • Models: deepseek-chat, deepseek-reasoner

Moonshot (Kimi)

  • Requires moonshot/ prefix
  • Kimi K2.5 enforces temperature >= 1.0
  • Default base: https://api.moonshot.ai/v1 (international)
  • Use https://api.moonshot.cn/v1 for China region
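
The temperature rule is the kind of per-model rule that modelOverrides expresses. A sketch of how such an override might be applied (the pattern string and table are hypothetical, not nanobot's actual registry entries):

```python
# Hypothetical override table: (model substring, parameter overrides).
MODEL_OVERRIDES = (
    ("kimi-k2.5", {"temperature": 1.0}),
)

def apply_overrides(model: str, params: dict) -> dict:
    # Apply any matching per-model overrides on top of the request params.
    out = dict(params)
    for pattern, overrides in MODEL_OVERRIDES:
        if pattern in model.lower():
            out.update(overrides)
    return out

print(apply_overrides("moonshot/kimi-k2.5", {"temperature": 0.2}))
# {'temperature': 1.0}
```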

Zhipu AI

  • Uses zai/ prefix for LiteLLM
  • Also sets ZHIPUAI_API_KEY env var for compatibility

AiHubMix

  • Gateway that strips provider prefixes
  • Example: anthropic/claude-3 → claude-3 → openai/claude-3
  • Requires APP-Code header for some configurations
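
The strip-and-re-prefix behavior can be sketched as follows (illustrative; the gateway prefix shown is an assumption taken from the example above):

```python
def reroute(model: str, gateway_prefix: str = "openai") -> str:
    # Strip any existing provider prefix ("anthropic/claude-3" -> "claude-3") ...
    bare = model.split("/", 1)[-1]
    # ... then re-prefix for the gateway.
    return f"{gateway_prefix}/{bare}"

print(reroute("anthropic/claude-3"))  # openai/claude-3
```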

vLLM (Local)

  • For self-hosted models
  • Uses hosted_vllm/ prefix
  • Requires api_base configuration

GitHub Copilot

  • OAuth-based (no API key)
  • Uses github_copilot/ prefix
  • Models must be explicitly selected

Programmatic Access

from nanobot.config.schema import Config
from nanobot.providers.registry import find_by_model, find_gateway, PROVIDERS

# Load config
config = Config()

# Get provider for model
provider = config.get_provider("claude-opus-4-5")
print(provider.api_key, provider.api_base)

# Get provider name
name = config.get_provider_name("deepseek-chat")
print(name)  # "deepseek"

# Find provider spec by model
spec = find_by_model("claude-opus-4-5")
print(spec.name, spec.litellm_prefix)

# Find gateway by API key
gateway = find_gateway(api_key="sk-or-xxx")
print(gateway.name)  # "openrouter"

# List all providers
for spec in PROVIDERS:
    print(f"{spec.label}: {spec.keywords}")
