
Overview

Adding a new LLM provider to nanobot is straightforward thanks to the Provider Registry pattern: you only need to modify two files.

Provider Registry Pattern

nanobot uses a centralized registry (nanobot/providers/registry.py) as the single source of truth for all provider metadata. This eliminates the need for scattered if-elif chains throughout the codebase.
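In code, each registry entry is plain data. Below is a minimal sketch of what the ProviderSpec dataclass and the PROVIDERS tuple might look like; the field names are taken from the examples in this guide, but the real dataclass may declare them differently:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ProviderSpec:
    """Sketch of a registry entry; field names follow the examples in this guide."""
    name: str
    keywords: tuple = ()
    env_key: str = ""
    display_name: str = ""
    litellm_prefix: str = ""
    skip_prefixes: tuple = ()
    env_extras: tuple = ()
    is_gateway: bool = False
    is_local: bool = False
    is_oauth: bool = False
    is_direct: bool = False
    supports_prompt_caching: bool = False
    detect_by_key_prefix: str = ""
    detect_by_base_keyword: str = ""
    default_api_base: str = ""
    strip_model_prefix: bool = False
    model_overrides: tuple = ()


# The registry itself is just a tuple of these specs.
PROVIDERS = (
    ProviderSpec(
        name="deepseek",
        keywords=("deepseek",),
        env_key="DEEPSEEK_API_KEY",
        display_name="DeepSeek",
        litellm_prefix="deepseek",
        skip_prefixes=("deepseek/",),
    ),
)
```

Because every consumer reads from this one tuple, adding a spec here is enough for matching, prefixing, and status display to pick the provider up.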

Two-Step Process

Adding a provider takes just 2 steps:

Step 1: Add a ProviderSpec to the Registry

Edit nanobot/providers/registry.py and add a new ProviderSpec entry to the PROVIDERS tuple.

Example: Adding DeepSeek

ProviderSpec(
    name="deepseek",                   # Config field name
    keywords=("deepseek",),            # Model name keywords for auto-matching
    env_key="DEEPSEEK_API_KEY",        # Environment variable for LiteLLM
    display_name="DeepSeek",           # Shown in `nanobot status`
    litellm_prefix="deepseek",         # Auto-prefix: deepseek-chat → deepseek/deepseek-chat
    skip_prefixes=("deepseek/",),      # Don't double-prefix if already prefixed
    env_extras=(),                     # Additional env vars (optional)
    is_gateway=False,                  # Not a gateway (can't route any model)
    is_local=False,                    # Not a local deployment
    detect_by_key_prefix="",           # API key prefix detection (for gateways)
    detect_by_base_keyword="",         # API base URL keyword detection (for gateways)
    default_api_base="",               # Default API base URL (if needed)
    strip_model_prefix=False,          # Strip existing prefix before re-prefixing (for gateways)
    model_overrides=(),                # Per-model parameter overrides (optional)
),

Step 2: Add a Config Field

Edit nanobot/config/schema.py and add a field to the ProvidersConfig class:
class ProvidersConfig(BaseModel):
    custom: ProviderConfig = ProviderConfig()
    openrouter: ProviderConfig = ProviderConfig()
    anthropic: ProviderConfig = ProviderConfig()
    openai: ProviderConfig = ProviderConfig()
    deepseek: ProviderConfig = ProviderConfig()  # Add this line
    # ... other providers
That’s it! The provider is now fully integrated.

What Happens Automatically

Once you add these two entries, the following works automatically:
  1. Environment variable setting: API key exported as DEEPSEEK_API_KEY
  2. Model prefixing: Model name deepseek-chat becomes deepseek/deepseek-chat for LiteLLM
  3. Config matching: User can configure it in config.json
  4. Status display: Shows up in nanobot status as “DeepSeek”
  5. Auto-detection: Recognizes models with “deepseek” in the name

ProviderSpec Fields Explained

Required Fields

| Field | Description | Example |
| --- | --- | --- |
| name | Config field name (lowercase, snake_case) | "deepseek" |
| keywords | Tuple of keywords for model name matching | ("deepseek",) |
| env_key | Environment variable name for LiteLLM | "DEEPSEEK_API_KEY" |

Display

| Field | Description | Example |
| --- | --- | --- |
| display_name | Human-readable name for nanobot status | "DeepSeek" |

Model Prefixing

| Field | Description | Example |
| --- | --- | --- |
| litellm_prefix | Prefix to add for LiteLLM routing | "deepseek" |
| skip_prefixes | Don't prefix if model starts with these | ("deepseek/", "openrouter/") |
How it works:
  • Model deepseek-chat → deepseek/deepseek-chat (prefixed)
  • Model deepseek/deepseek-chat → unchanged (skip)
  • Model openrouter/deepseek-chat → unchanged (skip)
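The prefixing rule above can be sketched as a small pure function (`apply_prefix` is a hypothetical name; nanobot's actual helper may differ):

```python
def apply_prefix(model: str, litellm_prefix: str, skip_prefixes: tuple) -> str:
    """Add the LiteLLM routing prefix unless the model is already prefixed."""
    if not litellm_prefix or any(model.startswith(p) for p in skip_prefixes):
        return model
    return f"{litellm_prefix}/{model}"
```

With skip_prefixes=("deepseek/", "openrouter/"), only a bare "deepseek-chat" gets prefixed; already-routed names pass through unchanged.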

Extra Environment Variables

| Field | Description | Example |
| --- | --- | --- |
| env_extras | Additional env vars to set | (("ZHIPUAI_API_KEY", "{api_key}"),) |
Placeholders:
  • {api_key} - User’s API key from config
  • {api_base} - API base URL from config or default_api_base
Example: Zhipu needs both ZAI_API_KEY and ZHIPUAI_API_KEY:
ProviderSpec(
    name="zhipu",
    env_key="ZAI_API_KEY",
    env_extras=(
        ("ZHIPUAI_API_KEY", "{api_key}"),
    ),
    # ...
)
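Placeholder expansion is plain string formatting over the env_extras tuple. A minimal sketch (`resolve_env_extras` is a hypothetical name):

```python
def resolve_env_extras(env_extras: tuple, api_key: str, api_base: str = "") -> dict:
    """Expand {api_key}/{api_base} placeholders into concrete env-var values."""
    return {
        name: template.format(api_key=api_key, api_base=api_base)
        for name, template in env_extras
    }
```

For the Zhipu entry above, this yields a ZHIPUAI_API_KEY value equal to the user's configured key, which is then exported alongside ZAI_API_KEY.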

Gateway Detection

| Field | Description | Example |
| --- | --- | --- |
| is_gateway | Can route any model (like OpenRouter) | True |
| detect_by_key_prefix | Detect by API key prefix | "sk-or-" |
| detect_by_base_keyword | Detect by substring in API base URL | "openrouter" |
| default_api_base | Default API endpoint | "https://openrouter.ai/api/v1" |
| strip_model_prefix | Strip existing prefix before re-prefixing | True (for AiHubMix) |
Gateway Example: OpenRouter
ProviderSpec(
    name="openrouter",
    keywords=("openrouter",),
    env_key="OPENROUTER_API_KEY",
    litellm_prefix="openrouter",
    is_gateway=True,
    detect_by_key_prefix="sk-or-",     # Auto-detect keys starting with "sk-or-"
    detect_by_base_keyword="openrouter", # Auto-detect URLs containing "openrouter"
    default_api_base="https://openrouter.ai/api/v1",
    # ...
)
Gateway with prefix stripping: AiHubMix
ProviderSpec(
    name="aihubmix",
    litellm_prefix="openai",
    is_gateway=True,
    detect_by_base_keyword="aihubmix",
    strip_model_prefix=True,  # "anthropic/claude-3" → "claude-3" → "openai/claude-3"
    # ...
)
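Gateway auto-detection and prefix stripping can be sketched like this (both function names are hypothetical; the behavior follows the field descriptions above):

```python
def is_gateway_config(api_key: str, api_base: str,
                      key_prefix: str, base_keyword: str) -> bool:
    """True when the API key prefix or the base-URL keyword matches."""
    if key_prefix and api_key.startswith(key_prefix):
        return True
    return bool(base_keyword) and base_keyword in api_base


def reprefix_for_gateway(model: str, litellm_prefix: str) -> str:
    """Strip an existing 'vendor/' prefix, then apply the gateway's prefix."""
    bare = model.split("/", 1)[-1]
    return f"{litellm_prefix}/{bare}"
```

The second function reproduces the AiHubMix comment above: "anthropic/claude-3" is stripped to "claude-3" and re-prefixed to "openai/claude-3".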

Local Deployment

| Field | Description | Example |
| --- | --- | --- |
| is_local | Local server (vLLM, Ollama) | True |
Local Example: vLLM
ProviderSpec(
    name="vllm",
    keywords=("vllm",),
    env_key="HOSTED_VLLM_API_KEY",
    litellm_prefix="hosted_vllm",
    is_local=True,
    # ...
)

Per-Model Overrides

| Field | Description | Example |
| --- | --- | --- |
| model_overrides | Override parameters for specific models | (("kimi-k2.5", {"temperature": 1.0}),) |
Example: Kimi K2.5 requires temperature >= 1.0
ProviderSpec(
    name="moonshot",
    model_overrides=(
        ("kimi-k2.5", {"temperature": 1.0}),
    ),
    # ...
)
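Applying an override amounts to merging a dict into the request parameters when the model key matches. A sketch (`apply_overrides` is a hypothetical name):

```python
def apply_overrides(model: str, params: dict, model_overrides: tuple) -> dict:
    """Merge per-model parameter overrides into the request parameters."""
    merged = dict(params)
    for model_key, overrides in model_overrides:
        if model_key in model:
            merged.update(overrides)
    return merged
```

For Kimi K2.5 this bumps a configured temperature of 0.2 up to the required 1.0 while leaving other models untouched.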

OAuth Providers

| Field | Description | Example |
| --- | --- | --- |
| is_oauth | Uses OAuth instead of API key | True |
OAuth Example: OpenAI Codex
ProviderSpec(
    name="openai_codex",
    keywords=("openai-codex",),
    env_key="",  # No API key
    is_oauth=True,
    detect_by_base_keyword="codex",
    default_api_base="https://chatgpt.com/backend-api",
    # ...
)

Direct Providers

| Field | Description | Example |
| --- | --- | --- |
| is_direct | Bypasses LiteLLM entirely | True |
Direct Example: Custom provider
ProviderSpec(
    name="custom",
    keywords=(),
    env_key="",
    is_direct=True,  # Uses CustomProvider class directly
    # ...
)

Prompt Caching

| Field | Description | Example |
| --- | --- | --- |
| supports_prompt_caching | Provider supports cache_control | True |
Example: Anthropic
ProviderSpec(
    name="anthropic",
    supports_prompt_caching=True,
    # ...
)

Provider Types

Standard Provider

Most providers are “standard” — they’re matched by model name keywords.
ProviderSpec(
    name="deepseek",
    keywords=("deepseek",),
    env_key="DEEPSEEK_API_KEY",
    display_name="DeepSeek",
    litellm_prefix="deepseek",
    skip_prefixes=("deepseek/",),
),

Gateway Provider

Gateways can route any model and are auto-detected by API key/base URL.
ProviderSpec(
    name="openrouter",
    keywords=("openrouter",),
    env_key="OPENROUTER_API_KEY",
    litellm_prefix="openrouter",
    is_gateway=True,
    detect_by_key_prefix="sk-or-",
    detect_by_base_keyword="openrouter",
    default_api_base="https://openrouter.ai/api/v1",
),

Local Provider

Local deployments (vLLM, Ollama) are detected by config key.
ProviderSpec(
    name="vllm",
    keywords=("vllm",),
    env_key="HOSTED_VLLM_API_KEY",
    litellm_prefix="hosted_vllm",
    is_local=True,
),

OAuth Provider

OAuth providers use token-based auth instead of API keys.
ProviderSpec(
    name="openai_codex",
    keywords=("openai-codex",),
    env_key="",
    is_oauth=True,
    detect_by_base_keyword="codex",
),

User Configuration

After adding a provider, users configure it in ~/.nanobot/config.json:
{
  "providers": {
    "deepseek": {
      "apiKey": "sk-xxx"
    }
  },
  "agents": {
    "defaults": {
      "model": "deepseek-chat",
      "provider": "deepseek"
    }
  }
}

Real-World Examples

Example 1: Adding MiniMax

Step 1: Add to registry.py
ProviderSpec(
    name="minimax",
    keywords=("minimax",),
    env_key="MINIMAX_API_KEY",
    display_name="MiniMax",
    litellm_prefix="minimax",
    skip_prefixes=("minimax/", "openrouter/"),
    env_extras=(),
    is_gateway=False,
    is_local=False,
    detect_by_key_prefix="",
    detect_by_base_keyword="",
    default_api_base="https://api.minimax.io/v1",
    strip_model_prefix=False,
    model_overrides=(),
),
Step 2: Add to schema.py
class ProvidersConfig(BaseModel):
    # ...
    minimax: ProviderConfig = ProviderConfig()
Done!

Example 2: Adding Gemini

Step 1: Add to registry.py
ProviderSpec(
    name="gemini",
    keywords=("gemini",),
    env_key="GEMINI_API_KEY",
    display_name="Gemini",
    litellm_prefix="gemini",
    skip_prefixes=("gemini/",),
),
Step 2: Add to schema.py
class ProvidersConfig(BaseModel):
    # ...
    gemini: ProviderConfig = ProviderConfig()
Done!

Example 3: Adding a Gateway (SiliconFlow)

Step 1: Add to registry.py
ProviderSpec(
    name="siliconflow",
    keywords=("siliconflow",),
    env_key="OPENAI_API_KEY",  # OpenAI-compatible
    display_name="SiliconFlow",
    litellm_prefix="openai",
    is_gateway=True,
    detect_by_base_keyword="siliconflow",
    default_api_base="https://api.siliconflow.cn/v1",
),
Step 2: Add to schema.py
class ProvidersConfig(BaseModel):
    # ...
    siliconflow: ProviderConfig = ProviderConfig()
Done!

Testing Your Provider

1. Configure the provider

Edit ~/.nanobot/config.json:
{
  "providers": {
    "yourprovider": {
      "apiKey": "your-api-key"
    }
  },
  "agents": {
    "defaults": {
      "model": "your-model-name",
      "provider": "yourprovider"
    }
  }
}

2. Test with CLI

nanobot agent -m "Hello, test message"

3. Check status

nanobot status
You should see your provider listed with a checkmark if configured.

4. Test with different models

nanobot agent -m "Test" --model "your-other-model"

Common Patterns

Provider with Custom API Base

If your provider uses a non-standard endpoint:
ProviderSpec(
    name="yourprovider",
    default_api_base="https://api.yourprovider.com/v1",
    # ...
)

Provider with Multiple Environment Variables

If LiteLLM checks multiple env var names:
ProviderSpec(
    name="yourprovider",
    env_key="YOURPROVIDER_API_KEY",
    env_extras=(
        ("YOURPROVIDER_KEY", "{api_key}"),  # Alias
    ),
    # ...
)

Provider with Model-Specific Behavior

If certain models need parameter adjustments:
ProviderSpec(
    name="yourprovider",
    model_overrides=(
        ("special-model", {"temperature": 1.0}),
        ("another-model", {"max_tokens": 8192}),
    ),
    # ...
)

Provider Without Prefix

If LiteLLM recognizes the model natively (like gpt-4):
ProviderSpec(
    name="openai",
    keywords=("openai", "gpt"),
    litellm_prefix="",  # No prefix needed
    skip_prefixes=(),
    # ...
)

Order Matters

The order of providers in the PROVIDERS tuple determines match priority and fallback behavior. Current order:
  1. Gateways (OpenRouter, AiHubMix, etc.) - highest priority for fallback
  2. Standard providers (Anthropic, OpenAI, DeepSeek, etc.)
  3. Auxiliary providers (Groq) - lowest priority
Why? Gateways can route any model, so they should be tried first when auto-detecting.

Template for New Providers

Copy this template when adding a new provider:
# In nanobot/providers/registry.py
ProviderSpec(
    name="yourprovider",
    keywords=("yourprovider", "keyword2"),
    env_key="YOURPROVIDER_API_KEY",
    display_name="Your Provider",
    litellm_prefix="yourprovider",
    skip_prefixes=("yourprovider/",),
    env_extras=(),
    is_gateway=False,
    is_local=False,
    detect_by_key_prefix="",
    detect_by_base_keyword="",
    default_api_base="",
    strip_model_prefix=False,
    model_overrides=(),
),

# In nanobot/config/schema.py
class ProvidersConfig(BaseModel):
    # ...
    yourprovider: ProviderConfig = ProviderConfig()

Troubleshooting

Provider not detected

  • Check that name matches the config field exactly
  • Verify keywords are lowercase
  • Check model name contains one of the keywords

API key not passed to LiteLLM

  • Verify env_key is correct (check LiteLLM docs)
  • Check env_extras if provider needs multiple vars

Model name not recognized

  • Check litellm_prefix is correct
  • Verify skip_prefixes doesn’t block the prefix
  • Try specifying provider explicitly in config

Wrong provider selected

  • Check provider order in PROVIDERS tuple
  • More specific keywords should come before generic ones
  • Gateways should be first for fallback

Contributing

When submitting a PR to add a new provider:
  1. Add the ProviderSpec in the correct section (gateway/standard/local)
  2. Add the config field to ProvidersConfig
  3. Test with both CLI and gateway modes
  4. Include example config in PR description
  5. Mention any special requirements (API base, model overrides, etc.)
See the Contributing Guide for more details.
