## Overview

Adding a new LLM provider to nanobot is straightforward thanks to the Provider Registry pattern: you only need to modify two files.

## Provider Registry Pattern
nanobot uses a centralized registry (`nanobot/providers/registry.py`) as the single source of truth for all provider metadata. This eliminates the need for scattered `if`/`elif` chains throughout the codebase.
## Two-Step Process

Adding a provider takes just two steps.

### Step 1: Add a ProviderSpec to the Registry
Edit `nanobot/providers/registry.py` and add a new `ProviderSpec` entry to the `PROVIDERS` tuple.
Example: Adding DeepSeek
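A sketch of the registry entry (field names follow the tables later in this guide; the exact `ProviderSpec` signature may differ in your checkout):

```python
# nanobot/providers/registry.py -- sketch of a DeepSeek entry
ProviderSpec(
    name="deepseek",                     # config field name
    keywords=("deepseek",),              # matched against model names
    env_key="DEEPSEEK_API_KEY",          # env var exported for LiteLLM
    display_name="DeepSeek",             # shown by `nanobot status`
    litellm_prefix="deepseek",           # deepseek-chat -> deepseek/deepseek-chat
    skip_prefixes=("deepseek/", "openrouter/"),
),
```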
### Step 2: Add a Config Field

Edit `nanobot/config/schema.py` and add a field to the `ProvidersConfig` class:
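A sketch, assuming `ProvidersConfig` follows a Pydantic-style pattern with an existing `ProviderConfig` field type (mirror the neighboring provider fields in your checkout):

```python
# nanobot/config/schema.py -- hypothetical field declaration
class ProvidersConfig(BaseModel):
    ...
    deepseek: ProviderConfig = ProviderConfig()  # name matches ProviderSpec.name
```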
## What Happens Automatically

Once you add these two entries, the following works automatically:

- **Environment variable setting**: the API key is exported as `DEEPSEEK_API_KEY`
- **Model prefixing**: the model name `deepseek-chat` becomes `deepseek/deepseek-chat` for LiteLLM
- **Config matching**: users can configure it in `config.json`
- **Status display**: it shows up in `nanobot status` as "DeepSeek"
- **Auto-detection**: models with "deepseek" in the name are recognized
## ProviderSpec Fields Explained

### Required Fields

| Field | Description | Example |
|---|---|---|
| `name` | Config field name (lowercase, snake_case) | `"deepseek"` |
| `keywords` | Tuple of keywords for model name matching | `("deepseek",)` |
| `env_key` | Environment variable name for LiteLLM | `"DEEPSEEK_API_KEY"` |
### Display

| Field | Description | Example |
|---|---|---|
| `display_name` | Human-readable name for `nanobot status` | `"DeepSeek"` |
### Model Prefixing

| Field | Description | Example |
|---|---|---|
| `litellm_prefix` | Prefix to add for LiteLLM routing | `"deepseek"` |
| `skip_prefixes` | Don't prefix if the model starts with these | `("deepseek/", "openrouter/")` |

With the example values above:

- Model `deepseek-chat` → `deepseek/deepseek-chat` (prefixed)
- Model `deepseek/deepseek-chat` → unchanged (skipped)
- Model `openrouter/deepseek-chat` → unchanged (skipped)
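The prefixing rule can be sketched as follows (an illustration matching the three cases above, not nanobot's actual code):

```python
# Illustrative prefixing logic -- a sketch, not nanobot's implementation.
def prefix_model(model: str, litellm_prefix: str, skip_prefixes: tuple) -> str:
    """Prepend the LiteLLM routing prefix unless the model already carries one."""
    if any(model.startswith(p) for p in skip_prefixes):
        return model
    return f"{litellm_prefix}/{model}"

SKIP = ("deepseek/", "openrouter/")
assert prefix_model("deepseek-chat", "deepseek", SKIP) == "deepseek/deepseek-chat"
assert prefix_model("deepseek/deepseek-chat", "deepseek", SKIP) == "deepseek/deepseek-chat"
assert prefix_model("openrouter/deepseek-chat", "deepseek", SKIP) == "openrouter/deepseek-chat"
```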
### Extra Environment Variables

| Field | Description | Example |
|---|---|---|
| `env_extras` | Additional env vars to set | `(("ZHIPUAI_API_KEY", "{api_key}"),)` |

Template placeholders:

- `{api_key}`: the user's API key from config
- `{api_base}`: the API base URL from config, or `default_api_base`

This is useful when one provider needs several variables, e.g. both `ZAI_API_KEY` and `ZHIPUAI_API_KEY`:
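Placeholder expansion works roughly like this (an illustrative sketch of the `env_extras` idea, not nanobot's actual code):

```python
import os

# Illustrative env_extras expansion -- a sketch, not nanobot's implementation.
def apply_env_extras(env_extras, api_key: str, api_base: str) -> None:
    """Expand {api_key}/{api_base} placeholders and export each variable."""
    for var_name, template in env_extras:
        os.environ[var_name] = template.format(api_key=api_key, api_base=api_base)

extras = (("ZAI_API_KEY", "{api_key}"), ("ZHIPUAI_API_KEY", "{api_key}"))
apply_env_extras(extras, api_key="sk-test", api_base="https://example.invalid")
assert os.environ["ZAI_API_KEY"] == "sk-test"
assert os.environ["ZHIPUAI_API_KEY"] == "sk-test"
```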
### Gateway Detection

| Field | Description | Example |
|---|---|---|
| `is_gateway` | Can route any model (like OpenRouter) | `True` |
| `detect_by_key_prefix` | Detect by API key prefix | `"sk-or-"` |
| `detect_by_base_keyword` | Detect by substring in the API base URL | `"openrouter"` |
| `default_api_base` | Default API endpoint | `"https://openrouter.ai/api/v1"` |
| `strip_model_prefix` | Strip an existing prefix before re-prefixing | `True` (for AiHubMix) |
### Local Deployment

| Field | Description | Example |
|---|---|---|
| `is_local` | Local server (vLLM, Ollama) | `True` |
### Per-Model Overrides

| Field | Description | Example |
|---|---|---|
| `model_overrides` | Override parameters for specific models | `(("kimi-k2.5", {"temperature": 1.0}),)` |

In the example above, `kimi-k2.5` is pinned to `temperature: 1.0` because the model requires `temperature >= 1.0`.
### OAuth Providers

| Field | Description | Example |
|---|---|---|
| `is_oauth` | Uses OAuth instead of an API key | `True` |

### Direct Providers

| Field | Description | Example |
|---|---|---|
| `is_direct` | Bypasses LiteLLM entirely | `True` |

### Prompt Caching

| Field | Description | Example |
|---|---|---|
| `supports_prompt_caching` | Provider supports `cache_control` | `True` |
## Provider Types

### Standard Provider

Most providers are "standard": they are matched by model name keywords.

### Gateway Provider

Gateways can route any model and are auto-detected by API key or base URL.

### Local Provider

Local deployments (vLLM, Ollama) are detected by config key.

### OAuth Provider

OAuth providers use token-based auth instead of API keys.

## User Configuration
After adding a provider, users configure it in `~/.nanobot/config.json`:
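A sketch of the user-facing config (the key names are assumptions; check `ProvidersConfig` in `nanobot/config/schema.py` for the exact shape):

```json
{
  "providers": {
    "deepseek": {
      "api_key": "sk-...",
      "api_base": null
    }
  }
}
```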
## Real-World Examples

### Example 1: Adding MiniMax

**Step 1**: Add to `registry.py`

**Step 2**: Add to `schema.py`
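A sketch of both additions (the `MINIMAX_API_KEY` name and the exact `ProviderSpec`/`ProvidersConfig` signatures are assumptions; verify against LiteLLM's docs):

```python
# registry.py -- sketch of a MiniMax entry
ProviderSpec(
    name="minimax",
    keywords=("minimax",),
    env_key="MINIMAX_API_KEY",
    display_name="MiniMax",
    litellm_prefix="minimax",
    skip_prefixes=("minimax/", "openrouter/"),
),

# schema.py -- sketch of the matching config field
# minimax: ProviderConfig = ProviderConfig()
```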
### Example 2: Adding Gemini

**Step 1**: Add to `registry.py`

**Step 2**: Add to `schema.py`
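A sketch of both additions (`GEMINI_API_KEY` and the `gemini/` prefix follow LiteLLM's usual conventions, but verify against its docs):

```python
# registry.py -- sketch of a Gemini entry
ProviderSpec(
    name="gemini",
    keywords=("gemini",),
    env_key="GEMINI_API_KEY",
    display_name="Gemini",
    litellm_prefix="gemini",
    skip_prefixes=("gemini/", "openrouter/"),
),

# schema.py -- sketch of the matching config field
# gemini: ProviderConfig = ProviderConfig()
```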
### Example 3: Adding a Gateway (SiliconFlow)

**Step 1**: Add to `registry.py`

**Step 2**: Add to `schema.py`
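A sketch of a gateway-style entry (the env var name and API base are assumptions; verify against SiliconFlow's docs):

```python
# registry.py -- sketch of a SiliconFlow gateway entry
ProviderSpec(
    name="siliconflow",
    keywords=("siliconflow",),
    env_key="SILICONFLOW_API_KEY",
    display_name="SiliconFlow",
    is_gateway=True,
    detect_by_base_keyword="siliconflow",
    default_api_base="https://api.siliconflow.cn/v1",
),

# schema.py -- sketch of the matching config field
# siliconflow: ProviderConfig = ProviderConfig()
```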
## Testing Your Provider

1. **Configure the provider**: edit `~/.nanobot/config.json`
2. **Test with the CLI**
3. **Check status**: run `nanobot status`
4. **Test with different models**
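Beyond the CLI, one quick sanity check is to inspect the registry directly from a Python shell (the module path and `PROVIDERS` tuple are named earlier in this guide; the attribute names are assumed from the field tables):

```python
# Quick sanity check of a new registry entry (run inside your nanobot env)
from nanobot.providers.registry import PROVIDERS

spec = next(s for s in PROVIDERS if s.name == "deepseek")
print(spec.display_name, spec.env_key)
```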
## Common Patterns

### Provider with a Custom API Base

If your provider uses a non-standard endpoint, set `default_api_base`.

### Provider with Multiple Environment Variables

If LiteLLM checks multiple env var names, set `env_extras`.

### Provider with Model-Specific Behavior

If certain models need parameter adjustments, use `model_overrides`.

### Provider Without a Prefix

If LiteLLM recognizes the model natively (like `gpt-4`), omit `litellm_prefix`.
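Sketches of these four patterns as registry entries (the provider names and exact `ProviderSpec` signature are illustrative assumptions; the field names come from the tables above):

```python
# Custom API base (hypothetical provider)
ProviderSpec(name="example", keywords=("example",), env_key="EXAMPLE_API_KEY",
             default_api_base="https://api.example.com/v1"),

# Multiple environment variables (pairing from the env_extras section above)
ProviderSpec(name="zhipu", keywords=("glm",), env_key="ZAI_API_KEY",
             env_extras=(("ZHIPUAI_API_KEY", "{api_key}"),)),

# Model-specific behavior (override from the model_overrides section above)
ProviderSpec(name="kimi", keywords=("kimi",), env_key="MOONSHOT_API_KEY",
             model_overrides=(("kimi-k2.5", {"temperature": 1.0}),)),

# No prefix: LiteLLM already recognizes gpt-4 natively
ProviderSpec(name="openai", keywords=("gpt",), env_key="OPENAI_API_KEY"),
```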
## Order Matters

The order of providers in the `PROVIDERS` tuple determines match priority and fallback behavior. The current order:

1. Gateways (OpenRouter, AiHubMix, etc.): highest priority for fallback
2. Standard providers (Anthropic, OpenAI, DeepSeek, etc.)
3. Auxiliary providers (Groq): lowest priority
## Template for New Providers
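Below is a template sketch (field names from the tables above; check the real `ProviderSpec` definition for required fields and defaults):

```python
ProviderSpec(
    name="myprovider",                    # config field name (snake_case)
    keywords=("myprovider",),             # substrings matched in model names
    env_key="MYPROVIDER_API_KEY",         # env var exported for LiteLLM
    display_name="MyProvider",            # shown by `nanobot status`
    litellm_prefix="myprovider",          # routing prefix for LiteLLM
    skip_prefixes=("myprovider/", "openrouter/"),
    # default_api_base="https://api.myprovider.com/v1",
    # env_extras=(("OTHER_API_KEY", "{api_key}"),),
    # model_overrides=(("special-model", {"temperature": 1.0}),),
),
```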
Copy this template when adding a new provider, trimming the fields you don't need.

## Troubleshooting
### Provider not detected

- Check that `name` matches the config field exactly
- Verify keywords are lowercase
- Check that the model name contains one of the keywords
### API key not passed to LiteLLM

- Verify `env_key` is correct (check the LiteLLM docs)
- Check `env_extras` if the provider needs multiple variables
### Model name not recognized

- Check that `litellm_prefix` is correct
- Verify `skip_prefixes` doesn't block the prefix
- Try specifying the provider explicitly in config
### Wrong provider selected

- Check the provider order in the `PROVIDERS` tuple
- More specific keywords should come before generic ones
- Gateways should come first for fallback
## Contributing

When submitting a PR to add a new provider:

- Add the `ProviderSpec` in the correct section (gateway/standard/local)
- Add the config field to `ProvidersConfig`
- Test with both CLI and gateway modes
- Include an example config in the PR description
- Mention any special requirements (API base, model overrides, etc.)