Providers are the LLM services that power the nanobot agent. Most providers require an API key and have provider-specific configuration options; a few authenticate via OAuth instead.

Provider Auto-Detection

Nanobot automatically detects the correct provider based on your model name:
{
  "agents": {
    "defaults": {
      "model": "anthropic/claude-opus-4-5",
      "provider": "auto"
    }
  }
}
The provider field can be:
  • "auto" — Auto-detect from model name (recommended)
  • Provider name — Force a specific provider (e.g. "openrouter", "anthropic")
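For example, to pin a Claude model to the OpenRouter gateway rather than relying on auto-detection (assuming an openrouter entry exists under providers):

```json
{
  "agents": {
    "defaults": {
      "model": "anthropic/claude-opus-4-5",
      "provider": "openrouter"
    }
  }
}
```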

Provider Configuration

Each provider has three common fields:
providers.{name}.apiKey (string, required)
API key for authentication. Required for all providers except OAuth-based ones.
providers.{name}.apiBase (string)
Custom API base URL. Overrides the default endpoint.
providers.{name}.extraHeaders (object)
Custom HTTP headers to include in API requests (e.g. for gateway authentication).
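Combined, a provider entry routed through a corporate gateway might look like this (the gateway URL and header name are illustrative placeholders, not defaults):

```json
{
  "providers": {
    "openai": {
      "apiKey": "sk-xxx",
      "apiBase": "https://llm-gateway.example.com/v1",
      "extraHeaders": {
        "X-Gateway-Auth": "your-gateway-token"
      }
    }
  }
}
```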

Available Providers

Gateway Providers

Gateway providers can route any model and provide access to multiple LLM backends.
OpenRouter

Global LLM gateway with access to all major models.
Configuration:
{
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-v1-xxx"
    }
  }
}
Get API Key: openrouter.ai/keys
Default API Base: https://openrouter.ai/api/v1
Model Prefix: openrouter/ (auto-added)
Features:
  • Access to 200+ models
  • Prompt caching support
  • Auto-detected by the sk-or- key prefix
AIHubMix

OpenAI-compatible API gateway.
Configuration:
{
  "providers": {
    "aihubmix": {
      "apiKey": "your-key",
      "extraHeaders": {
        "APP-Code": "your-app-code"
      }
    }
  }
}
Get API Key: aihubmix.com
Default API Base: https://aihubmix.com/v1
Note: Strips existing model prefixes before routing (e.g. anthropic/claude-3 → claude-3, openai/claude-3 → claude-3).
SiliconFlow

Chinese LLM gateway with an OpenAI-compatible API.
Configuration:
{
  "providers": {
    "siliconflow": {
      "apiKey": "your-key"
    }
  }
}
Get API Key: siliconflow.cn
Default API Base: https://api.siliconflow.cn/v1
VolcEngine

ByteDance’s LLM gateway.
Configuration:
{
  "providers": {
    "volcengine": {
      "apiKey": "your-key"
    }
  }
}
Get API Key: volcengine.com
Default API Base: https://ark.cn-beijing.volces.com/api/v3
Coding Plan: For the VolcEngine coding plan, set "apiBase": "https://ark.cn-beijing.volces.com/api/coding/v3"

Standard Providers

Direct connections to individual LLM providers.
Anthropic

Direct access to Claude models.
Configuration:
{
  "providers": {
    "anthropic": {
      "apiKey": "sk-ant-xxx"
    }
  },
  "agents": {
    "defaults": {
      "model": "claude-opus-4-5"
    }
  }
}
Get API Key: console.anthropic.com
Keywords: anthropic, claude
Features:
  • Native Claude models
  • Prompt caching support
  • No prefix required
OpenAI

Direct access to GPT models.
Configuration:
{
  "providers": {
    "openai": {
      "apiKey": "sk-xxx"
    }
  },
  "agents": {
    "defaults": {
      "model": "gpt-4o"
    }
  }
}
Get API Key: platform.openai.com
Keywords: openai, gpt
DeepSeek

DeepSeek models (China-based).
Configuration:
{
  "providers": {
    "deepseek": {
      "apiKey": "sk-xxx"
    }
  },
  "agents": {
    "defaults": {
      "model": "deepseek-chat"
    }
  }
}
Get API Key: platform.deepseek.com
Model Prefix: deepseek/ (auto-added)
Keywords: deepseek
Gemini

Google’s Gemini models.
Configuration:
{
  "providers": {
    "gemini": {
      "apiKey": "your-key"
    }
  },
  "agents": {
    "defaults": {
      "model": "gemini-pro"
    }
  }
}
Get API Key: aistudio.google.com
Model Prefix: gemini/ (auto-added)
Moonshot

Moonshot AI’s Kimi models.
Configuration:
{
  "providers": {
    "moonshot": {
      "apiKey": "sk-xxx",
      "apiBase": "https://api.moonshot.ai/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "kimi-k2.5"
    }
  }
}
Get API Key: platform.moonshot.cn
Model Prefix: moonshot/ (auto-added)
Default API Base: https://api.moonshot.ai/v1 (international) or https://api.moonshot.cn/v1 (China)
Note: Kimi K2.5 requires temperature >= 1.0 (enforced automatically).
Zhipu

Zhipu AI’s GLM models.
Configuration:
{
  "providers": {
    "zhipu": {
      "apiKey": "your-key"
    }
  },
  "agents": {
    "defaults": {
      "model": "glm-4"
    }
  }
}
Get API Key: open.bigmodel.cn
Model Prefix: zai/ (auto-added)
Keywords: zhipu, glm, zai
Coding Plan: For the Zhipu coding plan, set "apiBase": "https://open.bigmodel.cn/api/coding/paas/v4"
DashScope

Alibaba Cloud’s Qwen models.
Configuration:
{
  "providers": {
    "dashscope": {
      "apiKey": "sk-xxx"
    }
  },
  "agents": {
    "defaults": {
      "model": "qwen-max"
    }
  }
}
Get API Key: dashscope.console.aliyun.com
Model Prefix: dashscope/ (auto-added)
Keywords: qwen, dashscope
MiniMax

MiniMax models.
Configuration:
{
  "providers": {
    "minimax": {
      "apiKey": "your-key"
    }
  },
  "agents": {
    "defaults": {
      "model": "MiniMax-M2.1"
    }
  }
}
Get API Key: platform.minimaxi.com
Default API Base: https://api.minimax.io/v1 (international) or https://api.minimaxi.com/v1 (China)
Note: For the mainland China API, set "apiBase": "https://api.minimaxi.com/v1"
Groq

High-speed inference for voice transcription (Whisper) and LLMs.
Configuration:
{
  "providers": {
    "groq": {
      "apiKey": "gsk_xxx"
    }
  }
}
Get API Key: console.groq.com
Use Case: Mainly used for free Whisper voice transcription in Telegram/Discord.

OAuth Providers

Providers that use OAuth authentication instead of API keys.
OpenAI Codex

ChatGPT Codex models via OAuth (requires ChatGPT Plus/Pro).
Login:
nanobot provider login openai-codex
Configuration:
{
  "agents": {
    "defaults": {
      "model": "openai-codex/gpt-5.1-codex"
    }
  }
}
Requirements: ChatGPT Plus or Pro subscription
GitHub Copilot

GitHub Copilot models via OAuth.
Login:
nanobot provider login github-copilot
Configuration:
{
  "agents": {
    "defaults": {
      "model": "github_copilot/gpt-4o"
    }
  }
}

Local/Self-Hosted

vLLM

Self-hosted models via vLLM or any OpenAI-compatible server.
Start Server:
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
Configuration:
{
  "providers": {
    "vllm": {
      "apiKey": "dummy",
      "apiBase": "http://localhost:8000/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "meta-llama/Llama-3.1-8B-Instruct"
    }
  }
}
Note: API key can be any non-empty string for local servers.
Custom

Connect to any OpenAI-compatible endpoint (LM Studio, llama.cpp, Together AI, Fireworks, Azure OpenAI).
Configuration:
{
  "providers": {
    "custom": {
      "apiKey": "your-api-key",
      "apiBase": "https://api.your-provider.com/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "your-model-name"
    }
  }
}
Note: Model name is passed as-is without LiteLLM transformation. For local servers without authentication, set "apiKey": "no-key".

Example: Multi-Provider Setup

{
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-v1-xxx"
    },
    "anthropic": {
      "apiKey": "sk-ant-xxx"
    },
    "groq": {
      "apiKey": "gsk_xxx"
    }
  },
  "agents": {
    "defaults": {
      "model": "anthropic/claude-opus-4-5",
      "provider": "openrouter"
    }
  }
}
This setup:
  • Uses OpenRouter as the primary LLM gateway
  • Has Anthropic configured for direct Claude access
  • Enables Groq for voice transcription

Provider Selection Priority

  1. Explicit provider prefix in model name: github-copilot/model always uses GitHub Copilot
  2. Model name keywords: claude-opus auto-matches Anthropic
  3. Gateway detection: API key prefix (sk-or-) or base URL keyword
  4. Fallback: First configured provider with an API key

Next Steps

Channels

Connect nanobot to chat platforms

Security

Configure access controls and sandboxing
