## Overview
ZeroClaw supports custom API endpoints for both OpenAI-compatible and Anthropic-compatible providers. This enables integration with:

- Local LLM servers (llama.cpp, SGLang, vLLM, Ollama)
- Corporate AI gateways
- Third-party OpenAI/Anthropic proxies
- Self-hosted models
## Provider Types
### OpenAI-Compatible (`custom:`)
For services implementing the OpenAI API format:
Provider prefix: `custom:https://your-api.com`
Endpoints:
- `POST /chat/completions` - chat completion
- `GET /models` - model discovery (optional)
### Anthropic-Compatible (`anthropic-custom:`)
For services implementing the Anthropic API format:
Provider prefix: `anthropic-custom:https://your-api.com`
Endpoints:
- `POST /v1/messages` - chat completion
## Configuration Methods
### Config File
Edit `~/.zeroclaw/config.toml`:
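A sketch of what this file might contain. Of these key names, only `api_key` appears elsewhere on this page; `provider` and `model` are assumed for illustration:

```toml
# Hypothetical ~/.zeroclaw/config.toml for a custom endpoint.
provider = "custom:https://your-api.com"   # or "anthropic-custom:https://your-api.com"
model = "my-model"
api_key = "sk-your-key"                    # optional; environment variables also work
```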
### Environment Variables

For custom providers, use the generic API key variables:
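For example, with either of the two generic variable names from the credential-resolution order on this page (`ZEROCLAW_API_KEY` is listed ahead of `API_KEY`):

```shell
# Generic key variables for custom: and anthropic-custom: providers.
export ZEROCLAW_API_KEY="sk-your-key"
# Lower-precedence fallback:
export API_KEY="sk-your-key"
```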
### API Mode (OpenAI-compatible only)

Control which endpoint is called first. Note that `provider_api` is only valid when using `custom:<url>`.
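A sketch of the setting, assuming `provider_api` names the API surface to try first; the accepted values shown are guesses, not confirmed by this page:

```toml
provider = "custom:https://your-api.com"
# Hypothetical values: call /chat/completions first ("chat")
# or the Responses API first ("responses").
provider_api = "chat"
```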
## First-Class Local Providers
ZeroClaw includes dedicated providers for common local servers with optimized defaults.

### llama.cpp Server
Provider ID: `llamacpp` (alias: `llama.cpp`)
Default endpoint: http://localhost:8080/v1
Setup:
Set `LLAMACPP_API_KEY` only if the server is started with `--api-key`.
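A minimal sketch, assuming hypothetical `provider`/`model` config keys (`llama-server` is llama.cpp's bundled OpenAI-compatible server binary):

```toml
# Start the server first, e.g.:  llama-server -m ./model.gguf --port 8080
# (add --api-key <key> and export LLAMACPP_API_KEY=<key> to require auth)
provider = "llamacpp"             # alias: "llama.cpp"
model = "qwen2.5-7b-instruct"     # illustrative; use whatever model the server loaded
```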
### SGLang Server
Provider ID: `sglang`
Default endpoint: http://localhost:30000/v1
Setup:
For tool calling, pass the `--tool-call-parser` flag when launching SGLang.
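A sketch under the same assumptions (hypothetical `provider`/`model` config keys; the launch command follows SGLang's documented CLI, but verify against your installed version):

```toml
# Start SGLang first, e.g.:
#   python -m sglang.launch_server --model-path Qwen/Qwen2.5-7B-Instruct \
#       --port 30000 --tool-call-parser qwen25
provider = "sglang"
model = "Qwen/Qwen2.5-7B-Instruct"   # illustrative
```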
### vLLM Server
Provider ID: `vllm`
Default endpoint: http://localhost:8000/v1
Setup:
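A sketch (hypothetical `provider`/`model` config keys; `vllm serve` is vLLM's standard launcher for its OpenAI-compatible server):

```toml
# Start vLLM first, e.g.:  vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
provider = "vllm"
model = "meta-llama/Llama-3.1-8B-Instruct"   # must match the served model
```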
### Hunyuan (Tencent)
Provider ID: `hunyuan` (alias: `tencent`)
Base URL: https://api.hunyuan.cloud.tencent.com/v1
Setup:
Models: `hunyuan-t1-latest`, `hunyuan-turbo-latest`, `hunyuan-pro`
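A sketch (hypothetical `provider`/`model`/`api_key` config keys; this page does not name a Hunyuan-specific key variable, so the key is set in config here):

```toml
provider = "hunyuan"            # alias: "tencent"
model = "hunyuan-turbo-latest"
api_key = "your-hunyuan-key"
```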
## OpenAI Responses API (WebSocket)
For OpenAI-compatible endpoints:

Auto mode: When a `custom:` endpoint resolves to `api.openai.com`, ZeroClaw tries WebSocket first (`wss://.../responses`) and falls back to HTTP.
Manual override:
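Presumably a config switch; a sketch assuming the `provider_api` setting from the API Mode section accepts a `"responses"` value (unverified):

```toml
provider = "custom:https://api.openai.com/v1"
provider_api = "responses"   # hypothetical value forcing the Responses/WebSocket path
```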
## Credential Resolution
For `custom:` and `anthropic-custom:` providers:
1. Explicit `api_key` from config
2. `ZEROCLAW_API_KEY` environment variable
3. `API_KEY` environment variable
Provider-specific variables (e.g. `OPENAI_API_KEY`) are not used for custom endpoints.
## Testing Configuration
### Verify Endpoint
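A generic probe against the illustrative `https://your-api.com`, assuming a bearer-token endpoint with the key in `$API_KEY` (the `|| echo` keeps the command from aborting scripts when the endpoint is down):

```shell
# Send a minimal OpenAI-format request to confirm auth and routing end to end.
curl -s --max-time 10 https://your-api.com/chat/completions \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"my-model","messages":[{"role":"user","content":"ping"}]}' \
  || echo "endpoint unreachable"
```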
### Model Discovery
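For endpoints that implement the optional `/models` route (same illustrative URL and `$API_KEY` assumption):

```shell
# Ask the endpoint which model IDs it serves.
curl -s --max-time 10 https://your-api.com/models \
  -H "Authorization: Bearer $API_KEY" \
  || echo "no /models endpoint"
```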
### Health Check
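Many local servers, including llama.cpp and vLLM, expose a `/health` route on the same port:

```shell
# Quick liveness probe for a local server on :8080.
curl -s --max-time 5 http://localhost:8080/health || echo "server not running"
```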
## Troubleshooting
### Authentication Errors
Symptom: `401 Unauthorized`
Solution:
- Verify the API key is correct
- Check the endpoint URL format (must include `http://` or `https://`)
- Ensure the endpoint is accessible from your network
### Model Not Found
Symptom: `404 Model not found`
Solution:
- Verify the model name matches the provider's available models
- List available models with `GET /models`
- For gateways that don't implement `/models`, send a test request and check the error message
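A sketch of both checks against the illustrative `https://your-api.com` (`$API_KEY` assumed exported; `|| echo` keeps failures non-fatal):

```shell
# 1) What does the provider actually serve?
curl -s https://your-api.com/models -H "Authorization: Bearer $API_KEY" \
  || echo "models request failed"

# 2) Probe with your intended model; a 404 body often names valid models.
curl -s https://your-api.com/chat/completions \
  -H "Authorization: Bearer $API_KEY" -H "Content-Type: application/json" \
  -d '{"model":"my-model","messages":[{"role":"user","content":"hi"}]}' \
  || echo "chat request failed"
```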
### Connection Issues
Symptom: Connection timeout or refused

Solution:

- Test that the endpoint is reachable (e.g. with `curl`)
- Check firewall/proxy settings
- Verify provider status page
- Try with verbose logging enabled
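The ZeroClaw-side verbose flag isn't shown on this page, but `curl -v` surfaces the same network-level detail (DNS resolution, TLS handshake, proxy negotiation):

```shell
# Verbose connection trace to the endpoint.
curl -v --max-time 10 https://your-api.com/models || echo "connection failed"
```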
## Examples
### Local LLM Server
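A sketch for a local Ollama server, which exposes an OpenAI-compatible API under `/v1` on port 11434 (config key names are illustrative):

```toml
provider = "custom:http://localhost:11434/v1"
model = "llama3.1"
```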
### Corporate Proxy
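A sketch for a corporate gateway fronting an OpenAI-compatible API (URL, model, and key names are illustrative):

```toml
provider = "custom:https://ai-gateway.internal.example.com/v1"
model = "gpt-4o"
api_key = "corp-issued-token"
```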
### Cloud Provider Gateway
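A sketch for a gateway that speaks the Anthropic format, using the `anthropic-custom:` prefix (URL and model name are illustrative):

```toml
provider = "anthropic-custom:https://llm-gateway.example.com"
model = "claude-sonnet-4"
```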
### Multi-Provider Fallback

Combine custom endpoints with fallback providers.

## API Format Requirements
### OpenAI-Compatible

Request: `POST /chat/completions` with an OpenAI-format JSON body (`model`, `messages`).

### Anthropic-Compatible
Request: `POST /v1/messages` with an Anthropic-format JSON body (`model`, `max_tokens`, `messages`).

## Limitations
- Custom endpoints must match OpenAI or Anthropic API formats
- Provider-specific features may not be supported
- Model discovery depends on the endpoint implementing `/models`
- Tool calling support depends on endpoint compatibility
- Vision support depends on endpoint capabilities