Anthropic SDK Integration
LLM Gateway supports the Anthropic SDK in two ways:
- OpenAI-Compatible Format: Use the OpenAI SDK format (recommended)
- Native Anthropic Format: Use the native /v1/messages endpoint
OpenAI-Compatible Format (Recommended)
The easiest way to use LLM Gateway with Anthropic models is through the OpenAI SDK format:
Native Anthropic Format
You can also use LLM Gateway’s native Anthropic /v1/messages endpoint with the official Anthropic SDK:
Streaming with Native Format
Tool Use (Function Calling)
Before and After Comparison
Extended Thinking (Reasoning)
Claude 3.7 Sonnet supports extended thinking for complex reasoning tasks.

Prompt Caching
Anthropic’s prompt caching is automatically supported.

Model Selection
When using the native Anthropic format, use Anthropic’s model names.

Comparison: OpenAI vs Native Format
| Feature | OpenAI Format | Native Anthropic Format |
|---|---|---|
| Endpoint | /v1/chat/completions | /v1/messages |
| SDK | OpenAI SDK | Anthropic SDK |
| Response Format | OpenAI-compatible | Anthropic native |
| Streaming | ✅ Supported | ✅ Supported |
| Tool Use | ✅ Supported | ✅ Supported |
| Prompt Caching | ✅ Automatic | ✅ Full control |
| Extended Thinking | Via reasoning_effort | Via thinking parameter |
| Multi-provider | ✅ Works with all providers | ❌ Anthropic only |
Caveats and Limitations
- System Messages: In native format, system messages use a separate system parameter, not the messages array
- Max Tokens: max_tokens is required in native format but optional in OpenAI format
- Response Structure: Native format returns Anthropic’s response structure with different field names
- Provider Lock-in: Native format only works with Anthropic models; OpenAI format supports all providers