## Available wrappers

- OpenAI: Wrap OpenAI and Azure OpenAI clients
- Anthropic: Wrap Anthropic Claude clients (experimental)
- Gemini: Wrap Google Gemini clients (beta)
## Quick comparison
| Wrapper | Status | Provider Detection | Streaming | Tool Calling | Usage Tracking |
|---|---|---|---|---|---|
| OpenAI | Stable | ✅ (OpenAI/Azure) | ✅ | ✅ | ✅ (w/ cache) |
| Anthropic | Experimental | ✅ | ✅ | ✅ | ✅ (w/ cache) |
| Gemini | Beta | ✅ | ✅ | ✅ | ✅ |
## When to use wrappers

Wrappers are ideal when:

- You want automatic tracing with minimal code changes
- You’re using supported LLM SDKs directly
- You want automatic metadata extraction (model, tokens, etc.)
- You need streaming support
## When to use traceable()

Use `traceable()` instead when:
- You’re building custom chains or workflows
- You need fine-grained control over traces
- You’re not using a supported SDK
- You want to trace non-LLM operations
## Common usage pattern

All wrappers follow the same pattern:
## Shared features

All wrappers provide:

### Automatic metadata extraction
- Provider name (openai, anthropic, google)
- Model name
- Model type (chat, llm)
- Temperature
- Max tokens
- Stop sequences
### Usage tracking
- Input tokens
- Output tokens
- Total tokens
- Cache hits (when applicable)
- Reasoning/thinking tokens (when applicable)
### Streaming support

All wrappers handle streaming responses and aggregate them in the trace:
## Custom metadata

Pass additional metadata per-call:
## Nested tracing

Wrappers work seamlessly with `traceable()`:
## Configuration options

All wrappers accept the same configuration:
## Error handling

Wrappers automatically log errors to LangSmith:
## Combining wrappers

You can use multiple wrappers in the same application:
## Performance considerations

Wrappers add minimal overhead:

- Tracing is asynchronous and non-blocking
- Background batching reduces network calls
- No impact on streaming performance
## Migration guide
### From unwrapped to wrapped
### From traceable to wrapper
## Best practices
- Wrap once, use everywhere: Create wrapped clients at module level
- Use per-call metadata: Add context-specific metadata via `langsmithExtra`
- Combine with traceable: Use wrappers for LLM calls, `traceable()` for chains
- Check wrapper status: Be aware of experimental/beta features
- Don’t double-wrap: Wrapping the same client twice will throw an error