## Supported Providers

Helicone works with all major LLM providers:

- **OpenAI**: GPT-4, GPT-3.5, and more
- **Anthropic**: Claude models
- **Azure OpenAI**: Enterprise OpenAI deployment
- **Google Vertex AI**: Gemini and PaLM models
- **AWS Bedrock**: Multiple model providers
- **Together AI**: Open source models
## Integration Methods

### Proxy Integration

The simplest way to integrate. Just change your API base URL:

- No code changes beyond configuration
- Works with any SDK
- Real-time logging
- Full request/response capture
### Async Integration

Log requests asynchronously without affecting latency:

- Zero latency impact
- Uses your existing provider keys
- Background processing
### Gateway Integration

Route through multiple providers with failover and load balancing:

- Automatic failover between providers
- Load balancing
- Single API key for all providers
- Cost optimization
## Framework Integrations

Helicone integrates with popular AI frameworks:

- **LangChain**: Full chain observability
- **Vercel AI SDK**: Streaming and edge support
- **LlamaIndex**: RAG pipeline tracking
- **Instructor**: Structured output logging
## Quick Start by Provider
## Getting Your API Key

To use any integration method, you'll need a Helicone API key:

1. **Sign up**: Create an account at helicone.ai
## Next Steps

- **OpenAI Integration**: Complete setup guide for OpenAI
- **Anthropic Integration**: Integrate with Claude models
- **Custom Headers**: Add metadata and properties
- **Caching**: Enable request caching