Why Use AI Gateway?
One SDK for All Models
Use OpenAI SDK to access GPT, Claude, Gemini, and 100+ other models
No Rate Limits
Skip provider tier restrictions - use credits with 0% markup
Always Online
Automatic failover across providers keeps your app running
Unified Observability
Track usage, costs, and performance across all providers in one dashboard
How It Works
The AI Gateway sits between your application and LLM providers, acting as a unified translation layer:
- You make one request - Use the OpenAI SDK format, regardless of which provider you want
- We translate & route - Helicone converts your request to the correct provider format (Anthropic, Google, etc.)
- Provider responds - The LLM provider processes your request
- We log & return - You get the response back while we capture metrics, costs, and errors
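Concretely, the four steps above boil down to sending one OpenAI-format request to the gateway endpoint. A minimal stdlib sketch of what that request looks like (the `/v1/chat/completions` path, the model name, and the API-key value are illustrative assumptions, not confirmed here):

```python
import json
import urllib.request

# The gateway speaks the OpenAI chat-completions format for every model.
GATEWAY_URL = "https://ai-gateway.helicone.ai/v1/chat/completions"  # path assumed

def build_gateway_request(model: str, user_message: str, api_key: str):
    """Return (url, headers, body) for one gateway request.

    Only `model` changes between providers - the gateway translates the
    OpenAI-format body into each provider's native format itself.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",  # your Helicone API key
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return GATEWAY_URL, headers, body

def send(url, headers, body):
    # Plain stdlib HTTP POST, shown for completeness; in practice you would
    # point the OpenAI SDK at the gateway instead (see note below).
    req = urllib.request.Request(
        url, data=json.dumps(body).encode(), headers=headers, method="POST"
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

url, headers, body = build_gateway_request(
    "claude-3-5-sonnet", "Hello!", "sk-helicone-example"
)
print(body["model"])  # -> claude-3-5-sonnet
```

With the official `openai` SDK, the same thing is typically just `OpenAI(base_url="https://ai-gateway.helicone.ai", api_key=...)` - the SDK then builds and sends this request for you.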
All requests go through a single endpoint: https://ai-gateway.helicone.ai
With credits, we manage provider API keys for you. Your requests automatically work with OpenAI, Anthropic, Google, and 100+ other providers without signing up for each one.
Quick Example
Add two lines to your existing OpenAI code - the gateway base URL and your API key - to unlock 100+ models with automatic observability.
Key Features
Unified API Access
Access 100+ models from different providers using the same OpenAI-compatible format.
Automatic Provider Routing
The gateway automatically finds the best provider for your model:
- Multiple providers per model - Access the same model through OpenAI, Azure, AWS Bedrock, etc.
- Intelligent selection - Routes to the cheapest available provider
- BYOK priority - Your provider keys are always tried first
- Load balancing - Distributes requests across equal-cost providers
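The selection rules above can be pictured as a simple sort: BYOK entries first, then ascending cost, with ties shared by the load balancer. This is an illustrative sketch of the policy only, not Helicone's actual routing code (provider names and prices are made up):

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    cost_per_1m_tokens: float  # illustrative input-token price
    byok: bool                 # True if this entry uses your own key

def route(providers: list[Provider]) -> list[Provider]:
    """Order candidates per the rules above: BYOK first, then cheapest.

    Equal-cost entries keep a stable order here; in the real gateway,
    requests are load-balanced across them.
    """
    return sorted(providers, key=lambda p: (not p.byok, p.cost_per_1m_tokens))

candidates = [
    Provider("openai", 2.50, byok=False),
    Provider("azure", 2.50, byok=True),
    Provider("bedrock", 3.00, byok=False),
]
print([p.name for p in route(candidates)])  # -> ['azure', 'openai', 'bedrock']
```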
Automatic Fallbacks
Never worry about provider outages again - the gateway automatically retries your request with another provider when it sees:
- Rate limits (429)
- Authentication errors (401)
- Server errors (500+)
- Timeouts (408)
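In other words, when a provider responds with one of the statuses above, the gateway moves on to the next routed provider. A toy sketch of that failover loop (the error class and provider stubs are invented for illustration):

```python
class ProviderError(Exception):
    def __init__(self, status: int):
        super().__init__(f"provider returned {status}")
        self.status = status

def should_failover(status: int) -> bool:
    # The failover conditions listed above: rate limits (429),
    # auth errors (401), timeouts (408), and server errors (500+).
    return status in {401, 408, 429} or status >= 500

def call_with_fallback(providers, request):
    """Try providers in routed order, moving on after a failover-worthy error."""
    last_error = None
    for call in providers:
        try:
            return call(request)
        except ProviderError as err:
            if not should_failover(err.status):
                raise  # e.g. a 400 bad request won't improve elsewhere
            last_error = err
    raise last_error

def rate_limited(request):
    raise ProviderError(429)  # first provider is rate-limited

def healthy(request):
    return {"ok": True, "echo": request}

print(call_with_fallback([rate_limited, healthy], "hi"))  # -> {'ok': True, 'echo': 'hi'}
```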
Built-in Observability
Every request through the gateway is automatically logged with:
- Request/response data - Full conversation history
- Cost tracking - Accurate costs across all providers
- Performance metrics - Latency, tokens, and error rates
- Custom metadata - User tracking, sessions, properties
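Custom metadata travels as request headers alongside the call. A small helper showing the `Helicone-User-Id` / `Helicone-Session-Id` / `Helicone-Property-*` naming convention (treat the exact header names as assumptions to verify against the observability docs):

```python
def helicone_metadata(user_id=None, session_id=None, properties=None):
    """Build Helicone metadata headers to attach to a gateway request."""
    headers = {}
    if user_id:
        headers["Helicone-User-Id"] = user_id          # ties requests to a user
    if session_id:
        headers["Helicone-Session-Id"] = session_id    # groups a conversation
    for key, value in (properties or {}).items():
        headers[f"Helicone-Property-{key}"] = str(value)  # arbitrary tags
    return headers

extra = helicone_metadata("user-123", "sess-42", {"Environment": "staging"})
print(extra)
```

With the `openai` Python SDK you would pass these on each call via `extra_headers=`.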
Prompt Management Integration
Deploy and manage prompts without code changes.
Helicone vs OpenRouter
Helicone offers a complete platform for production AI applications, while OpenRouter focuses on simple model access.

| Feature | Helicone | OpenRouter |
|---|---|---|
| Pricing | 0% markup | 5.5% markup |
| Observability | Full-featured (sessions, users, custom properties, cost tracking) | Basic (requests/costs per model only) |
| Session Tracking | ✅ | ❌ |
| Prompt Management | ✅ | ❌ |
| Caching | ✅ | ❌ |
| Custom Rate Limits | ✅ | ❌ |
| LLM Security | ✅ | ❌ |
| Open Source | ✅ | ❌ |
| BYOK | ✅ | ✅ |
| Automatic Fallbacks | ✅ | ✅ |
Migrating from OpenRouter?
See our OpenRouter migration guide for step-by-step instructions.
Getting Started
Quick Start Guide
Set up the gateway and make your first request in 5 minutes
Browse Models
See all supported models and provider formats
Provider Routing
Configure automatic routing and fallbacks for reliability
Fallback Strategies
Build resilient apps with automatic provider failover
How Credits Work
Instead of managing API keys for each provider, Helicone maintains the keys for you:
- 0% markup - Pay exactly what providers charge
- No provider signup - Access 100+ models immediately
- Unified billing - Single invoice across all providers
- No rate limits - Skip provider tier restrictions
- Automatic fallbacks - Seamless failover between providers
Want to integrate a new model provider into the AI Gateway? Contact us on Discord or check our GitHub repository for contribution guidelines.