What are Providers?
LiteLLM provides a unified interface to call 100+ LLM providers using the OpenAI format. Instead of learning each provider’s unique API, you can use the same code structure across all providers.

Unified Interface
Use the same completion() function for all providers - just change the model name prefix.

OpenAI Format
All responses follow OpenAI’s format, making it easy to switch between providers
100+ Providers
Access models from OpenAI, Anthropic, AWS, Google, Azure, and many more
Provider Features
Streaming, function calling, vision, embeddings - all standardized across providers
Quick Start
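A minimal sketch, assuming litellm is installed (pip install litellm) and OPENAI_API_KEY / ANTHROPIC_API_KEY are set in your environment:

```python
from litellm import completion

# The call shape is identical for every provider; only the model prefix changes.
messages = [{"role": "user", "content": "Hello, how are you?"}]

openai_reply = completion(model="openai/gpt-4o", messages=messages)
claude_reply = completion(model="anthropic/claude-3-5-sonnet-20240620", messages=messages)

# Both responses follow the OpenAI format:
print(openai_reply.choices[0].message.content)
print(claude_reply.choices[0].message.content)
```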
Here’s how easy it is to use different providers: the call shape stays the same, and only the model name changes.

Supported Endpoints
LiteLLM standardizes access to multiple endpoint types:

| Endpoint | Description | Supported Providers |
|---|---|---|
| /chat/completions | Text generation | 100+ providers |
| /embeddings | Text embeddings | OpenAI, Azure, Bedrock, Cohere, Vertex AI, HuggingFace, and more |
| /images/generations | Image generation | OpenAI, Azure, Vertex AI, Bedrock, and more |
| /audio/transcriptions | Speech-to-text | OpenAI, Azure, Groq, Deepgram |
| /audio/speech | Text-to-speech | OpenAI, Azure, ElevenLabs |
| /moderations | Content moderation | OpenAI, Azure |
| /batches | Batch processing | OpenAI, Azure, Anthropic, Bedrock |
| /rerank | Document reranking | Cohere, HuggingFace, Bedrock |
Provider Categories
- Major Cloud Providers
- Specialized Providers
- Local & Self-Hosted
OpenAI
GPT-4o, GPT-4o-mini, O1, O3-mini, and more
Anthropic
Claude 4.6, Claude 3.7, Claude 3.5 Sonnet
AWS Bedrock
Claude, Llama, Mistral, Nova, and more on AWS
Google Vertex AI
Gemini 2.0, Gemini 1.5 Pro/Flash on Google Cloud
Azure OpenAI
GPT-4, GPT-3.5, and more on Azure
Key Features Across Providers
Streaming Support
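As a sketch, streaming uses the same stream=True flag on completion() for every provider (assumes OPENAI_API_KEY is set):

```python
from litellm import completion

# stream=True yields OpenAI-style chunks as they arrive.
response = completion(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Write a haiku about the sea"}],
    stream=True,
)
for chunk in response:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="")
```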
All major providers support streaming responses for real-time output.

Function Calling
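A hedged sketch using an OpenAI-format tool definition (the get_weather tool is hypothetical, and ANTHROPIC_API_KEY is assumed to be set):

```python
from litellm import completion

# OpenAI-format tool definitions work across providers.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = completion(
    model="anthropic/claude-3-5-sonnet-20240620",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
# Tool calls come back in the OpenAI response shape.
print(response.choices[0].message.tool_calls)
```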
LiteLLM standardizes function/tool calling across providers.

Vision/Multimodal
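A sketch of the OpenAI-style image message format (the image URL is a placeholder; assumes OPENAI_API_KEY is set):

```python
from litellm import completion

# Images are sent as OpenAI-style content blocks alongside text.
response = completion(
    model="openai/gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder URL
        ],
    }],
)
print(response.choices[0].message.content)
```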
Send images to vision-capable models using a consistent format.

Provider-Specific Features
While LiteLLM provides a unified interface, each provider has unique capabilities:

| Feature | Providers |
|---|---|
| Prompt Caching | Anthropic, Vertex AI |
| JSON Mode | OpenAI, Azure, Anthropic, Vertex AI |
| Vision Models | OpenAI, Anthropic, Vertex AI, Azure |
| Batch API | OpenAI, Azure, Anthropic, Bedrock |
| Reasoning Models | OpenAI (O1, O3), Anthropic (Claude 4.6) |
| Computer Use | Anthropic Claude |
| Web Search | Anthropic Claude, Perplexity |
Model Naming Convention
LiteLLM uses a provider/model-name format:
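A few illustrative model strings and how the prefix splits out (plain Python; the model names are examples only):

```python
# LiteLLM model strings take the form "<provider>/<model-name>".
examples = [
    "openai/gpt-4o",
    "anthropic/claude-3-5-sonnet-20240620",
    "bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
    "vertex_ai/gemini-1.5-pro",
]

for model in examples:
    provider, _, name = model.partition("/")
    print(f"provider={provider!r:14} model={name!r}")
```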
Authentication
Each provider has its own authentication method:

- API Keys
- Cloud Credentials
- Local Providers
Most providers use API keys set via environment variables:
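For example, keys can be exported in the shell or set from Python (placeholder values shown; each provider reads its own conventional variable name):

```python
import os

# Placeholder values - real keys come from each provider's console.
os.environ["OPENAI_API_KEY"] = "sk-..."
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."
os.environ["COHERE_API_KEY"] = "..."

# LiteLLM picks these up automatically on the next completion() call.
print(sorted(k for k in os.environ if k.endswith("_API_KEY")))
```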
Error Handling
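A sketch of catching standardized errors (the exception classes are assumed to be importable from litellm and to mirror OpenAI's exception names; an API key is assumed to be set):

```python
import litellm
from litellm import completion

try:
    completion(
        model="openai/gpt-4o",
        messages=[{"role": "user", "content": "Hello"}],
    )
except litellm.AuthenticationError as exc:
    # Same exception class regardless of which provider rejected the key.
    print(f"Bad or missing API key: {exc}")
except litellm.RateLimitError as exc:
    print(f"Rate limited, retry later: {exc}")
```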
LiteLLM standardizes error handling across all providers.

Next Steps
OpenAI
Get started with OpenAI models
Anthropic
Use Claude models with advanced features
Streaming
Learn about streaming responses
Function Calling
Implement tool and function calling
Additional Resources
- Complete Provider List - Full list of 100+ supported providers
- Model Prices - Browse all available models with pricing
- Proxy Server - Use providers through the LiteLLM Gateway
- Router - Load balancing and fallbacks across providers