Installation
Install LiteLLM using pip:
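LiteLLM is published on PyPI, so the standard command applies:

```shell
pip install litellm
```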
Basic Usage

LiteLLM provides a simple, unified interface to call any LLM. Set the appropriate environment variables and call the completion() function.
Supported Providers
LiteLLM supports 100+ providers. Here are examples of the most popular ones:
Async Support

LiteLLM provides async support out of the box:
Function Calling

LiteLLM standardizes function calling across all providers:
Error Handling

LiteLLM provides OpenAI-compatible exceptions:
Router with Fallbacks

The Router provides load balancing and automatic fallbacks:
Embeddings

Generate embeddings with any provider:
Image Generation

Generate images with supported providers:
What’s Next?

- Explore Providers: learn about all 100+ supported providers and their capabilities
- Caching: enable caching to reduce costs and improve response times
- Observability: integrate with Langfuse, Lunary, MLflow, and other observability tools
- Deploy Proxy: deploy the AI Gateway for team-wide LLM access
Need Help? Join our Discord community or check out the full documentation.