Overview
Portkey brings production readiness to LangChain applications:
- Connect to 250+ models through a unified API
- View 42+ metrics & logs for all requests
- Enable semantic cache to reduce latency & costs
- Implement automatic retries & fallbacks
- Add custom tags for better tracking and analysis
Installation
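A minimal setup installs the LangChain OpenAI integration alongside the Portkey SDK (package names assumed to be `langchain-openai` and `portkey-ai`):

```shell
pip install -U langchain-openai portkey-ai
```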
Quick Start
Since Portkey is fully compatible with the OpenAI signature, you can connect through LangChain’s ChatOpenAI interface.
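In practice this means pointing ChatOpenAI at Portkey’s gateway URL and passing Portkey’s identifying headers. A sketch (header names follow Portkey’s `x-portkey-*` convention; the key values are placeholders, and the `portkey_ai` SDK’s `createHeaders` helper can build this dict for you):

```python
# Portkey's OpenAI-compatible gateway endpoint.
PORTKEY_GATEWAY_URL = "https://api.portkey.ai/v1"

# Headers identifying your Portkey account and the stored provider key.
# The portkey_ai SDK's createHeaders() helper builds an equivalent dict.
portkey_headers = {
    "x-portkey-api-key": "YOUR_PORTKEY_API_KEY",
    "x-portkey-virtual-key": "YOUR_OPENAI_VIRTUAL_KEY",
}
# Switching providers later is largely a matter of swapping the virtual key.

# With langchain-openai installed, the client is constructed as:
#
#   from langchain_openai import ChatOpenAI
#   llm = ChatOpenAI(
#       api_key="X",  # unused; auth happens via the Portkey headers
#       base_url=PORTKEY_GATEWAY_URL,
#       default_headers=portkey_headers,
#   )
#   llm.invoke("Hello!")
```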
Get Your API Keys
Sign up at Portkey and get your API key. Add your LLM provider API key as a Virtual Key in Portkey.
Switching Providers
One of Portkey’s key benefits is easy provider switching. Change providers with just two lines of code.
Advanced Routing
Use Portkey’s gateway configs for load balancing, fallbacks, and retries.
Load Balancing
Distribute traffic between multiple models or providers.
Fallback Strategy
Automatically fall back to another provider on failures.
Automatic Retries
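The routing behaviors above are expressed as Portkey gateway configs, passed to the gateway as JSON (for example in an `x-portkey-config` header) or saved in the Portkey dashboard and referenced by id. A sketch of the three strategies; the virtual key names, weights, retry counts, and status codes are illustrative:

```python
import json

# Load balancing: split traffic between two stored provider keys by weight.
loadbalance_config = {
    "strategy": {"mode": "loadbalance"},
    "targets": [
        {"virtual_key": "openai-virtual-key", "weight": 0.7},
        {"virtual_key": "anthropic-virtual-key", "weight": 0.3},
    ],
}

# Fallback: try targets in order, moving on when a provider errors out.
fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"virtual_key": "openai-virtual-key"},
        {"virtual_key": "anthropic-virtual-key"},
    ],
}

# Retries: re-attempt the same provider on transient failure codes.
retry_config = {
    "retry": {"attempts": 3, "on_status_codes": [429, 500, 502, 503]},
}

# A config travels to the gateway as a JSON string header, e.g.:
#   headers["x-portkey-config"] = json.dumps(fallback_config)
config_header = json.dumps(retry_config)
```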
LangChain Chains and Agents
Portkey works seamlessly with LangChain chains and agents.
Adding Metadata and Tracing
Enhance observability with metadata and custom traces.
Caching
Enable semantic caching to reduce costs and latency.
Streaming
Portkey supports streaming responses.
Monitoring and Analytics
All requests through Portkey are automatically logged. View detailed analytics in the Portkey dashboard:
- Request/response logs
- Token usage and costs
- Latency metrics
- Error rates
- Custom metadata filters
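The metadata, tracing, and caching options described above travel as Portkey headers alongside each request, and the metadata and trace ids surface as filters in these dashboard views. A sketch (header names follow Portkey’s `x-portkey-*` convention; the metadata fields, trace id, and cache settings are illustrative):

```python
import json

# Custom metadata: appears as filterable fields in Portkey's logs.
metadata = {"user_id": "user_123", "environment": "production"}

# Cache config: semantic mode reuses answers for similar prompts;
# max_age is the cache TTL in seconds.
cache_config = {"cache": {"mode": "semantic", "max_age": 3600}}

observability_headers = {
    "x-portkey-metadata": json.dumps(metadata),
    "x-portkey-trace-id": "checkout-flow-42",  # groups related requests
    "x-portkey-config": json.dumps(cache_config),
}
```

These headers can be merged into the `default_headers` dict passed to ChatOpenAI, or supplied per request.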
Best Practices
Use Virtual Keys
Store your provider API keys as Virtual Keys in Portkey for better security and key rotation.
Implement Fallbacks
Always configure fallback providers for production applications to handle outages.
Enable Caching
Use semantic caching for FAQ and support use cases to reduce costs by up to 50%.
Add Metadata
Tag requests with user IDs, session IDs, and environment info for better debugging.
Example: Complete RAG Application
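At its core, a RAG application retrieves the documents most relevant to a question, stuffs them into the prompt, and sends that prompt through the Portkey-backed model. A toy sketch, with word-overlap scoring standing in for a real embedding store (all names illustrative):

```python
# Tiny in-memory corpus standing in for a real document store.
docs = [
    "Portkey logs every request and response for later analysis.",
    "Semantic caching returns stored answers for similar prompts.",
    "Fallback configs route traffic to a backup provider on errors.",
]

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the question (a stand-in
    for embedding similarity search) and return the top k."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

question = "How does semantic caching work?"
context = "\n".join(retrieve(question, docs))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# With the Portkey-backed client from the quick start:
#   answer = llm.invoke(prompt)
```

A production version would swap the toy retriever for a LangChain vector store and embeddings, while the Portkey wiring stays unchanged.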
Resources
Questions? Join our Discord community or reach out to support.