Overview
LLM Analytics offers:
- Trace collection - Capture LLM calls, tool usage, and conversation flows
- Automated evaluations - LLM-as-judge for quality, hallucinations, and toxicity
- Provider proxy - Route requests through PostHog with provider key management
- Clustering - Group similar traces to identify patterns
- Cost tracking - Monitor token usage and costs across providers
Getting Started
Install SDK
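The examples in this guide assume the Python SDK, installed from PyPI (PostHog also ships SDKs for other languages):

```shell
pip install posthog
```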
Capture LLM Traces
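As a sketch of what a captured generation looks like, the helper below builds an event payload using PostHog's `$ai_*` property conventions. The exact property set depends on your SDK version; in a real app you would send this with `posthog.capture`, or use the SDK's provider wrappers, which capture it automatically.

```python
import uuid

def build_generation_event(model, provider, input_tokens, output_tokens, latency_s, trace_id=None):
    """Build the properties dict for a $ai_generation event (property names
    follow PostHog's conventions but should be verified against your SDK)."""
    return {
        "event": "$ai_generation",
        "properties": {
            "$ai_trace_id": trace_id or str(uuid.uuid4()),
            "$ai_model": model,
            "$ai_provider": provider,
            "$ai_input_tokens": input_tokens,
            "$ai_output_tokens": output_tokens,
            "$ai_latency": latency_s,
        },
    }

event = build_generation_event("gpt-4", "openai", 120, 48, 1.7)
# In a real app: posthog.capture(distinct_id, event["event"], event["properties"])
```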
Track Tool Usage
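A hedged sketch of recording one tool invocation as its own span event. The `$ai_span` event name and the state property names here are assumptions; verify them against your SDK before relying on them.

```python
def build_tool_call_event(trace_id, tool_name, arguments, output):
    """Build a span event for one tool invocation (property names are assumptions)."""
    return {
        "event": "$ai_span",
        "properties": {
            "$ai_trace_id": trace_id,
            "$ai_span_name": tool_name,
            "$ai_input_state": arguments,
            "$ai_output_state": output,
        },
    }

span = build_tool_call_event("trace-1", "get_weather", {"city": "Berlin"}, {"temp_c": 18})
# Send with posthog.capture(distinct_id, span["event"], span["properties"]) in a real app.
```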
Capture function calls and tool usage.

Automated Evaluations
Run LLM-as-judge evaluations on your traces.

Creating an Evaluation
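A minimal LLM-as-judge loop looks like the sketch below; `call_judge_model` is a hypothetical stand-in for whatever client you use to reach the judge model.

```python
JUDGE_PROMPT = (
    "You are a strict evaluator. Given a question and an answer, reply with "
    "PASS if the answer is grounded and correct, otherwise FAIL with a short reason."
)

def parse_verdict(judge_reply: str) -> bool:
    """Map the judge model's free-text reply to a boolean verdict."""
    return judge_reply.strip().upper().startswith("PASS")

def evaluate(question, answer, call_judge_model):
    # call_judge_model(system, user) -> str is a hypothetical client function.
    reply = call_judge_model(JUDGE_PROMPT, f"Question: {question}\nAnswer: {answer}")
    return {"passed": parse_verdict(reply), "raw": reply}

# Stubbed judge for illustration:
result = evaluate("2+2?", "4", lambda system, user: "PASS - arithmetically correct")
```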
Evaluation Types
Model Configuration
Configure evaluation models. Supported providers:
- OpenAI (GPT-3.5, GPT-4)
- Anthropic (Claude)
- Google (Gemini)
- OpenRouter (multiple models)
- Fireworks AI
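A small helper restricted to the providers listed above; the config shape itself is an illustrative assumption, not PostHog's schema.

```python
SUPPORTED_PROVIDERS = {"openai", "anthropic", "google", "openrouter", "fireworks"}

def make_eval_model_config(provider: str, model: str, temperature: float = 0.0) -> dict:
    """Validate the provider and build an evaluation-model config (shape is illustrative)."""
    if provider not in SUPPORTED_PROVIDERS:
        raise ValueError(f"unsupported provider: {provider}")
    return {"provider": provider, "model": model, "temperature": temperature}

config = make_eval_model_config("anthropic", "claude-3-5-sonnet")
```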
Clustering
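Conceptually, clustering embeds each trace and groups vectors that exceed a similarity threshold. The greedy sketch below illustrates the idea; PostHog's actual algorithm may differ.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def cluster_embeddings(vectors, threshold=0.8):
    """Greedy clustering: join the first cluster whose representative is similar enough."""
    clusters = []  # each cluster is a list of indices; the first index is the representative
    for i, v in enumerate(vectors):
        for cluster in clusters:
            if cosine(vectors[cluster[0]], v) >= threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters

groups = cluster_embeddings([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
```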
Automatically group similar traces.

Provider Proxy
Route LLM requests through PostHog for unified tracking.

Setup Provider Keys
Use Proxy
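A sketch of pointing an OpenAI-style chat request at the proxy using only the standard library. The endpoint path and header names here are assumptions; use the URL shown in your PostHog project settings.

```python
import json
import urllib.request

# Hypothetical endpoint: substitute the proxy URL from your PostHog project settings.
PROXY_URL = "https://app.posthog.com/api/llm_proxy/chat/completions"

def build_proxied_request(posthog_api_key: str, payload: dict) -> urllib.request.Request:
    """Build a chat-completion request routed through the proxy (not yet sent)."""
    return urllib.request.Request(
        PROXY_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {posthog_api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_proxied_request(
    "YOUR_POSTHOG_KEY",
    {"model": "gpt-4", "messages": [{"role": "user", "content": "Hello"}]},
)
# urllib.request.urlopen(req) would send it.
```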
Benefits
- Automatic tracing - No manual instrumentation needed
- Cost tracking - Monitor spend across providers
- Provider fallback - Automatic failover to backup providers
- Rate limiting - Shared rate limit management
Datasets
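A dataset for offline evaluation is essentially a list of input/expected pairs run against your app. The dataset shape and the `generate` callable below are illustrative.

```python
dataset = [
    {"input": "What is the capital of France?", "expected": "Paris"},
    {"input": "What is 2 + 2?", "expected": "4"},
]

def run_offline_eval(dataset, generate):
    """Run each case through `generate` (your LLM call) and score a naive substring match."""
    results = []
    for case in dataset:
        output = generate(case["input"])
        results.append({
            "input": case["input"],
            "output": output,
            "passed": case["expected"].lower() in output.lower(),
        })
    return results

# Stubbed generator for illustration:
report = run_offline_eval(dataset, lambda q: "Paris" if "France" in q else "4")
```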
Create datasets for offline evaluation.

Metrics and Monitoring
Trace Metrics
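Key per-trace metrics can be derived from captured events. A sketch over a list of generation events; the `$ai_*` property names follow PostHog's conventions but should be verified against your SDK.

```python
def trace_metrics(events):
    """Aggregate token and latency metrics from $ai_generation events."""
    gens = [e for e in events if e.get("event") == "$ai_generation"]
    input_tokens = sum(e["properties"].get("$ai_input_tokens", 0) for e in gens)
    output_tokens = sum(e["properties"].get("$ai_output_tokens", 0) for e in gens)
    latencies = [e["properties"].get("$ai_latency", 0.0) for e in gens]
    return {
        "generations": len(gens),
        "total_tokens": input_tokens + output_tokens,
        "avg_latency_s": sum(latencies) / len(latencies) if latencies else 0.0,
    }

metrics = trace_metrics([
    {"event": "$ai_generation",
     "properties": {"$ai_input_tokens": 100, "$ai_output_tokens": 40, "$ai_latency": 1.0}},
    {"event": "$ai_generation",
     "properties": {"$ai_input_tokens": 60, "$ai_output_tokens": 20, "$ai_latency": 3.0}},
])
```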
Track key metrics.

Error Tracking
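Failures can be tracked by flagging events and computing an error rate. The `$ai_is_error` and `$ai_error` property names below are assumptions; check the properties your SDK actually emits.

```python
def error_rate(events):
    """Fraction of events flagged as errors (property names are assumptions)."""
    if not events:
        return 0.0
    errors = sum(1 for e in events if e.get("properties", {}).get("$ai_is_error"))
    return errors / len(events)

rate = error_rate([
    {"properties": {"$ai_is_error": True, "$ai_error": "rate_limit"}},
    {"properties": {}},
    {"properties": {}},
    {"properties": {}},
])
```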
Monitor errors and failures.

Latency Tracking
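Latency can be exposed in Prometheus text exposition format. Below is a dependency-free sketch of rendering a histogram; in practice you would use `prometheus_client`'s `Histogram` and let it track observations for you.

```python
def render_latency_histogram(name, buckets, latencies):
    """Render observed latencies as a Prometheus histogram in text exposition format."""
    lines = [f"# TYPE {name} histogram"]
    for le in buckets:  # ascending upper bounds, in seconds
        count = sum(1 for v in latencies if v <= le)
        lines.append(f'{name}_bucket{{le="{le}"}} {count}')
    lines.append(f'{name}_bucket{{le="+Inf"}} {len(latencies)}')
    lines.append(f"{name}_sum {sum(latencies)}")
    lines.append(f"{name}_count {len(latencies)}")
    return "\n".join(lines)

text = render_latency_histogram("llm_request_latency_seconds", [0.5, 1.0, 2.5], [0.3, 0.9, 3.1])
```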
Export Prometheus metrics for latency.

Sentiment Analysis
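A toy lexicon-based sketch of scoring conversation sentiment. Real sentiment analysis typically uses a model; this only illustrates the signal being extracted.

```python
POSITIVE = {"thanks", "great", "perfect", "helpful", "solved"}
NEGATIVE = {"wrong", "broken", "frustrated", "useless", "bug"}

def conversation_sentiment(messages):
    """Classify a conversation from its messages via keyword counts (toy heuristic)."""
    score = 0
    for message in messages:
        words = message.lower().split()
        score += sum(w in POSITIVE for w in words)
        score -= sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

label = conversation_sentiment(["the answer was wrong", "still broken and useless"])
```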
Analyze conversation sentiment.

Summarization
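Trace summaries are typically generated by an LLM. The sketch below shows the non-LLM scaffolding: flattening a trace's events into a prompt, with `call_summarizer` as a hypothetical model client and `$ai_input`/`$ai_output` as assumed property names.

```python
def build_summary_prompt(events):
    """Flatten a trace's generation events into a summarization prompt
    (the $ai_input/$ai_output property names are assumptions)."""
    turns = []
    for e in events:
        if e.get("event") == "$ai_generation":
            props = e["properties"]
            turns.append(f"user: {props.get('$ai_input', '')}")
            turns.append(f"assistant: {props.get('$ai_output', '')}")
    return "Summarize this conversation in one sentence:\n" + "\n".join(turns)

def summarize_trace(events, call_summarizer):
    return call_summarizer(build_summary_prompt(events))

trace = [{"event": "$ai_generation",
          "properties": {"$ai_input": "Reset my password", "$ai_output": "Here is how..."}}]
# Stubbed summarizer for illustration:
summary = summarize_trace(trace, lambda prompt: "User asked how to reset a password.")
```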
Generate trace summaries.

Best Practices
Structured Traces
Use consistent trace naming and include relevant metadata. This makes filtering,
clustering, and analysis much more effective.
Evaluation Coverage
Start with a few key evaluations (quality, hallucination) rather than many.
Refine based on actual issues you discover.
Cost Monitoring
Set up alerts for unusual cost spikes. Track cost per trace to identify
expensive patterns early.
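The per-trace cost check described above can be sketched as a simple threshold filter. Cost figures would come from your captured cost properties; the names and threshold here are illustrative.

```python
def flag_expensive_traces(trace_costs, threshold_usd=0.50):
    """Return trace ids whose total cost exceeds the alert threshold, costliest first."""
    flagged = [(tid, cost) for tid, cost in trace_costs.items() if cost > threshold_usd]
    return [tid for tid, cost in sorted(flagged, key=lambda pair: pair[1], reverse=True)]

alerts = flag_expensive_traces({"t1": 0.12, "t2": 1.80, "t3": 0.75})
```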
Provider Keys
Use the proxy with BYOK (bring your own keys) for production. This gives you
automatic tracing without SDK changes.