Observability Stack
JARVIS uses multiple layers of observability:
- Laminar - LLM call tracing and accuracy verification
- Loguru - Structured application logging
- FastAPI - Built-in request/response logging
- Metrics - Performance and throughput monitoring
Logging
Loguru Configuration
JARVIS uses Loguru for structured logging.
Log Levels
Configure the log level via the SPECTER_LOG_LEVEL environment variable (any of Loguru's standard levels: TRACE, DEBUG, INFO, SUCCESS, WARNING, ERROR, CRITICAL).
Structured Logging
Loguru supports structured logging with context.
Log Output
Development mode shows colorized console output.
Laminar Tracing
Laminar provides observability for LLM calls and agent behavior.
Setup
Get Laminar API key
Sign up at lmnr.ai and create a project.
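Then initialize the SDK once at application startup, before any LLM calls are made; a sketch using the lmnr Python package (LMNR_PROJECT_API_KEY is the variable used in the troubleshooting section below):

```python
import os

from lmnr import Laminar

# Initialize once at startup so subsequent LLM calls are traced.
Laminar.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"])
```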
Tracing Functions
Use the @observe decorator to trace functions:
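For example (the function body is a placeholder):

```python
from lmnr import observe

@observe()  # inputs, outputs, and latency are recorded as a span
def summarize(text: str) -> str:
    return text[:100]  # placeholder body
```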
Custom Trace Decorator
JARVIS provides a custom @traced decorator that combines logging and Laminar:
backend/observability/laminar.py
@traced decorator:
- Logs start and end with duration
- Sends spans to Laminar (if configured)
- Handles both sync and async functions
- Includes metadata and tags for filtering
Viewing Traces
Access the Laminar dashboard at app.lmnr.ai to:
- View LLM call traces with full prompts and responses
- Analyze token usage and costs
- Debug failed requests
- Track accuracy and hallucinations
- Monitor agent execution flows

Example Trace
A typical pipeline trace shows each stage of the request as a nested span, with per-span latency and token usage.
Performance Metrics
Built-in Metrics
The /health endpoint includes performance metrics:
Custom Metrics
Track custom metrics with the @traced decorator:
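Hypothetical usage, assuming @traced accepts name, tags, and metadata as the bullet list above suggests (the real signature lives in backend/observability/laminar.py and may differ):

```python
from backend.observability.laminar import traced  # project-internal import

@traced(name="rank_documents", tags=["retrieval"], metadata={"stage": "rerank"})
def rank_documents(query: str, docs: list[str]) -> list[str]:
    # Duration, tags, and metadata are recorded on every call.
    return sorted(docs, key=lambda d: d.count(query), reverse=True)
```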
Error Tracking
Exception Logging
Loguru automatically captures exception context.
Error Context
Add context to errors.
Sentry Integration (Optional)
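A typical setup with the sentry_sdk package; the DSN comes from your Sentry project settings, and the sample rate and environment values are illustrative:

```python
import os

import sentry_sdk

sentry_sdk.init(
    dsn=os.environ["SENTRY_DSN"],
    traces_sample_rate=0.1,  # send 10% of performance traces
    environment="production",
)
```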
For production deployments, integrate Sentry to capture unhandled exceptions with full stack traces.
Debugging Tools
FastAPI Debug Mode
Enable debug mode for development only; it returns full tracebacks in error responses.
Interactive API Docs
FastAPI provides interactive documentation:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
- Test API endpoints interactively
- View request/response schemas
- Debug authentication issues
Request Logging
Log all incoming requests with HTTP middleware.
Production Monitoring
Health Checks
Implement comprehensive health checks that verify downstream dependencies, not just process liveness.
Alerting
Set up alerts for:
- API errors (>5% error rate)
- High latency (>5s p95)
- Service unavailability
- Rate limit exhaustion
Monitoring Dashboard
Create a monitoring dashboard with:
- Request metrics: throughput, latency, errors
- LLM metrics: token usage, cost, latency
- Agent metrics: success rate, duration
- System metrics: CPU, memory, disk
Best Practices
Use structured logging
Always use structured fields instead of string interpolation:
Trace critical paths
Use @observe or @traced on all functions that:
- Make LLM API calls
- Call external services
- Are part of the critical path
- Have significant latency
Include context in logs
Add relevant IDs to every log message:
Sample high-volume logs
For very frequent operations, sample logs:
Troubleshooting
Laminar Not Receiving Traces
- Verify the API key is set: echo $LMNR_PROJECT_API_KEY
- Check initialization logs: grep "Laminar" logs.txt
- Test with a simple trace
- View traces at app.lmnr.ai
Missing Log Context
If log messages are missing context, confirm the fields were attached with logger.bind() or logger.contextualize() at the call site.
Performance Impact
If observability is impacting performance:
- Reduce the log level in production: SPECTER_LOG_LEVEL=WARNING
- Sample traces: only trace 10% of requests
- Disable debug mode: debug=False in FastAPI
- Use async logging (Loguru's enqueue=True option)
Next Steps
Laminar Tracing
Deep dive into Laminar integration
Performance
Optimize for production performance
Deployment
Deploy with production monitoring
Troubleshooting
Debug common issues