What is Observability?
Observability helps you understand what’s happening inside your AI applications by capturing:
- Traces - Complete execution paths showing agent steps, tool calls, and model interactions
- Logs - Structured log messages with automatic trace correlation
- Metrics - Performance data like token usage, latency, and error rates
Key Features
Automatic Instrumentation
Mastra automatically traces:
- Agent generation and streaming
- Workflow execution and steps
- Tool and function calls
- Model API requests
- MCP (Model Context Protocol) tool execution
- Memory operations
Span Types
Mastra creates different span types for different operations, such as agent runs, workflow steps, tool calls, and model requests.
Trace Attributes
Each span captures relevant metadata about its operation.
Quick Start
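A basic observability setup only requires registering a telemetry configuration when creating the Mastra instance. The sketch below assumes the `@mastra/core` package and a `telemetry` option with an OTLP exporter; treat the exact option names as assumptions and check them against your Mastra version:

```typescript
import { Mastra } from "@mastra/core";

// Register telemetry when constructing the Mastra instance.
// Option names here are assumptions; verify against your Mastra version.
export const mastra = new Mastra({
  telemetry: {
    serviceName: "my-app",          // name shown on exported traces
    enabled: true,
    export: {
      type: "otlp",                  // ship traces via OpenTelemetry protocol
      endpoint: "http://localhost:4318",
    },
  },
});
```

With this in place, agent, workflow, and tool spans are emitted automatically; no per-call instrumentation is needed.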
Supported Platforms
Mastra integrates with popular observability platforms:
- Langfuse - LLM observability and prompt management
- Braintrust - AI product analytics
- OpenTelemetry - Standard telemetry protocol
- Custom exporters - Build your own integration
Trace Visualization
Traces show the complete execution flow, from the top-level agent call down to individual tool and model interactions.
Token Usage Tracking
Token consumption is tracked automatically for every model call.
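Token counts recorded on model-call spans can be rolled up per trace. A small illustrative sketch (the span shape here is hypothetical, not Mastra's actual span type):

```typescript
// Hypothetical span record carrying token counts
// (illustrative shape, not Mastra's actual span type).
interface ModelSpan {
  name: string;
  promptTokens: number;
  completionTokens: number;
}

// Roll up total token usage across all model-call spans in a trace.
function totalTokens(spans: ModelSpan[]): {
  prompt: number;
  completion: number;
  total: number;
} {
  const prompt = spans.reduce((sum, s) => sum + s.promptTokens, 0);
  const completion = spans.reduce((sum, s) => sum + s.completionTokens, 0);
  return { prompt, completion, total: prompt + completion };
}

const usage = totalTokens([
  { name: "agent.generate", promptTokens: 120, completionTokens: 48 },
  { name: "tool.summarize", promptTokens: 300, completionTokens: 95 },
]);
// usage.total === 563
```

Aggregates like this are what platforms such as Langfuse surface as per-trace cost and usage.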
Context Propagation
Request context flows through traces automatically, so child spans inherit identifiers and attributes from their parent.
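The inheritance can be pictured with a small sketch: attributes set on the request context appear on every span created beneath it. The types and `startSpan` helper are hypothetical, not Mastra's actual API:

```typescript
// Illustrative context-propagation sketch (hypothetical types,
// not Mastra's actual API).
interface TraceContext {
  traceId: string;
  attributes: Record<string, string>;
}

interface Span {
  name: string;
  traceId: string;
  attributes: Record<string, string>;
}

// Child spans copy the trace id and inherited attributes from the context.
function startSpan(
  ctx: TraceContext,
  name: string,
  extra: Record<string, string> = {},
): Span {
  return { name, traceId: ctx.traceId, attributes: { ...ctx.attributes, ...extra } };
}

const ctx: TraceContext = { traceId: "trace-123", attributes: { userId: "u-42" } };
const agentSpan = startSpan(ctx, "agent.generate");
const toolSpan = startSpan(ctx, "tool.call", { toolName: "search" });
// Both spans share traceId "trace-123" and carry userId "u-42".
```

This is why filtering a trace by a request-level attribute (a user id, a session id) returns every nested operation it triggered.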
Privacy Controls
You control what data is captured and can redact sensitive fields before they are exported.
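A redaction pass can be as simple as replacing configured attribute keys before export. The `redact` helper below is a hypothetical illustration, not a Mastra API:

```typescript
// Sketch of a privacy filter: replace configured sensitive keys in span
// attributes before export. (Hypothetical helper, not a Mastra API.)
function redact(
  attributes: Record<string, unknown>,
  sensitiveKeys: string[],
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(attributes)) {
    out[key] = sensitiveKeys.includes(key) ? "[REDACTED]" : value;
  }
  return out;
}

const safe = redact(
  { prompt: "Summarize this contract", apiKey: "sk-secret", latencyMs: 840 },
  ["apiKey"],
);
// safe.apiKey === "[REDACTED]"; other fields pass through unchanged.
```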
Configuration Options
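The knobs discussed on this page can be summarized as a single configuration shape. The interface below is a hypothetical consolidation for illustration; Mastra's real option names may differ:

```typescript
// Hypothetical shape collecting the observability concerns covered above.
// Mastra's actual configuration option names may differ.
interface ObservabilityConfig {
  serviceName: string;   // name shown on exported traces
  enabled: boolean;      // master on/off switch
  sampleRate: number;    // fraction of traces to keep, 0..1
  redactKeys: string[];  // attribute keys to strip before export
  exporter: "otlp" | "langfuse" | "braintrust" | "custom";
}

const defaults: ObservabilityConfig = {
  serviceName: "my-app",
  enabled: true,
  sampleRate: 1,         // keep every trace; lower this in high-volume apps
  redactKeys: [],
  exporter: "otlp",
};
```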
Benefits
Debug Issues
Understand failures by viewing complete execution traces
Optimize Performance
Identify bottlenecks and reduce latency
Monitor Costs
Track token usage and API costs
Improve Quality
Add feedback and scores to production traces
Next Steps
Tracing
Learn about OpenTelemetry tracing
Logging
Configure structured logging