Mastra’s observability system provides comprehensive tracing, logging, and metrics for your AI agents and workflows. Built on OpenTelemetry standards, it integrates with popular observability platforms.

What is Observability?

Observability helps you understand what’s happening inside your AI applications by capturing:
  • Traces - Complete execution paths showing agent steps, tool calls, and model interactions
  • Logs - Structured log messages with automatic trace correlation
  • Metrics - Performance data like token usage, latency, and error rates

Key Features

Automatic Instrumentation

Mastra automatically traces:
  • Agent generation and streaming
  • Workflow execution and steps
  • Tool and function calls
  • Model API requests
  • MCP (Model Context Protocol) tool execution
  • Memory operations

Span Types

Mastra creates different span types for different operations:
enum SpanType {
  AGENT_RUN = 'agent_run',           // Agent execution
  MODEL_GENERATION = 'model_generation', // Model API calls
  MODEL_STEP = 'model_step',          // Individual model steps
  TOOL_CALL = 'tool_call',            // Tool execution
  WORKFLOW_RUN = 'workflow_run',      // Workflow execution
  WORKFLOW_STEP = 'workflow_step',    // Workflow steps
  PROCESSOR_RUN = 'processor_run',    // Input/output processors
  MCP_TOOL_CALL = 'mcp_tool_call',    // MCP tool calls
}

Trace Attributes

Each span captures relevant metadata:
// Agent Run Attributes
{
  conversationId: 'thread-123',
  instructions: 'You are a helpful assistant',
  availableTools: ['calculator', 'search'],
  maxSteps: 5
}

// Model Generation Attributes
{
  model: 'gpt-4',
  provider: 'openai',
  usage: {
    inputTokens: 150,
    outputTokens: 75,
    inputDetails: {
      text: 100,
      cacheRead: 50  // Cached tokens
    }
  },
  temperature: 0.7,
  streaming: true
}

// Tool Call Attributes
{
  toolType: 'function',
  toolDescription: 'Search the web',
  success: true
}

Quick Start

Basic observability setup:
import { Mastra } from '@mastra/core';

const mastra = new Mastra({
  observability: {
    enabled: true,
    
    // Optional: Configure providers
    providers: {
      langfuse: {
        publicKey: process.env.LANGFUSE_PUBLIC_KEY,
        secretKey: process.env.LANGFUSE_SECRET_KEY,
      },
    },
  },
  
  agents: {
    myAgent: {
      name: 'My Agent',
      instructions: 'You are helpful',
      model: 'gpt-4',
    },
  },
});

// All agent calls are automatically traced
const result = await mastra.getAgent('myAgent').generate({
  messages: [{ role: 'user', content: 'Hello' }],
  
  // Optional: Add trace metadata
  tracingOptions: {
    metadata: { userId: '123', sessionId: 'abc' },
    tags: ['production', 'user-query'],
  },
});

console.log('Trace ID:', result.traceId);

Supported Platforms

Mastra integrates with popular observability platforms:
  • Langfuse - LLM observability and prompt management
  • Braintrust - AI product analytics
  • OpenTelemetry - Standard telemetry protocol
  • Custom exporters - Build your own integration
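Multiple providers can be configured side by side. The sketch below follows the `ObservabilityConfig` interface shown later on this page; the Langfuse options come from the Quick Start above, but the exact option shapes for Braintrust and OpenTelemetry (`apiKey`, `endpoint`) are assumptions and may differ from the real provider configs.

```typescript
import { Mastra } from '@mastra/core';

const mastra = new Mastra({
  observability: {
    enabled: true,
    providers: {
      // Langfuse: LLM observability and prompt management
      langfuse: {
        publicKey: process.env.LANGFUSE_PUBLIC_KEY,
        secretKey: process.env.LANGFUSE_SECRET_KEY,
      },
      // Braintrust: AI product analytics (option shape assumed)
      braintrust: {
        apiKey: process.env.BRAINTRUST_API_KEY,
      },
      // OpenTelemetry: export to an OTLP-compatible backend (option shape assumed)
      opentelemetry: {
        endpoint: process.env.OTEL_EXPORTER_OTLP_ENDPOINT,
      },
    },
  },
});
```

Spans are emitted once and fanned out to every configured provider, so adding a platform is a config change rather than a code change.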

Trace Visualization

Traces show the complete execution flow:
Agent Run (agent_run)
├── Model Generation (model_generation)
│   ├── Model Step 0 (model_step)
│   │   └── Tool Selection
│   └── Model Step 1 (model_step)
│       └── Response Generation
├── Tool Call: calculator (tool_call)
│   └── Result: 42
└── Processor Run (processor_run)
    └── Format output
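The same tree can be rendered programmatically. This is a hypothetical sketch: this page only documents `getTrace` and `rootSpan`, so the `children` field and the `SpanNode` shape below are illustrative assumptions, not the real span API.

```typescript
// Hypothetical span shape; `children` is assumed for illustration only.
interface SpanNode {
  name: string;
  type: string;
  children?: SpanNode[];
}

// Flatten a trace tree into indented lines, mirroring the diagram above.
function spanTreeLines(span: SpanNode, depth = 0): string[] {
  const line = `${'  '.repeat(depth)}${span.name} (${span.type})`;
  const childLines = (span.children ?? []).flatMap((c) =>
    spanTreeLines(c, depth + 1),
  );
  return [line, ...childLines];
}

// Example tree shaped like the visualization above
const root: SpanNode = {
  name: 'Agent Run',
  type: 'agent_run',
  children: [
    { name: 'Model Generation', type: 'model_generation' },
    { name: 'Tool Call: calculator', type: 'tool_call' },
  ],
};

console.log(spanTreeLines(root).join('\n'));
```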

Token Usage Tracking

Automatic tracking of token consumption:
// Access token usage from result
const result = await agent.generate({
  messages: [{ role: 'user', content: 'Hello' }],
});

// Token details are captured in traces
const trace = await mastra.getTrace(result.traceId!);
console.log(trace.rootSpan.attributes?.usage);
// {
//   inputTokens: 150,
//   outputTokens: 75,
//   inputDetails: {
//     text: 100,
//     cacheRead: 50  // Anthropic cache hits
//   }
// }

Context Propagation

Request context automatically flows through traces:
import { RequestContext } from '@mastra/core/request-context';

const requestContext = new RequestContext();
requestContext.set('userId', 'user-123');
requestContext.set('sessionId', 'session-abc');

const result = await agent.generate({
  messages: [{ role: 'user', content: 'Hello' }],
  requestContext,
  
  // Extract context fields as trace metadata
  tracingOptions: {
    requestContextKeys: ['userId', 'sessionId'],
  },
});

Privacy Controls

Control what data is captured:
const result = await agent.generate({
  messages: [{ role: 'user', content: 'Sensitive data' }],
  
  tracingOptions: {
    hideInput: true,   // Don't log input messages
    hideOutput: true,  // Don't log output messages
  },
});

Configuration Options

interface ObservabilityConfig {
  enabled?: boolean;           // Enable/disable observability
  
  providers?: {                // Platform integrations
    langfuse?: LangfuseConfig;
    braintrust?: BraintrustConfig;
    opentelemetry?: OtelConfig;
  };
  
  requestContextKeys?: string[]; // Auto-extract context fields
  
  samplingRate?: number;       // Sample rate (0-1)
  
  hideInput?: boolean;         // Hide input by default
  hideOutput?: boolean;        // Hide output by default
}
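Putting the interface above to work, a fuller setup might look like the following sketch. Field names come from `ObservabilityConfig`; the specific values (a 10% sample rate, redacting input by default) are illustrative choices, not documented defaults.

```typescript
import { Mastra } from '@mastra/core';

const mastra = new Mastra({
  observability: {
    enabled: true,
    // Record roughly 10% of traces to limit overhead and storage
    samplingRate: 0.1,
    // Redact input messages by default; individual calls can still
    // override this via tracingOptions
    hideInput: true,
    hideOutput: false,
    // Copy these RequestContext fields onto every trace automatically
    requestContextKeys: ['userId', 'sessionId'],
  },
});
```

Per-call `tracingOptions` (as in the Privacy Controls example above) take precedence over these instance-wide defaults, so sensitive endpoints can tighten redaction without changing global config.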

Benefits

Debug Issues

Understand failures by viewing complete execution traces

Optimize Performance

Identify bottlenecks and reduce latency

Monitor Costs

Track token usage and API costs

Improve Quality

Add feedback and scores to production traces

Next Steps

Tracing

Learn about OpenTelemetry tracing

Logging

Configure structured logging
