Observatory provides seamless integration with Mastra through a custom AI tracing exporter. This enables automatic capture of all agent executions, workflows, and LLM calls in your Mastra applications.

Overview

The @contextcompany/mastra package provides a TCCMastraExporter that integrates with Mastra’s built-in AI tracing system to capture:
  • Agent executions and workflows
  • LLM calls (generations, embeddings, etc.)
  • Tool calls and results
  • Streaming responses
  • Token usage and costs
  • Custom metadata

Installation

1. Install the package

npm install @contextcompany/mastra
This package requires @mastra/core version 0.24.0 or higher as a peer dependency.
2. Set your API key

Add your Observatory API key to your environment variables:
.env
TCC_API_KEY=your_api_key_here
3. Configure the exporter

Add the Observatory exporter to your Mastra configuration:
mastra.config.ts
import { Mastra } from '@mastra/core';
import { TCCMastraExporter } from '@contextcompany/mastra';

export const mastra = new Mastra({
  aiTracing: {
    exporters: [
      new TCCMastraExporter(),
    ],
  },
});

Configuration

The TCCMastraExporter accepts an optional configuration object:
mastra.config.ts
import { TCCMastraExporter } from '@contextcompany/mastra';

export const mastra = new Mastra({
  aiTracing: {
    exporters: [
      new TCCMastraExporter({
        apiKey: 'your_api_key',      // Optional: override TCC_API_KEY env var
        endpoint: 'custom_url',       // Optional: custom ingestion endpoint
        debug: true,                  // Optional: enable debug logging
      }),
    ],
  },
});

Configuration Options

apiKey (string): Your Observatory API key. Overrides the TCC_API_KEY environment variable.
endpoint (string): Custom ingestion endpoint URL. Defaults to Observatory’s production endpoint.
debug (boolean, default: false): Enable debug logging to see detailed trace information in the console.

Usage Examples

Basic Agent Usage

Once configured, all your Mastra agents are automatically instrumented:
import { Agent } from '@mastra/core';
import { openai } from '@mastra/openai';

const agent = new Agent({
  name: 'weather-assistant',
  model: openai('gpt-4'),
  instructions: 'You are a helpful weather assistant.',
});

// This execution is automatically traced
const result = await agent.generate('What\'s the weather in Tokyo?');
console.log(result.text);
Observatory captures:
  • Agent configuration
  • User prompt
  • Model selection
  • Complete response
  • Token usage
  • Execution time

Tracking Runs with Custom Metadata

Add custom metadata to track runs and sessions:
const result = await agent.generate(
  'Tell me about the weather',
  {
    metadata: {
      'tcc.runId': 'run_abc123',      // Custom run ID
      'tcc.sessionId': 'session_xyz',  // Session grouping
      userId: 'user_456',               // Custom metadata
      source: 'mobile-app',
    },
  }
);
Use the tcc.runId metadata key to specify a custom run ID. If not provided, Observatory generates one automatically.

Using Workflows

Mastra workflows are fully supported:
import { Workflow, Step } from '@mastra/core';
import { z } from 'zod';

const workflow = new Workflow({
  name: 'data-pipeline',
  triggerSchema: z.object({
    input: z.string(),
  }),
});

workflow
  .step(new Step({
    id: 'analyze',
    execute: async ({ context }) => {
      const analysis = await agent.generate(
        `Analyze: ${context.triggerData.input}`
      );
      return { analysis: analysis.text };
    },
  }))
  .step(new Step({
    id: 'summarize',
    execute: async ({ context }) => {
      const summary = await agent.generate(
        `Summarize: ${context.stepResults.analyze.analysis}`
      );
      return { summary: summary.text };
    },
  }))
  .commit();

// Execute workflow - all steps are traced
const result = await workflow.execute({
  triggerData: { input: 'Climate change impacts' },
  metadata: {
    'tcc.runId': 'workflow_run_123',
  },
});
Each step in the workflow appears as a separate span in the trace, showing:
  • Step dependencies
  • Individual step timing
  • Data flow between steps

Using Tools

Tools are automatically captured:
import { Agent, createTool } from '@mastra/core';
import { openai } from '@mastra/openai';
import { z } from 'zod';

const weatherTool = createTool({
  id: 'get-weather',
  description: 'Get current weather for a location',
  inputSchema: z.object({
    location: z.string(),
  }),
  execute: async ({ location }) => {
    // Fetch weather data
    return {
      temperature: 72,
      conditions: 'sunny',
      location,
    };
  },
});

const agent = new Agent({
  name: 'weather-bot',
  model: openai('gpt-4'),
  tools: { getWeather: weatherTool },
});

const result = await agent.generate(
  'What\'s the weather in San Francisco?'
);
Observatory captures:
  • Tool definitions
  • Tool invocation arguments
  • Tool execution results
  • Tool execution time

Streaming Responses

Streaming is fully supported:
const stream = await agent.stream('Write a long story about AI');

for await (const chunk of stream) {
  process.stdout.write(chunk.text);
}
The complete streamed response is captured once the stream completes.

Run and Session Tracking

Run IDs

Specify a custom run ID using metadata:
const runId = crypto.randomUUID();

const result = await agent.generate('Hello', {
  metadata: {
    'tcc.runId': runId,
  },
});

// Later, submit feedback for this run
import { submitFeedback } from '@contextcompany/mastra';
await submitFeedback({ runId, score: 'thumbs_up' });

Session IDs

Group related runs together:
const sessionId = crypto.randomUUID();

// First message
await agent.generate('Hello, I need help', {
  metadata: { 'tcc.sessionId': sessionId },
});

// Follow-up message
await agent.generate('Can you explain more?', {
  metadata: { 'tcc.sessionId': sessionId },
});

Submitting Feedback

Collect user feedback for specific runs:
import { submitFeedback } from '@contextcompany/mastra';

const runId = 'run_abc123';

// After the agent execution
await submitFeedback({
  runId,
  score: 'thumbs_up', // or 'thumbs_down'
});

Environment Variables

TCC_API_KEY (string, required): Your Observatory API key. Get it from the Observatory dashboard.
TCC_URL (string): Custom ingestion endpoint URL. Only needed for self-hosted instances.
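
Both variables can also be exported directly in the shell. The TCC_URL value below is a placeholder illustrating a self-hosted deployment, not a real Observatory endpoint:

```shell
# Required: your Observatory API key
export TCC_API_KEY=your_api_key_here

# Optional: only for self-hosted instances (placeholder URL)
export TCC_URL=https://observatory.internal.example.com/ingest
```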

How It Works

The integration works through Mastra’s AI tracing system:
  1. Span Collection: The exporter receives span events from Mastra’s tracer
  2. Batching: Spans are collected in memory until the root span (agent run) completes
  3. Export: When the root span ends, all spans are sent to Observatory in a single batch
  4. Metadata Extraction: Custom metadata and run IDs are extracted from the root span
This approach ensures:
  • Complete traces (all spans are sent together)
  • Accurate timing (captures the full execution)
  • Minimal overhead (single network request per run)
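
The collect-then-flush behavior described above can be sketched in a few lines. This is illustrative only: the SpanEvent shape and the class below are simplified stand-ins, not Mastra’s actual AITracingEvent types or the real TCCMastraExporter implementation.

```typescript
// Simplified span event shape, for illustration only.
interface SpanEvent {
  spanId: string;
  parentSpanId?: string; // undefined for the root span
  name: string;
  ended: boolean;
}

// Minimal sketch of the collect-then-flush pattern:
// spans accumulate in memory and are exported as one batch
// once the root span (the agent run) completes.
class BatchingExporter {
  private buffer: SpanEvent[] = [];
  public exported: SpanEvent[][] = []; // stands in for network requests

  exportEvent(event: SpanEvent): void {
    this.buffer.push(event);
    // A root span has no parent; when it ends, flush the whole run.
    if (event.parentSpanId === undefined && event.ended) {
      this.flush();
    }
  }

  // Export any remaining spans before shutdown.
  shutdown(): void {
    this.flush();
  }

  private flush(): void {
    if (this.buffer.length === 0) return;
    this.exported.push(this.buffer); // one "request" per run
    this.buffer = [];
  }
}
```

Because nothing is sent until the root span ends, a run always arrives as a single, complete batch.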

API Reference

TCCMastraExporter

Implements Mastra’s AITracingExporter interface. Constructor:
new TCCMastraExporter(config?: TCCMastraExporterConfig)
Config Options:
apiKey (string): API key for authentication. Defaults to the TCC_API_KEY environment variable.
endpoint (string): Custom ingestion endpoint. Defaults to Observatory’s production endpoint.
debug (boolean, default: false): Enable debug logging.
Methods:
  • exportEvent(event: AITracingEvent): Promise<void> - Called by Mastra for each span event
  • shutdown(): Promise<void> - Exports any remaining traces before shutdown
  • init(config: TracingConfig): void - Called by Mastra during initialization

Troubleshooting

If traces are not appearing in Observatory:
  1. Verify TCC_API_KEY is set correctly
  2. Enable debug mode:
    new TCCMastraExporter({ debug: true })
    
  3. Check console for error messages
  4. Ensure agent execution completes (traces are sent after completion)
If traces still don’t appear, make sure:
  1. Your Mastra version is 0.24.0 or higher
  2. AI tracing is enabled in your Mastra config
  3. The exporter is added to the exporters array
Metadata must be passed in the agent’s metadata option:
await agent.generate('prompt', {
  metadata: { key: 'value' }
});
Metadata placed in the prompt or anywhere else will not be captured.
API keys with the dev_ prefix automatically route to the development endpoint. Production keys route to production. To override, use the endpoint config option.
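
That routing rule can be illustrated as follows. The URLs and the resolveEndpoint helper here are placeholders for illustration, not Observatory’s real endpoints or its actual implementation:

```typescript
// Placeholder URLs, for illustration only.
const PROD_ENDPOINT = 'https://example.com/ingest';
const DEV_ENDPOINT = 'https://dev.example.com/ingest';

// Keys prefixed with `dev_` route to the development endpoint;
// an explicit `endpoint` option always takes precedence.
function resolveEndpoint(apiKey: string, override?: string): string {
  if (override) return override;
  return apiKey.startsWith('dev_') ? DEV_ENDPOINT : PROD_ENDPOINT;
}
```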

Performance Considerations

  • Memory: Spans are held in memory until the root span completes. For very long-running agents, consider the memory footprint.
  • Network: One HTTP request per agent run after completion.
  • Latency: Zero impact on agent execution time - export happens asynchronously after completion.

Next Steps

  • Configuration: Learn about configuration options
  • Sessions & Runs: Learn more about tracking runs and sessions
  • Feedback: Set up user feedback collection
  • API Reference: Complete API documentation
