Observatory provides automatic instrumentation for Vercel AI SDK through OpenTelemetry integration. This enables comprehensive observability for AI-powered Next.js applications without modifying your existing code.

Overview

The @contextcompany/otel package integrates with Vercel’s OpenTelemetry implementation to automatically capture:
  • LLM calls and streaming responses
  • Tool/function calls and results
  • Token usage and costs
  • Latency and performance metrics
  • Errors and exceptions

Installation

1. Install the package

npm install @contextcompany/otel @vercel/otel @opentelemetry/api
2. Set your API key

Add your Observatory API key to your environment variables:
.env.local
TCC_API_KEY=your_api_key_here
Get your API key from the Observatory dashboard.
3. Create instrumentation file

Create or update instrumentation.ts in your project root:
instrumentation.ts
import { registerOTelTCC } from '@contextcompany/otel/nextjs';

export function register() {
  registerOTelTCC();
}
For Next.js 15+, this file should be at the root of your project. For older versions, place it in the src directory if you’re using one.
4. Enable instrumentation in Next.js

Update your next.config.js to enable the instrumentation hook. This is needed on Next.js 14 and earlier; on Next.js 15+, instrumentation.ts is supported by default and this flag is no longer required:
next.config.js
const nextConfig = {
  experimental: {
    instrumentationHook: true,
  },
};

module.exports = nextConfig;
5. Restart your development server

npm run dev
Your AI SDK calls will now be automatically instrumented and sent to Observatory.

Configuration

The registerOTelTCC function accepts an optional configuration object:
instrumentation.ts
import { registerOTelTCC } from '@contextcompany/otel/nextjs';

export function register() {
  registerOTelTCC({
    apiKey: 'your_api_key',       // Optional: override TCC_API_KEY env var
    url: 'custom_endpoint',       // Optional: custom ingestion endpoint
    debug: true,                  // Optional: enable debug logging
    local: false,                 // Optional: enable local mode for testing
    config: {                     // Optional: Vercel OTel configuration
      serviceName: 'my-ai-app',
    },
  });
}

Configuration Options

apiKey (string)
Your Observatory API key. Overrides the TCC_API_KEY environment variable.

url (string)
Custom ingestion endpoint URL. Defaults to Observatory’s production endpoint.

debug (boolean, default: false)
Enable debug logging to see detailed instrumentation information in the console.

local (boolean, default: false)
Enable local mode for development. Starts a WebSocket server for real-time trace viewing without sending data to the cloud.

baseProcessor (SpanProcessor)
Optional OpenTelemetry span processor to run alongside Observatory’s processor.

config (Partial<Configuration>)
Additional configuration options passed to Vercel’s registerOTel function. See the Vercel OTel docs for available options.
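As a concrete illustration of the baseProcessor option, the instrumentation.ts sketch below attaches a console-logging span processor alongside Observatory’s own. SimpleSpanProcessor and ConsoleSpanExporter are standard classes from @opentelemetry/sdk-trace-base; how Observatory orders its processor relative to yours is an assumption here, not documented behavior:

```typescript
import { registerOTelTCC } from '@contextcompany/otel/nextjs';
import {
  SimpleSpanProcessor,
  ConsoleSpanExporter,
} from '@opentelemetry/sdk-trace-base';

export function register() {
  registerOTelTCC({
    // Print every finished span to the console in addition to
    // Observatory's own export pipeline (handy while developing).
    baseProcessor: new SimpleSpanProcessor(new ConsoleSpanExporter()),
    config: {
      serviceName: 'my-ai-app',
    },
  });
}
```

In production you would typically prefer a batching processor over SimpleSpanProcessor, which exports each span synchronously as it ends.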

Usage Examples

Basic AI SDK Usage

No changes needed to your existing AI SDK code:
app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4'),
    messages,
  });

  return result.toDataStreamResponse();
}
This code automatically generates traces including:
  • LLM request and response
  • Token usage (prompt, completion, total)
  • Latency metrics
  • Model information

With Tools

Tool calls are automatically captured:
import { openai } from '@ai-sdk/openai';
import { generateText, tool } from 'ai';
import { z } from 'zod';

const result = await generateText({
  model: openai('gpt-4'),
  messages: [{ role: 'user', content: 'What is the weather in SF?' }],
  tools: {
    getWeather: tool({
      description: 'Get the weather for a location',
      parameters: z.object({
        location: z.string(),
      }),
      execute: async ({ location }) => {
        // Tool implementation
        return { temperature: 72, conditions: 'sunny' };
      },
    }),
  },
});
Observatory captures:
  • Tool definitions sent to the LLM
  • Tool call arguments
  • Tool execution results
  • Tool execution time

Streaming Responses

Streaming is fully supported:
app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4-turbo'),
    messages,
    onFinish: ({ text, usage, finishReason }) => {
      console.log('Stream completed:', { text, usage, finishReason });
    },
  });

  return result.toDataStreamResponse();
}
The complete streamed response is captured once the stream finishes.

Local Development Mode

For local development, enable local mode to view traces without sending them to the cloud:
instrumentation.ts
import { registerOTelTCC } from '@contextcompany/otel/nextjs';

export function register() {
  registerOTelTCC({
    local: true,
  });
}
This starts a WebSocket server on your local machine. You can connect to it using the Observatory local viewer.
Local mode can be used alongside cloud mode by providing an API key. Traces will be sent to both destinations.
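A minimal sketch of that dual-destination setup, assuming TCC_API_KEY is set in your environment:

```typescript
import { registerOTelTCC } from '@contextcompany/otel/nextjs';

export function register() {
  registerOTelTCC({
    // Serve traces to the local viewer over WebSocket...
    local: true,
    // ...and, because an API key is provided, also send them to the cloud.
    apiKey: process.env.TCC_API_KEY,
  });
}
```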

Environment Variables

TCC_API_KEY (string, required)
Your Observatory API key. Get it from the Observatory dashboard.

TCC_URL (string)
Custom ingestion endpoint URL. Only needed if using a self-hosted instance.

TCC_DEBUG (string)
Set to 1 or true to enable debug logging.
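To make the accepted TCC_DEBUG values concrete, here is a small helper of my own (not the package’s actual source) mirroring a "1"-or-"true" environment flag check:

```typescript
// Hypothetical helper: treat an env var as enabled only for "1" or "true".
function envFlagEnabled(value: string | undefined): boolean {
  return value === '1' || value === 'true';
}

console.log(envFlagEnabled('1'));       // true
console.log(envFlagEnabled('true'));    // true
console.log(envFlagEnabled(undefined)); // false
```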

Submitting Feedback

You can submit user feedback for specific runs:
import { submitFeedback } from '@contextcompany/otel';

// After getting the runId from your trace
await submitFeedback({
  runId: 'run_abc123',
  score: 'thumbs_up', // or 'thumbs_down'
});

Troubleshooting

Traces not appearing

Ensure experimental.instrumentationHook is set to true in your next.config.js (Next.js 14 and earlier; this flag is not needed on Next.js 15+).
Verify your TCC_API_KEY is set correctly in your environment variables. Enable debug mode to see connection logs:
registerOTelTCC({ debug: true });
The instrumentation only works in the Node.js runtime. If you’re using Edge runtime, traces won’t be captured. Check your route files:
// This will NOT work with instrumentation
export const runtime = 'edge';
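To guarantee a route is captured, you can pin it to the Node.js runtime with the standard Next.js route segment config (shown here for an app/api/chat/route.ts file):

```typescript
// Force the Node.js runtime so OpenTelemetry instrumentation applies.
export const runtime = 'nodejs';
```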
If you’re using a development API key (prefix dev_), ensure you’re not pointing to the production endpoint. The package automatically routes dev keys to the dev environment.

Performance impact

The instrumentation uses OpenTelemetry’s batching span processor, which:
  • Batches traces before sending
  • Sends traces asynchronously
  • Has minimal impact on request latency (typically less than 5ms)

Vercel Deployment

The instrumentation works seamlessly on Vercel:
  1. Ensure your TCC_API_KEY is added to your Vercel project’s environment variables
  2. The instrumentation hook is automatically enabled during deployment
  3. Traces will appear in Observatory for all production traffic
If using Vercel’s built-in OpenTelemetry integration, you may need to configure it to work alongside Observatory. Use the baseProcessor option to chain processors.

Next Steps

Widget

Learn about the real-time visualization widget

Custom Instrumentation

Add manual instrumentation for custom logic

API Reference

Complete API documentation

Feedback

Learn how to collect and analyze user feedback
