This guide will walk you through setting up Observatory in a Next.js application using the Vercel AI SDK. By the end, you'll have real-time AI agent observability running locally, with no account or API key required.
This quickstart focuses on local mode with Next.js + AI SDK, which is currently the primary supported setup. For other frameworks, see AI SDK, Claude, Mastra, or Custom.

Prerequisites

Before you begin, make sure you have:
  • Node.js 18+ installed
  • An existing Next.js project (or create one with npx create-next-app@latest)
  • The Vercel AI SDK installed (npm install ai)
  • An LLM provider API key (OpenAI, Anthropic, etc.)
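If you're not sure which Node.js version you have, check from a terminal:

```shell
node --version
```

Any output of v18 or newer works.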

Step 1: Install Dependencies

Install the required Observatory packages along with OpenTelemetry dependencies:
npm install @contextcompany/otel @vercel/otel @opentelemetry/api
The @vercel/otel package provides Next.js-specific OpenTelemetry utilities that Observatory uses for instrumentation.

Step 2: Add Instrumentation to Next.js

Create an instrumentation.ts file in the root directory of your project (or inside the src folder if you’re using one). This file is used by Next.js to set up observability before your application starts.
instrumentation.ts
export async function register() {
  if (process.env.NEXT_RUNTIME === "nodejs") {
    const { registerOTelTCC } = await import("@contextcompany/otel/nextjs");
    registerOTelTCC({ local: true });
  }
}
Setting local: true enables local-first mode with no account or API key required. Observatory will run completely offline and display traces in your browser via the widget.
You can customize the instrumentation with additional options:
registerOTelTCC({
  local: true,           // Enable local mode
  debug: true,           // Enable debug logging
});
For production use with The Context Company backend:
registerOTelTCC();
Then set your API key as an environment variable:
.env
TCC_API_KEY=your_api_key_here
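A common pattern is to derive these options from the environment, so local mode is used in development and the hosted backend in production. A minimal sketch — `observatoryOptions` is a hypothetical helper (not part of the SDK), and it assumes passing `local: false` behaves like omitting the option:

```typescript
// Hypothetical helper: pick Observatory options from the environment.
// Local mode in development; hosted backend (with API key) in production.
type TCCOptions = { local?: boolean; debug?: boolean };

export function observatoryOptions(
  env: Record<string, string | undefined>,
): TCCOptions {
  const dev = env.NODE_ENV !== "production";
  return {
    local: dev,                   // offline, no API key needed
    debug: env.TCC_DEBUG === "1", // opt-in verbose logging
  };
}

// In instrumentation.ts you would then call:
// registerOTelTCC(observatoryOptions(process.env));
```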
Make sure the instrumentation.ts file is in the correct location:
  • If using src/ directory: src/instrumentation.ts
  • If not using src/: instrumentation.ts at project root
See the Next.js Instrumentation guide for more details.

Step 3: Add the Visualization Widget

Add the Observatory widget to your root layout. This provides the real-time visualization overlay in your browser.
app/layout.tsx
import Script from "next/script";

export default function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  return (
    <html lang="en">
      <head>
        {/* Add The Context Company widget */}
        <Script
          crossOrigin="anonymous"
          src="//unpkg.com/@contextcompany/widget/dist/auto.global.js"
        />
        {/* Your other scripts */}
      </head>
      <body>{children}</body>
    </html>
  );
}
The widget is loaded from unpkg.com for convenience. For production use or air-gapped environments, you can self-host the widget bundle.
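To self-host, one approach is to copy the bundle out of the installed package into your public/ directory. The dist/auto.global.js path below mirrors the unpkg URL above, but verify it against the installed package:

```shell
npm install @contextcompany/widget
cp node_modules/@contextcompany/widget/dist/auto.global.js public/
```

Then change the Script src to "/auto.global.js" so the widget is served from your own origin.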

Step 4: Enable Telemetry in AI SDK Calls

As of AI SDK v5, telemetry is experimental and requires the experimental_telemetry flag. Add this flag to all AI SDK calls you want to instrument.

Basic Example

app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4"),
    messages: messages,
    experimental_telemetry: { isEnabled: true }, // Required for Observatory
  });

  return result.toDataStreamResponse();
}

With Session and Run Tracking

For better observability, track sessions (entire conversations) and runs (individual AI calls):
app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";
import { randomUUID } from "crypto";

export async function POST(req: Request) {
  const body = await req.json();
  const { messages } = body;

  // Track conversation session across requests
  const sessionId = body.sessionId || randomUUID();
  // Track this specific AI call
  const runId = randomUUID();

  const result = streamText({
    model: openai("gpt-4"),
    messages: messages,
    experimental_telemetry: {
      isEnabled: true,
      metadata: {
        "tcc.runId": runId,        // Links to specific run
        "tcc.sessionId": sessionId, // Groups related interactions
      },
    },
  });

  return result.toDataStreamResponse();
}

With Tool Calling

Observatory automatically traces tool calls when you use AI SDK tools:
app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText, tool } from "ai";
import { z } from "zod";

const weatherTool = tool({
  description: "Get current weather for a location",
  parameters: z.object({
    location: z.string().describe("City name"),
  }),
  execute: async ({ location }) => {
    // Your weather API call
    return { temp: 72, condition: "Sunny" };
  },
});

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4"),
    messages: messages,
    tools: {
      getWeather: weatherTool,
    },
    experimental_telemetry: { isEnabled: true },
  });

  return result.toDataStreamResponse();
}
Observatory will automatically capture:
  • Tool definitions
  • Tool arguments
  • Tool results
  • Execution time
  • Any errors
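Conceptually, the data in that list comes from wrapping each tool's execute call, roughly like this — an illustrative sketch, not Observatory's actual implementation:

```typescript
// Illustrative sketch of the information recorded around a tool call:
// arguments, result or error, and wall-clock duration.
type ToolTrace = {
  name: string;
  args: unknown;
  result?: unknown;
  error?: string;
  durationMs: number;
};

export async function traceToolCall<A, R>(
  name: string,
  args: A,
  execute: (args: A) => Promise<R>,
): Promise<ToolTrace> {
  const start = Date.now();
  try {
    const result = await execute(args);
    return { name, args, result, durationMs: Date.now() - start };
  } catch (err) {
    return { name, args, error: String(err), durationMs: Date.now() - start };
  }
}
```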

Step 5: Test Your Setup

Start your Next.js development server:
npm run dev
Then:
  1. Open your application in a browser
  2. Trigger an AI interaction (chat message, etc.)
  3. Look for the Observatory widget in the bottom-right corner
  4. Click the widget to see your AI traces in real-time
(Screenshot: the Observatory widget showing traces.)

What You’ll See

The Observatory widget displays:
  • Request details — Model name, prompt, system message
  • Response data — LLM output, finish reason
  • Token usage — Input tokens (cached/uncached), output tokens
  • Timing — Total duration, time to first token
  • Tool calls — Arguments, results, execution time
  • Metadata — Session ID, run ID, custom metadata

Troubleshooting

If traces aren't appearing, check these common issues:
  1. Verify instrumentation.ts is in the correct location (root or src/)
  2. Ensure experimental_telemetry.isEnabled is set to true
  3. Restart your Next.js dev server after adding instrumentation
  4. Check browser console for any errors
  5. Verify the widget script is loading (check Network tab)
Enable debug logging:
instrumentation.ts
registerOTelTCC({ local: true, debug: true });
If the widget doesn't appear, possible causes include:
  1. Script tag is missing from layout
  2. CSP (Content Security Policy) is blocking the script
  3. Ad blocker is interfering
To check whether the script loaded, open browser DevTools → Network tab, filter by “widget”, and look for the auto.global.js file.
When deploying to production, keep these notes in mind:
  1. Make sure instrumentation.ts is included in your build
  2. For Vercel deployments, instrumentation is automatically supported
  3. For other hosting providers, ensure Node.js runtime is used
  4. Check that NEXT_RUNTIME check isn’t being stripped by your bundler
If only some calls are being traced, common reasons include:
  1. Missing experimental_telemetry.isEnabled: true on some calls
  2. Some calls are happening before instrumentation is registered
  3. Calls are being made from edge runtime (not currently supported)
Solution: Add the telemetry flag to all AI SDK calls:
// ❌ This won't be traced
streamText({ model, messages })

// ✅ This will be traced
streamText({ 
  model, 
  messages,
  experimental_telemetry: { isEnabled: true }
})

Next Steps

Now that you have Observatory running, explore these topics:

Session & Run Tracking

Learn how to track conversations and individual AI calls

User Feedback

Collect and link user feedback to specific agent runs

Custom Metadata

Add custom metadata to filter and group traces

Configuration

Configure Observatory for different environments

Example Application

For a complete working example, check out the Next.js AI SDK example in the Observatory repository:

Next.js + AI SDK Example

View a complete example with weather agent, tool calling, and feedback integration

Need Help?

If you run into issues:
