Overview
Observatory is built on OpenTelemetry, the open-source observability framework. This provides:
Standard instrumentation: Compatible with any OpenTelemetry-instrumented library
Vendor neutrality: Your data isn't locked into a proprietary format
Ecosystem integration: Works with existing OpenTelemetry tools and exporters
AI SDK support: Automatic instrumentation for the Vercel AI SDK
Observatory focuses on AI-specific observability, capturing runs, steps, and tool calls from your AI agents.
Architecture
Observatory’s OpenTelemetry integration consists of several key components:
┌─────────────────────────────────────────────────┐
│ Your AI Agent (e.g., Vercel AI SDK) │
│ - generateText(), streamText(), etc. │
└─────────────────┬───────────────────────────────┘
│
│ Emits OpenTelemetry Spans
▼
┌─────────────────────────────────────────────────┐
│ AISDKSpanProcessor │
│ - Filters spans starting with "ai." │
│ - Passes to base processor │
└─────────────────┬───────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────┐
│ RunBatchSpanProcessor │
│ - Groups spans by run ID │
│ - Batches related spans together │
│ - Exports complete runs │
└─────────────────┬───────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────┐
│ Exporter (Cloud or Local) │
│ - OTLPHttpJsonTraceExporter → Cloud API │
│ - LocalSpanExporter → WebSocket → Widget │
└─────────────────────────────────────────────────┘
Core Components
TCCSpanProcessor
The main span processor that sends telemetry to Observatory’s cloud API.
import { TCCSpanProcessor } from "@contextcompany/otel";

const processor = new TCCSpanProcessor({
  apiKey: "tcc_your_api_key",     // Or use TCC_API_KEY env var
  otlpUrl: "https://...",         // Optional: custom endpoint
  baseProcessor: customProcessor, // Optional: custom base processor
  debug: true,                    // Optional: enable debug logs
});
How it works
API Key Validation: Checks for an API key in options or the TCC_API_KEY env var
URL Selection: Auto-detects dev/prod based on the key prefix (dev_ → dev endpoint)
Exporter Creation: Creates an OTLPHttpJsonTraceExporter with auth headers
Processor Setup: Wraps the base processor (defaults to RunBatchSpanProcessor)
Span Filtering: Uses AISDKSpanProcessor to process only AI SDK spans
export class TCCSpanProcessor implements SpanProcessor {
  constructor(options: TCCSpanProcessorOptions = {}) {
    const apiKey = options.apiKey || getTCCApiKey();
    if (!apiKey) throw new Error("Missing API key");
    const url = options.otlpUrl ?? getTCCUrl(
      apiKey,
      "https://api.thecontext.company/v1/traces",
      "https://dev.thecontext.company/v1/traces"
    );
    const exporter = new OTLPHttpJsonTraceExporter({
      url,
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    const baseProcessor = options.baseProcessor ??
      new RunBatchSpanProcessor(exporter);
    this.processor = new AISDKSpanProcessor(baseProcessor);
  }
}
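The URL-selection step can be sketched as a small pure function. `selectOtlpUrl` is a hypothetical name (the real `getTCCUrl` internals are not shown here); the sketch assumes only the documented rule that keys prefixed `dev_` route to the development endpoint:

```typescript
// Hypothetical sketch of the dev/prod endpoint selection described above.
// Assumes the documented rule: keys prefixed "dev_" go to the dev endpoint.
function selectOtlpUrl(
  apiKey: string,
  prodUrl = "https://api.thecontext.company/v1/traces",
  devUrl = "https://dev.thecontext.company/v1/traces"
): string {
  return apiKey.startsWith("dev_") ? devUrl : prodUrl;
}
```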
AISDKSpanProcessor
Filters spans to only process those from the AI SDK.
export class AISDKSpanProcessor implements SpanProcessor {
  onStart(span: Span, parentContext: Context): void {
    if (span.name.startsWith("ai.")) {
      this.processor.onStart(span, parentContext);
    }
  }

  onEnd(span: ReadableSpan): void {
    if (span && span.name.startsWith("ai.")) {
      this.processor.onEnd(span);
    }
  }
}
Filtered span names:
ai.generateText
ai.streamText
ai.generateObject
ai.streamObject
ai.toolCall
And other AI SDK operations
RunBatchSpanProcessor
Groups related spans by run ID and batches them for export.
src/RunBatchSpanProcessor.ts
export class RunBatchSpanProcessor implements SpanProcessor {
  private spanIdToRunId = new Map<string, string>();
  private batches = new Map<RunId, Batch>();
  private batchTimeouts = new Map<RunId, NodeJS.Timeout>();

  onStart(span: Span, _parentContext: Context): void {
    const spanType = getSpanType(span);
    if (spanType === "run") {
      const runId = getRunIdFromSpanMetadata(span) ?? crypto.randomUUID();
      span.setAttribute("tcc.runId", runId);
      this.spanIdToRunId.set(span.spanContext().spanId, runId);
    } else if (spanType === "step" || spanType === "tool_call") {
      const parentSpanId = span.parentSpanContext?.spanId;
      const runId = parentSpanId ? this.spanIdToRunId.get(parentSpanId) : undefined;
      if (runId) span.setAttribute("tcc.runId", runId);
    }
  }

  onEnd(span: ReadableSpan): void {
    const runId = this.spanIdToRunId.get(span.spanContext().spanId);
    if (!runId) return;
    this.addToBatch(runId, span);
    // Export immediately when the run span ends
    if (getSpanType(span) === "run") {
      this.exportBatch(runId);
    }
  }
}
Batching Strategy
Run Detection: Identifies "run" spans (top-level AI operations)
Run ID Assignment: Generates or extracts a unique run ID
Hierarchy Tracking: Maps child spans to their parent run
Automatic Export: Exports the batch when the run completes
Timeout Fallback: Exports after 10 minutes if the run doesn't complete
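The batching strategy above, including the timeout fallback, can be sketched as a small standalone class. `RunBatcher` is a hypothetical simplification, not the real `RunBatchSpanProcessor`:

```typescript
// Hypothetical, simplified sketch of the run-batching strategy described above.
type RunId = string;

class RunBatcher<S> {
  private batches = new Map<RunId, S[]>();
  private timeouts = new Map<RunId, ReturnType<typeof setTimeout>>();

  constructor(
    private exportBatch: (runId: RunId, spans: S[]) => void,
    private timeoutMs = 10 * 60 * 1000 // 10-minute fallback, per the docs above
  ) {}

  add(runId: RunId, span: S): void {
    const batch = this.batches.get(runId) ?? [];
    batch.push(span);
    this.batches.set(runId, batch);
    // Arm the fallback timer on the first span seen for this run
    if (!this.timeouts.has(runId)) {
      this.timeouts.set(runId, setTimeout(() => this.flush(runId), this.timeoutMs));
    }
  }

  flush(runId: RunId): void {
    const batch = this.batches.get(runId);
    if (!batch) return;
    const timer = this.timeouts.get(runId);
    if (timer) clearTimeout(timer);
    this.timeouts.delete(runId);
    this.batches.delete(runId);
    this.exportBatch(runId, batch);
  }
}
```

In the real processor, `flush` is triggered either by the run span ending or by the fallback timer firing, whichever comes first.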
LocalSpanExporter
Exports spans to local WebSocket for development.
src/nextjs/local/LocalSpanExporter.ts
export class LocalSpanExporter extends EventEmitter implements SpanExporter {
  private _dataStore: DataStore = {};
  private _subscribers = new Set<LocalCallback>();

  export(spans: ReadableSpan[], callback: (result: ExportResult) => void): void {
    const { runs, steps, toolCalls } = shapeSpansIntoRuns(spans);
    this._upsertItemsToStore({ runs, steps, toolCalls });
    // Notify all subscribers (e.g., the widget)
    this._subscribers.forEach((cb) => cb({ runs, steps, toolCalls }));
    callback({ code: ExportResultCode.SUCCESS });
  }

  subscribe(callback: LocalCallback): () => void {
    this._subscribers.add(callback);
    return () => this._subscribers.delete(callback);
  }
}
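The subscribe/unsubscribe pattern used above is worth noting: `subscribe` returns a disposer closure, so consumers never need to hold on to the callback they registered. A minimal standalone sketch (hypothetical names, independent of the real exporter):

```typescript
// Hypothetical sketch of the disposer-returning subscription pattern above.
type Payload = { runs: unknown[]; steps: unknown[]; toolCalls: unknown[] };
type LocalCallback = (data: Payload) => void;

class SubscriberHub {
  private subscribers = new Set<LocalCallback>();

  // Returns an unsubscribe function, mirroring LocalSpanExporter.subscribe
  subscribe(cb: LocalCallback): () => void {
    this.subscribers.add(cb);
    return () => this.subscribers.delete(cb);
  }

  publish(data: Payload): void {
    this.subscribers.forEach((cb) => cb(data));
  }
}
```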
Next.js Integration
registerOTelTCC()
One-function setup for Next.js applications.
src/nextjs/instrumentation.ts
export function registerOTelTCC(opts: RegisterOpts = {}) {
  if (process.env.NEXT_RUNTIME !== "nodejs") return;

  const spanProcessors = [];
  const apiKey = opts.apiKey ?? getTCCApiKey();

  // Local mode setup
  if (opts.local) {
    startWebSocketServer();
    spanProcessors.push(tccLocalSpanProcessor());
    // Local-only mode (no API key)
    if (!apiKey) {
      return registerOTel({ spanProcessors, ...opts.config });
    }
  }

  // Cloud mode setup
  if (apiKey) {
    const tccSpanProcessor = new TCCSpanProcessor({
      apiKey,
      otlpUrl: opts.url,
      baseProcessor: opts.baseProcessor,
      debug: opts.debug,
    });
    spanProcessors.push(tccSpanProcessor);
  }

  return registerOTel({ spanProcessors, ...opts.config });
}
Usage in instrumentation.ts
export async function register() {
  if (process.env.NEXT_RUNTIME === "nodejs") {
    const { registerOTelTCC } = await import("@contextcompany/otel/nextjs");

    // Choose one of the following:

    // Cloud only
    registerOTelTCC();

    // Local only
    registerOTelTCC({ local: true });

    // Both cloud and local
    registerOTelTCC({
      local: true,
      apiKey: process.env.TCC_API_KEY,
    });
  }
}
Configuration
Environment Variables
TCC_API_KEY: Your Observatory API key. Keys starting with dev_ use the development endpoint.
TCC_API_KEY=tcc_live_abc123
TCC_API_KEY=dev_xyz789
TCC_URL: Override the default OTLP endpoint URL.
TCC_URL=https://custom.endpoint.com/v1/traces
NEXT_RUNTIME: Next.js runtime identifier. Must be "nodejs" for instrumentation to run. This is set automatically by Next.js.
TCCSpanProcessor Options
interface TCCSpanProcessorOptions {
  apiKey?: string;               // Override TCC_API_KEY
  otlpUrl?: string;              // Override TCC_URL
  baseProcessor?: SpanProcessor; // Custom base processor
  debug?: boolean;               // Enable debug logging
}
registerOTelTCC Options
interface RegisterOpts {
  url?: string;                   // OTLP endpoint URL
  apiKey?: string;                // Observatory API key
  baseProcessor?: SpanProcessor;  // Custom base processor
  config?: Partial<Configuration>; // @vercel/otel config
  debug?: boolean;                // Enable debug logging
  local?: boolean;                // Enable local mode
}
Span Types
Observatory categorizes spans into three types:
Run Spans
Top-level AI operations:
ai.generateText
ai.streamText
ai.generateObject
ai.streamObject
Attributes:
tcc.runId: Unique run identifier
ai.model.id: Model used (e.g., gpt-4)
ai.usage.promptTokens: Input tokens
ai.usage.completionTokens: Output tokens
Step Spans
Intermediate reasoning steps within a run:
Multi-step agent reasoning
Chain-of-thought steps
Agentic loops
Attributes:
tcc.runId: Parent run ID
ai.step.index: Step number
ai.step.type: Step type
Tool Call Spans
Individual tool invocations:
ai.toolCall
ai.toolCall.{toolName}
Attributes:
tcc.runId: Parent run ID
ai.toolCall.name: Tool name
ai.toolCall.args: Tool arguments (JSON)
ai.toolCall.result: Tool result (JSON)
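Since every run, step, and tool-call span carries `tcc.runId`, reassembling a run from a flat span list is a simple grouping pass. This hypothetical `groupByRunId` is a sketch in the spirit of `shapeSpansIntoRuns` (whose real signature and behavior may differ):

```typescript
// Hypothetical sketch: group flat spans into runs by the tcc.runId attribute.
type SpanLike = { name: string; attributes: Record<string, string> };

function groupByRunId(spans: SpanLike[]): Map<string, SpanLike[]> {
  const runs = new Map<string, SpanLike[]>();
  for (const span of spans) {
    const runId = span.attributes["tcc.runId"];
    if (!runId) continue; // spans with no run ID are skipped in this sketch
    const group = runs.get(runId) ?? [];
    group.push(span);
    runs.set(runId, group);
  }
  return runs;
}
```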
Custom Processors
You can provide your own span processor:
import { SimpleSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { TCCSpanProcessor } from "@contextcompany/otel";
import { CustomExporter } from "./custom-exporter";

const customExporter = new CustomExporter();
const customProcessor = new SimpleSpanProcessor(customExporter);

const tccProcessor = new TCCSpanProcessor({
  apiKey: "tcc_your_key",
  baseProcessor: customProcessor,
});
Multiple Exporters
Send spans to multiple destinations:
import { registerOTelTCC } from "@contextcompany/otel/nextjs";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

// Custom exporter for your own backend
const customExporter = new OTLPTraceExporter({
  url: "https://your-backend.com/v1/traces",
});
const customProcessor = new BatchSpanProcessor(customExporter);

registerOTelTCC({
  apiKey: process.env.TCC_API_KEY,
  config: {
    spanProcessors: [customProcessor], // Additional processors
  },
});
Exporters
OTLPHttpJsonTraceExporter
HTTP exporter using JSON encoding:
src/exporters/json/OTLPHttpJsonTraceExporter.ts
export class OTLPHttpJsonTraceExporter implements SpanExporter {
  private url: string;
  private headers?: Record<string, string>;

  constructor(config: {
    url: string;
    headers?: Record<string, string>;
  }) {
    this.url = config.url;
    this.headers = config.headers;
  }

  export(spans: ReadableSpan[], callback: (result: ExportResult) => void): void {
    fetch(this.url, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        ...this.headers,
      },
      body: JSON.stringify(this.convertSpans(spans)),
    })
      .then((response) => {
        if (response.ok) {
          callback({ code: ExportResultCode.SUCCESS });
        } else {
          callback({ code: ExportResultCode.FAILED });
        }
      })
      .catch((error) => {
        callback({ code: ExportResultCode.FAILED, error });
      });
  }
}
LocalSpanExporter
See Local Mode for details.
Debug Mode
Enable detailed logging:
import { registerOTelTCC } from "@contextcompany/otel/nextjs";

registerOTelTCC({ debug: true });
Debug output:
[TCC] Using OTLP URL: https://api.thecontext.company/v1/traces
[TCC] Using environments: ["production"].
[TCC] Began AI SDK span: ai.generateText
[TCC] Ended AI SDK span: ai.generateText
[TCC] RunBatchSpanProcessor: Sending batch run-123 to exporter
Best Practices
Use environment variables for API keys
Never hardcode API keys in your source code:

// ❌ Don't do this
registerOTelTCC({ apiKey: "tcc_live_abc123" });

// ✅ Do this
registerOTelTCC(); // Uses TCC_API_KEY from env
Enable debug mode during setup
Debug mode helps diagnose configuration issues:

registerOTelTCC({
  debug: process.env.NODE_ENV === "development",
});
Use local mode for development
Avoid sending test data to production:

registerOTelTCC({
  local: process.env.NODE_ENV === "development",
  apiKey: process.env.NODE_ENV === "production"
    ? process.env.TCC_API_KEY
    : undefined,
});
Separate dev and prod keys
Use different API keys for each environment:

# .env.development
TCC_API_KEY=dev_your_dev_key

# .env.production
TCC_API_KEY=tcc_live_your_prod_key
Advanced Usage
Custom Span Processor
Implement your own span processing logic:
import { SpanProcessor, ReadableSpan, Span } from "@opentelemetry/sdk-trace-base";
import { Context } from "@opentelemetry/api";

class CustomSpanProcessor implements SpanProcessor {
  onStart(span: Span, parentContext: Context): void {
    // Add custom attributes
    span.setAttribute("custom.attr", "value");
  }

  onEnd(span: ReadableSpan): void {
    // Process completed span
    console.log(`Span ${span.name} completed`);
  }

  shutdown(): Promise<void> {
    return Promise.resolve();
  }

  forceFlush(): Promise<void> {
    return Promise.resolve();
  }
}

registerOTelTCC({
  baseProcessor: new CustomSpanProcessor(),
});
Sampling
Control which spans are exported:
import { ParentBasedSampler, TraceIdRatioBasedSampler } from "@opentelemetry/sdk-trace-base";

registerOTelTCC({
  config: {
    sampler: new ParentBasedSampler({
      root: new TraceIdRatioBasedSampler(0.1), // Sample 10% of traces
    }),
  },
});
Resource Attributes
Add metadata to all spans:
import { Resource } from "@opentelemetry/resources";
import { SemanticResourceAttributes } from "@opentelemetry/semantic-conventions";

registerOTelTCC({
  config: {
    resource: new Resource({
      [SemanticResourceAttributes.SERVICE_NAME]: "my-ai-agent",
      [SemanticResourceAttributes.SERVICE_VERSION]: "1.0.0",
      [SemanticResourceAttributes.DEPLOYMENT_ENVIRONMENT]: "production",
    }),
  },
});
Troubleshooting
Spans not appearing
Check API key: Verify TCC_API_KEY is set correctly
Enable debug mode: See what's being processed
Verify runtime: Ensure NEXT_RUNTIME === "nodejs"
Check span names: Only ai.* spans are exported
Performance
Use RunBatchSpanProcessor for batching (default)
Avoid synchronous exporters in production
Consider sampling for high-volume applications
Memory leaks
Ensure exporters are properly shut down
Clear batch timeouts on shutdown
Use singleton pattern for exporters in local mode
Next Steps
Local Mode Set up local-first development
Feedback Collect user feedback on agent runs