The @contextcompany/otel package provides automatic instrumentation for the Vercel AI SDK via OpenTelemetry. It captures AI SDK spans and exports them to The Context Company for observability.
Installation
npm install @contextcompany/otel
Setup
Register the TCC span processor in your OpenTelemetry configuration:
// instrumentation.ts (or instrumentation.node.ts)
import { registerOTelTCC } from '@contextcompany/otel';

export async function register() {
  registerOTelTCC();
}
// index.ts
import { registerOTelTCC } from '@contextcompany/otel';

registerOTelTCC();

// Rest of your application code
API Reference
registerOTelTCC
Registers The Context Company’s OpenTelemetry span processor to automatically capture AI SDK telemetry.
function registerOTelTCC(options?: RegisterOpts): void
Configuration options for the span processor:
- TCC API key. If not provided, read from the TCC_API_KEY environment variable.
- Custom endpoint URL. Defaults to TCC's production or development endpoint based on the API key prefix.
- Custom OpenTelemetry span processor to chain with the TCC processor.
- Additional Vercel OTEL configuration options to pass through.
- Debug logging to the console.
- Local-first mode. When enabled, starts a WebSocket server and uses a local span exporter; no API key is required if only local mode is used.
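As a sketch, options might be assembled like this at startup. The debug option name is taken from the usage example later in this page; the exact shape of RegisterOpts is an assumption here, not the package's actual type.

```typescript
// Hypothetical shape of RegisterOpts, inferred from the option descriptions
// above; the package's actual type may include more fields.
type RegisterOpts = {
  apiKey?: string;  // falls back to the TCC_API_KEY environment variable
  debug?: boolean;  // debug logging to the console
};

const opts: RegisterOpts = {
  apiKey: process.env.TCC_API_KEY, // may be undefined; the processor reads the env var itself
  debug: true,
};

// registerOTelTCC(opts); // call once at app startup
console.log(opts.debug); // true
```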
TCCSpanProcessor
The OpenTelemetry span processor class that filters and exports AI SDK spans.
class TCCSpanProcessor implements SpanProcessor {
  constructor(options?: {
    apiKey?: string;
    url?: string;
    baseProcessor?: SpanProcessor;
  });

  onStart(span: Span, parentContext: Context): void;
  onEnd(span: ReadableSpan): void;
  shutdown(): Promise<void>;
  forceFlush(): Promise<void>;
}
Configuration options for the span processor:
- apiKey: TCC API key. If not provided, read from the TCC_API_KEY environment variable.
- baseProcessor: Base processor to chain with.
The processor automatically filters spans, capturing only those whose names start with ai. (the spans emitted by the Vercel AI SDK).
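The filter can be pictured as a simple name-prefix check. This is a sketch of the idea, not the package's actual code:

```typescript
// Sketch: keep only spans emitted by the Vercel AI SDK, whose names
// are prefixed with "ai." (e.g. "ai.generateText", "ai.streamText").
function isAiSdkSpan(spanName: string): boolean {
  return spanName.startsWith('ai.');
}

console.log(isAiSdkSpan('ai.generateText')); // true
console.log(isAiSdkSpan('http.request'));    // false
```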
submitFeedback
Submit user feedback for a specific run.
function submitFeedback(params: {
  runId: string;
  score?: "thumbs_up" | "thumbs_down";
  text?: string;
}): Promise<Response | undefined>
Parameters:
- runId (string, required): The unique identifier of the run to attach feedback to.
- score ("thumbs_up" | "thumbs_down", optional): Feedback score as thumbs up or thumbs down. At least one of score or text must be provided.
- text (string, optional): Text feedback from the user. Maximum 2000 characters. At least one of score or text must be provided.

Returns Promise<Response | undefined>: the fetch Response object if the request succeeds, or undefined if the request fails or a validation error occurs.
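The documented constraints (at least one of score or text; text at most 2000 characters) can be sketched as a client-side pre-check. The helper below is illustrative only, not part of the package, and its messages are made up:

```typescript
type FeedbackParams = {
  runId: string;
  score?: 'thumbs_up' | 'thumbs_down';
  text?: string;
};

// Illustrative validation mirroring the documented rules; the package's
// own validation may differ in wording and behavior.
function validateFeedback(p: FeedbackParams): string | null {
  if (!p.runId) return 'runId is required';
  if (p.score === undefined && p.text === undefined) {
    return 'at least one of score or text must be provided';
  }
  if (p.text !== undefined && p.text.length > 2000) {
    return 'text exceeds 2000 characters';
  }
  return null; // valid
}

console.log(validateFeedback({ runId: 'run_123', score: 'thumbs_up' })); // null
```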
Environment Variables
- TCC_API_KEY: Your Context Company API key. Get one from the dashboard.
- Custom ingestion endpoint URL. Overrides the default endpoint.
- Enable debug logging. Set to true or 1.
Usage Example
import { registerOTelTCC, submitFeedback } from '@contextcompany/otel';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Register once at app startup
registerOTelTCC({ debug: true });

// Use the AI SDK normally - traces are captured automatically
const result = await generateText({
  model: openai('gpt-4'),
  prompt: 'Explain quantum computing in simple terms',
});

console.log(result.text);

// Optionally submit user feedback
await submitFeedback({
  runId: result.runId,
  score: 'thumbs_up',
  text: 'Great explanation!',
});
How It Works
1. The TCCSpanProcessor registers with OpenTelemetry's tracing system.
2. It filters spans, processing only those from the Vercel AI SDK (names prefixed with ai.).
3. Captured spans are batched and exported to The Context Company's ingestion endpoint.
4. Traces appear in your dashboard with full observability into model calls, tokens, costs, and latency.
Next Steps
- AI SDK Integration: Learn more about AI SDK integration patterns.
- View Traces: View your traces in the dashboard.