import * as ai from 'ai';
import { openai } from '@ai-sdk/openai';
import { wrapVercelAI } from 'zeroeval';

const wrappedAI = wrapVercelAI(ai);

const { text } = await wrappedAI.generateText({
  model: openai('gpt-4'),
  prompt: 'Explain quantum computing'
});

Overview

wrapVercelAI() wraps the Vercel AI SDK module to automatically trace all AI operations. The wrapper intercepts SDK functions such as generateText, streamText, generateObject, and embed, adding observability without changing your call sites. If ze.init() hasn't been called and ZEROEVAL_API_KEY is set in your environment, the SDK initializes itself automatically.
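You can also initialize explicitly before wrapping, for example when the API key is not in the environment. The option shape passed to ze.init() below is an assumption for illustration; check your SDK version for the exact signature:

```typescript
import * as ze from 'zeroeval';

// Explicit initialization; the `apiKey` option name is an assumption here.
ze.init({ apiKey: process.env.ZEROEVAL_API_KEY });
```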

Type Signature

function wrapVercelAI<T extends Record<string, any>>(
  aiModule: T
): WrappedVercelAI<T>

Parameters

aiModule (Record<string, any>, required): The Vercel AI SDK module exports (typically import * as ai from 'ai').

Returns

WrappedVercelAI<T> (T & { __zeroeval_wrapped?: boolean }): A wrapped version of the AI SDK with all supported functions instrumented for tracing.

Traced Functions

Text Generation

Functions: generateText(), streamText()

Traces text generation with:
  • Input prompt or messages
  • Generated text output
  • Token usage metrics
  • Throughput calculation
  • Streaming metrics (latency to first token)
  • Tool call information
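As an illustration of the throughput metric, here is a minimal sketch of characters-per-second over the generation window (a hypothetical helper, not the SDK's internal code):

```typescript
// Hypothetical helper: throughput as characters of generated text per second.
function computeThroughput(text: string, startMs: number, endMs: number): number {
  const seconds = Math.max((endMs - startMs) / 1000, 1e-6); // avoid divide-by-zero
  return text.length / seconds;
}

// 500 characters generated in 2 seconds → 250 chars/sec
console.log(computeThroughput('a'.repeat(500), 0, 2000));
```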

Object Generation

Functions: generateObject(), streamObject()

Traces structured object generation with:
  • Input prompt and schema
  • Generated object
  • Token usage metrics

Embeddings

Functions: embed(), embedMany()

Traces embedding generation with:
  • Input text
  • Number of embeddings generated
  • Token usage metrics
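What the wrapper records for embeddings can be pictured with a small sketch (a hypothetical helper, not zeroeval's actual code):

```typescript
// Hypothetical helper: the embedding stats a trace span would record.
function embeddingStats(embeddings: number[][]): { count: number; dimensions: number } {
  return { count: embeddings.length, dimensions: embeddings[0]?.length ?? 0 };
}

// Two 3-dimensional vectors → { count: 2, dimensions: 3 }
console.log(embeddingStats([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]));
```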

Additional Functions

Functions: generateImage(), transcribe(), generateSpeech()

Traces image generation, audio transcription, and speech synthesis operations.

Vercel AI SDK Integration

The wrapper creates traced versions of SDK functions while preserving all original functionality:
import * as ai from 'ai';
import { wrapVercelAI } from 'zeroeval';

const wrappedAI = wrapVercelAI(ai);

// All these functions are traced:
wrappedAI.generateText     // ✓ Traced
wrappedAI.streamText       // ✓ Traced
wrappedAI.generateObject   // ✓ Traced
wrappedAI.streamObject     // ✓ Traced
wrappedAI.embed            // ✓ Traced
wrappedAI.embedMany        // ✓ Traced

// Non-traced exports are passed through:
wrappedAI.openai           // ✓ Available unchanged
wrappedAI.anthropic        // ✓ Available unchanged

Streaming Support

The wrapper fully supports streaming responses from streamText() and streamObject():
const result = await wrappedAI.streamText({
  model: openai('gpt-4'),
  prompt: 'Write a story'
});

// Accessing textStream triggers the tracing wrapper
for await (const chunk of result.textStream) {
  // Chunks are traced as they arrive
}
// Span ends when stream completes
The wrapper intercepts stream access using a Proxy and tracks:
  • Time to first token (latency)
  • Chunk count
  • Full accumulated text
  • Token usage from stream metadata
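The interception above can be sketched as follows. This is an assumed, simplified version of the mechanism: the real wrapper uses a Proxy over the result object, while this sketch wraps only the async iterator itself:

```typescript
// Simplified sketch (not zeroeval's actual internals): wrap an async iterable
// of text chunks to record time-to-first-token, chunk count, and the
// accumulated text; the span would end when the stream is exhausted.
interface StreamMetrics {
  firstTokenMs?: number; // latency to first token
  chunkCount: number;
  fullText: string;
}

function traceTextStream(
  stream: AsyncIterable<string>,
  metrics: StreamMetrics,
  now: () => number = Date.now
): AsyncIterable<string> {
  const start = now();
  return {
    async *[Symbol.asyncIterator]() {
      for await (const chunk of stream) {
        if (metrics.firstTokenMs === undefined) {
          metrics.firstTokenMs = now() - start; // first chunk → latency
        }
        metrics.chunkCount += 1;
        metrics.fullText += chunk;
        yield chunk; // chunks pass through to the caller unchanged
      }
      // the span would be ended here, once the stream completes
    },
  };
}
```

Because the chunks are yielded through unchanged, consuming the wrapped stream behaves exactly like consuming the original one.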

Metadata Extraction

The wrapper processes ZeroEval metadata embedded in prompts or system messages:
const { text } = await wrappedAI.generateText({
  model: openai('gpt-4'),
  messages: [
    {
      role: 'system',
      content: `
<!--zeroeval
task: summarization
prompt_version_id: pv_xyz789
variables:
  max_length: 100
-->
Summarize the following in {{max_length}} words or less.
      `
    },
    { role: 'user', content: 'Long article text...' }
  ]
});
The wrapper:
  1. Extracts metadata from HTML comments
  2. Interpolates template variables
  3. Removes metadata before passing to the AI SDK
  4. Attaches metadata to the trace span
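The four steps above can be sketched with a deliberately simplified parser. It handles flat `key: value` pairs only (the real SDK also handles nested keys such as variables); this is illustrative, not zeroeval's implementation:

```typescript
// Simplified sketch: extract the zeroeval HTML comment, parse flat key/value
// metadata, strip the comment, and interpolate {{variable}} placeholders.
function processPrompt(content: string): {
  metadata: Record<string, string>;
  prompt: string;
} {
  const match = content.match(/<!--zeroeval\n([\s\S]*?)-->/);
  const metadata: Record<string, string> = {};
  let prompt = content;
  if (match) {
    for (const line of match[1].split('\n')) {
      const kv = line.match(/^(\w+):\s*(.+)$/);
      if (kv) metadata[kv[1]] = kv[2];
    }
    prompt = content.replace(match[0], '').trim(); // metadata never reaches the model
  }
  // interpolate {{variable}} placeholders from the extracted metadata
  prompt = prompt.replace(/\{\{(\w+)\}\}/g, (_, key) => metadata[key] ?? `{{${key}}}`);
  return { metadata, prompt };
}
```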

Double-Wrap Protection

Wrapping an already-wrapped module is safe and returns the existing wrapper:
const wrapped1 = wrapVercelAI(ai);
const wrapped2 = wrapVercelAI(wrapped1); // Returns wrapped1
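A sketch of how such a guard can work, based on the documented __zeroeval_wrapped marker (assumed mechanism, not the actual source):

```typescript
// Assumed mechanism: a marker property short-circuits repeated wrapping.
function wrapOnce<T extends Record<string, any>>(
  mod: T
): T & { __zeroeval_wrapped?: boolean } {
  if ((mod as any).__zeroeval_wrapped) return mod; // already wrapped: return as-is
  const wrapped = { ...mod, __zeroeval_wrapped: true };
  // ...instrument generateText, streamText, etc. here...
  return wrapped;
}
```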

Error Tracing

Errors from AI SDK operations are automatically captured:
try {
  await wrappedAI.generateText({
    model: openai('invalid-model'),
    prompt: 'Hello'
  });
} catch (error) {
  // Error is traced with code, message, and stack
}

Span Attributes

Each traced operation includes:
service.name (string): Set to "vercel-ai-sdk"
kind (string): Operation kind: "llm", "embedding", "image", "speech", or "transcription"
provider (string): Set to "vercel-ai-sdk"
model (string): The model ID or provider-specific identifier
messages (array): Messages array (for message-based generation)
streaming (boolean): Set to true for streamText() and streamObject()
temperature (number): Temperature parameter (if provided)
maxTokens (number): Max tokens parameter (if provided)
maxSteps (number): Max agent steps (if provided)
toolCount (number): Number of tools provided
inputTokens (number): Prompt tokens consumed
outputTokens (number): Completion tokens generated
throughput (number): Characters per second
latency (number): Time to first token (streaming only)
chunkCount (number): Number of chunks received (streaming only)
zeroeval (object): Extracted ZeroEval metadata (task, prompt_version_id, variables)
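Taken together, the attributes above suggest a span record roughly shaped like the following TypeScript interface. This is illustrative only; the exact wire format is an implementation detail:

```typescript
// Illustrative shape of a span's attributes; field names follow the list above.
interface VercelAISpanAttributes {
  'service.name': 'vercel-ai-sdk';
  kind: 'llm' | 'embedding' | 'image' | 'speech' | 'transcription';
  provider: 'vercel-ai-sdk';
  model: string;
  messages?: unknown[];   // message-based generation only
  streaming: boolean;
  temperature?: number;
  maxTokens?: number;
  maxSteps?: number;
  toolCount?: number;
  inputTokens?: number;
  outputTokens?: number;
  throughput?: number;    // characters per second
  latency?: number;       // time to first token (streaming only)
  chunkCount?: number;    // streaming only
  zeroeval?: {
    task?: string;
    prompt_version_id?: string;
    variables?: Record<string, unknown>;
  };
}

const example: VercelAISpanAttributes = {
  'service.name': 'vercel-ai-sdk',
  kind: 'llm',
  provider: 'vercel-ai-sdk',
  model: 'gpt-4',
  streaming: false,
  inputTokens: 12,
  outputTokens: 48,
};
```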

Supported Response Types

The wrapper handles all Vercel AI SDK response types:
  • Non-streaming: Traces complete response with usage metrics
  • Streaming: Wraps textStream and fullStream iterators
  • Object generation: Captures structured output from schema validation
  • Embeddings: Records embedding count and dimensions

Related

  • wrap() - Auto-detect and wrap any supported client
  • wrapOpenAI() - OpenAI-specific wrapper
