The Vercel AI SDK integration wraps AI SDK functions to automatically trace text generation, object generation, embeddings, and more.
## Installation

```bash
npm install ai @ai-sdk/openai zeroeval
```
## Basic usage

Wrap the entire `ai` module with `wrapVercelAI()` or the auto-detecting `wrap()` function:

```typescript
import * as ai from 'ai';
import { openai } from '@ai-sdk/openai';
import { wrapVercelAI } from 'zeroeval';

const wrappedAI = wrapVercelAI(ai);

// All AI SDK calls are now automatically traced
const { text } = await wrappedAI.generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'Write a haiku about TypeScript.'
});
```
If `ZEROEVAL_API_KEY` is set in your environment, the SDK initializes automatically. Otherwise, call `ze.init({ apiKey: 'your-key' })` before using the wrapper.
## What gets traced

The Vercel AI wrapper automatically captures:

### Text generation

- Input prompt or messages
- Model name and parameters (`temperature`, `maxTokens`, etc.)
- Response text
- Token usage (prompt tokens, completion tokens)
- Latency (time to first token for streaming)
- Throughput (characters per second)
- Tool usage

### Object generation

- Input prompt and schema
- Model name and parameters
- Generated object (JSON)
- Token usage

### Embeddings

- Input text or array of texts
- Model name
- Number of embeddings generated
- Token usage
## Supported functions

The wrapper traces the following AI SDK functions:

| Function | Traced As | Kind |
|---|---|---|
| `generateText` | `vercelai.generateText` | llm |
| `streamText` | `vercelai.streamText` | llm |
| `generateObject` | `vercelai.generateObject` | llm |
| `streamObject` | `vercelai.streamObject` | llm |
| `embed` | `vercelai.embed` | embedding |
| `embedMany` | `vercelai.embedMany` | embedding |
| `generateImage` | `vercelai.generateImage` | image |
| `generateSpeech` | `vercelai.generateSpeech` | speech |
| `transcribe` | `vercelai.transcribe` | transcription |
## Text generation

Generate text with automatic tracing:

```typescript
import * as ai from 'ai';
import { openai } from '@ai-sdk/openai';
import * as ze from 'zeroeval';

const wrappedAI = ze.wrap(ai);

const { text, usage } = await wrappedAI.generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'Explain async/await in JavaScript.',
  temperature: 0.7,
  maxTokens: 200
});

console.log(text);
console.log('Tokens used:', usage);
```
## Streaming text

Streaming is fully supported with automatic metric capture:

```typescript
const { textStream } = await wrappedAI.streamText({
  model: openai('gpt-4o-mini'),
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Count from 1 to 5' }
  ]
});

for await (const chunk of textStream) {
  process.stdout.write(chunk);
}
```
Streaming metrics captured:

- Time to first token (latency)
- Total response time
- Throughput (characters per second)
- Full accumulated response text
- Token usage (from usage chunks)
- Chunk count

The wrapper supports both `textStream` and `fullStream` iterators, as well as the `toDataStreamResponse()`, `toDataStream()`, and legacy `consumeStream()` methods.
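Conceptually, these streaming metrics can be derived by timing an async iterator. The sketch below is purely illustrative, not ZeroEval's implementation: `simulatedStream` stands in for the AI SDK's `textStream`, and `measureStream` is a hypothetical helper showing how time to first token, throughput, accumulated text, and chunk count could be computed.

```typescript
// Hypothetical sketch of streaming metric capture (not the real wrapper).
// A simulated async generator stands in for the AI SDK's textStream.
async function* simulatedStream(): AsyncGenerator<string> {
  for (const chunk of ['Hello', ', ', 'world', '!']) {
    yield chunk;
  }
}

interface StreamMetrics {
  timeToFirstTokenMs: number;
  totalTimeMs: number;
  charsPerSecond: number;
  fullText: string;
  chunkCount: number;
}

async function measureStream(stream: AsyncIterable<string>): Promise<StreamMetrics> {
  const start = Date.now();
  let firstTokenAt: number | null = null;
  let fullText = '';
  let chunkCount = 0;

  for await (const chunk of stream) {
    if (firstTokenAt === null) firstTokenAt = Date.now(); // time to first token
    fullText += chunk;
    chunkCount += 1;
  }

  const totalTimeMs = Date.now() - start;
  return {
    timeToFirstTokenMs: (firstTokenAt ?? start) - start,
    totalTimeMs,
    // Math.max guards against division by zero for very fast streams
    charsPerSecond: (fullText.length / Math.max(totalTimeMs, 1)) * 1000,
    fullText,
    chunkCount
  };
}

const metrics = await measureStream(simulatedStream());
console.log(metrics.fullText);   // Hello, world!
console.log(metrics.chunkCount); // 4
```

The key point of the design is that the consumer's `for await` loop is unchanged; the measurement wraps around iteration rather than replacing it.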
## Generating objects

Generate structured objects with a JSON schema:

```typescript
const { object } = await wrappedAI.generateObject({
  model: openai('gpt-4o-mini'),
  schema: ai.jsonSchema({
    type: 'object',
    properties: {
      name: { type: 'string' },
      age: { type: 'number' },
      hobbies: { type: 'array', items: { type: 'string' } }
    },
    required: ['name', 'age', 'hobbies']
  }),
  prompt: 'Generate a person profile for a software developer.'
});

console.log(object);
```

The generated object is captured in the span as JSON.
## Embeddings

Create embeddings with automatic tracing:

```typescript
import { openai } from '@ai-sdk/openai';

const { embedding } = await wrappedAI.embed({
  model: openai.embedding('text-embedding-3-small'),
  value: 'TypeScript is a typed superset of JavaScript.'
});

console.log('Embedding dimensions:', embedding.length);
```
## Tool usage

Tools are automatically traced with full context:

```typescript
const { text, toolCalls, toolResults } = await wrappedAI.generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'What is the weather in San Francisco?',
  tools: {
    getWeather: ai.tool({
      description: 'Get the weather for a location',
      parameters: ai.jsonSchema({
        type: 'object',
        properties: {
          location: { type: 'string' }
        },
        required: ['location']
      }),
      execute: async (args) => {
        const { location } = args as { location: string };
        return {
          location,
          temperature: 72,
          condition: 'sunny'
        };
      }
    })
  },
  maxSteps: 3
});

console.log('Response:', text);
console.log('Tool calls:', toolCalls);
```

The span attributes include `toolCount` and tool usage details.
## Prompt and messages input

The wrapper supports both `prompt` and `messages` input:

```typescript
// Prompt-based
const result1 = await wrappedAI.generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'Hello!'
});

// Message-based
const result2 = await wrappedAI.generateText({
  model: openai('gpt-4o-mini'),
  messages: [
    { role: 'system', content: 'You are helpful.' },
    { role: 'user', content: 'Hello!' }
  ]
});
```

Both are traced with full input/output capture.
## Prompt metadata

The wrapper automatically extracts ZeroEval metadata from prompts and messages:

```typescript
const { text } = await wrappedAI.generateText({
  model: openai('gpt-4o-mini'),
  prompt: `<zeroeval task="greeting" variables='{"name":"Alice"}'>Greet {{name}}.</zeroeval>`
});
```

The wrapper will:

- Extract metadata (`task`, `variables`)
- Strip the `<zeroeval>` tags
- Interpolate variables like `{{name}}`
- Attach metadata to the span

See the Prompts guide for more details.
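The extract / strip / interpolate steps can be illustrated with a small, self-contained sketch. This is hypothetical: `processPrompt` is not a ZeroEval export, and the real parsing lives inside the package; the sketch only mirrors the behavior described above for the exact tag shape shown.

```typescript
// Hypothetical sketch of <zeroeval> tag handling (not ZeroEval's parser).
interface PromptMetadata {
  task?: string;
  variables?: Record<string, string>;
}

function processPrompt(prompt: string): { prompt: string; metadata: PromptMetadata } {
  // Match the tag form used in the example: task="..." variables='...'
  const match = prompt.match(
    /<zeroeval\s+task="([^"]*)"\s+variables='([^']*)'>([\s\S]*?)<\/zeroeval>/
  );
  if (!match) return { prompt, metadata: {} };

  const [, task, rawVariables, inner] = match;
  const variables = JSON.parse(rawVariables) as Record<string, string>;

  // Interpolate {{name}}-style placeholders using the extracted variables
  const interpolated = inner.replace(/\{\{(\w+)\}\}/g, (_, key) => variables[key] ?? '');

  return { prompt: interpolated, metadata: { task, variables } };
}

const result = processPrompt(
  `<zeroeval task="greeting" variables='{"name":"Alice"}'>Greet {{name}}.</zeroeval>`
);
console.log(result.prompt);        // Greet Alice.
console.log(result.metadata.task); // greeting
```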
## Using with AI SDK UI

The wrapper works seamlessly with AI SDK UI patterns:

```typescript
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import * as ze from 'zeroeval';

const wrappedAI = ze.wrap({ streamText });

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await wrappedAI.streamText({
    model: openai('gpt-4o-mini'),
    messages
  });

  return result.toDataStreamResponse();
}
```
## Error handling

Errors are automatically captured in spans:

```typescript
try {
  await wrappedAI.generateText({
    model: openai('invalid-model'),
    prompt: 'Hello'
  });
} catch (error) {
  // Error is traced with code, message, and stack trace
  console.error(error);
}
```
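The capture-and-rethrow pattern behind this can be sketched generically. Everything in the block below (`traced`, `SpanRecord`, the `spans` array) is hypothetical scaffolding for illustration, not ZeroEval's API: the point is that the failure is recorded on a span-like object and the original error still propagates to the caller.

```typescript
// Hypothetical sketch of error capture in a tracing wrapper.
interface SpanRecord {
  name: string;
  error?: { message: string; stack?: string };
}

const spans: SpanRecord[] = [];

function traced<A extends unknown[], R>(
  name: string,
  fn: (...args: A) => Promise<R>
): (...args: A) => Promise<R> {
  return async (...args: A) => {
    const span: SpanRecord = { name };
    spans.push(span);
    try {
      return await fn(...args);
    } catch (err) {
      const e = err as Error;
      span.error = { message: e.message, stack: e.stack };
      throw err; // rethrow: tracing must never swallow errors
    }
  };
}

// Demo with a function that always fails
const failing = traced('vercelai.generateText', async () => {
  throw new Error('invalid model');
});

await failing().catch(() => {
  console.log(spans[0].error?.message); // invalid model
});
```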
## Example

Here's a complete example from the SDK repository:

```typescript
import * as ai from 'ai';
import { openai } from '@ai-sdk/openai';
import * as ze from 'zeroeval';

const wrappedAI = ze.wrap(ai);

// Generate text
const { text, usage } = await wrappedAI.generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'Write a haiku about TypeScript.',
  temperature: 0.7,
  maxTokens: 100
});
console.log(text);

// Stream text
const { textStream } = await wrappedAI.streamText({
  model: openai('gpt-4o-mini'),
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Count from 1 to 5' }
  ]
});
for await (const chunk of textStream) {
  process.stdout.write(chunk);
}

// Generate object
const { object } = await wrappedAI.generateObject({
  model: openai('gpt-4o-mini'),
  schema: ai.jsonSchema({
    type: 'object',
    properties: {
      name: { type: 'string' },
      age: { type: 'number' },
      hobbies: { type: 'array', items: { type: 'string' } }
    },
    required: ['name', 'age', 'hobbies']
  }),
  prompt: 'Generate a person profile.'
});
console.log(object);

// Embeddings
const { embedding } = await wrappedAI.embed({
  model: openai.embedding('text-embedding-3-small'),
  value: 'TypeScript is great.'
});
console.log('Dimensions:', embedding.length);
```
## API reference

### wrapVercelAI(aiModule)

Wraps Vercel AI SDK module exports to automatically trace all function calls.

**Parameters:**

- `aiModule` - The AI SDK module (e.g., `import * as ai from 'ai'`)

**Returns:**

Wrapped module with the same exports and types.

**Example:**

```typescript
import * as ai from 'ai';
import { wrapVercelAI } from 'zeroeval';

const wrappedAI = wrapVercelAI(ai);
```

You can also use the auto-detecting `wrap()` function:

```typescript
const wrappedAI = ze.wrap(ai);
```
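Conceptually, returning a wrapped module "with the same exports and types" can be done with a `Proxy` that passes everything through while intercepting function calls. The sketch below is purely illustrative, with a fake module standing in for `ai` and a plain `calls` array standing in for real span creation; ZeroEval's actual wrapper is more involved.

```typescript
// Hypothetical sketch of module wrapping via Proxy (not ZeroEval's internals).
const calls: string[] = [];

function wrapModule<T extends object>(mod: T): T {
  return new Proxy(mod, {
    get(target, prop, receiver) {
      const value = Reflect.get(target, prop, receiver);
      if (typeof value === 'function' && typeof prop === 'string') {
        return (...args: unknown[]) => {
          calls.push(prop); // record which export was invoked
          return (value as (...a: unknown[]) => unknown).apply(target, args);
        };
      }
      return value; // non-function exports pass through unchanged
    }
  });
}

// Demo with a fake module in place of `ai`
const fakeAI = {
  generateText: (prompt: string) => `echo: ${prompt}`,
  VERSION: '1.0.0'
};

const wrapped = wrapModule(fakeAI);
console.log(wrapped.generateText('hi')); // echo: hi
console.log(wrapped.VERSION);            // 1.0.0
console.log(calls);                      // [ 'generateText' ]
```

Because the `Proxy` has type `T`, callers keep the original module's TypeScript types, which matches the documented "same exports and types" guarantee.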
## Next steps