Overview
wrapOpenAI() creates a proxy-based wrapper around your OpenAI client that automatically traces all API calls. The wrapper preserves all TypeScript types and client functionality while instrumenting methods for observability.
If ze.init() hasn’t been called and ZEROEVAL_API_KEY is set in your environment, the SDK will automatically initialize itself.
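A minimal sketch of that auto-initialization check (the function name and internal flag here are assumptions, not the SDK's actual internals; in real use the env argument would be `process.env`):

```typescript
// Sketch of the auto-init behavior: initialize once, and only when an
// API key is available. `init` and `initialized` are illustrative
// stand-ins for SDK internals.
let initialized = false;

function init(opts: { apiKey: string }): void {
  initialized = true;
}

function ensureInitialized(env: Record<string, string | undefined>): boolean {
  // Auto-initialize only when an API key is present in the environment.
  if (!initialized && env.ZEROEVAL_API_KEY) {
    init({ apiKey: env.ZEROEVAL_API_KEY });
  }
  return initialized;
}
```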
Type Signature
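The published declaration is not reproduced here; a plausible sketch, assuming a single generic parameter that preserves the client's static type:

```typescript
// Plausible shape of the wrapper's signature (an assumption, not the
// published declaration): the return type equals the input type, so
// all OpenAI client typings survive wrapping. The real wrapper also
// instruments calls; this sketch is a transparent pass-through.
function wrapOpenAI<T extends object>(client: T): T {
  return new Proxy(client, {});
}
```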
Parameters
An instance of the OpenAI client from the openai package.

Returns
A wrapped OpenAI client that preserves all original types and functionality while adding automatic tracing.
Traced Operations
Chat Completions
Method: client.chat.completions.create()
Traces both streaming and non-streaming chat completions with:
- Full input/output capture
- Token usage metrics (inputTokens, outputTokens)
- Throughput calculation (chars/second)
- Streaming metrics (latency to first token)
- Prompt metadata extraction and variable interpolation
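As an illustration, the chars/second throughput metric listed above could be computed like this (the SDK's exact formula is an assumption):

```typescript
// Throughput sketch: completion characters divided by elapsed seconds.
function throughputCharsPerSecond(outputText: string, durationMs: number): number {
  if (durationMs <= 0) return 0;
  return (outputText.length / durationMs) * 1000;
}
```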
Embeddings
Method: client.embeddings.create()
Traces embedding generation with:
- Input text capture
- Model information
- Embedding dimension and count
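A sketch of how those values can be read off an embeddings response; the response shape below is the subset the OpenAI API returns, while the helper itself is illustrative:

```typescript
// Minimal subset of the OpenAI embeddings response shape.
type EmbeddingsResponse = {
  model: string;
  data: { embedding: number[] }[];
};

// Derive the traced values: model, embedding count, and dimension.
function embeddingMetrics(res: EmbeddingsResponse) {
  return {
    model: res.model,
    count: res.data.length,
    dimension: res.data[0]?.embedding.length ?? 0,
  };
}
```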
Images
Methods: client.images.generate(), client.images.edit(), client.images.createVariation()
Audio
Methods: client.audio.transcriptions.create(), client.audio.translations.create()
Proxy-Based Instrumentation
The wrapper uses JavaScript Proxies to intercept method calls without modifying the original client:
- No monkey-patching or prototype modification
- Full type preservation
- No interference with OpenAI SDK internals
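The idea can be sketched as follows; `onCall` stands in for the SDK's span-recording logic, and the original object is never mutated:

```typescript
// Proxy-based interception sketch: function-valued properties are
// wrapped on access so each call is observed, while everything else
// passes through untouched.
function instrument<T extends object>(target: T, onCall: (name: string) => void): T {
  return new Proxy(target, {
    get(obj, prop, receiver) {
      const value = Reflect.get(obj, prop, receiver);
      if (typeof value === "function") {
        return (...args: unknown[]) => {
          onCall(String(prop)); // record the call, stand-in for span creation
          return value.apply(obj, args);
        };
      }
      return value;
    },
  });
}
```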
Streaming Support
wrapOpenAI() fully supports streaming responses. The wrapper:
- Detects when stream: true is set
- Wraps the async iterator returned by OpenAI
- Captures chunks as they arrive
- Records latency to first token
- Calculates throughput after completion
- Extracts usage information from final chunk (when available)
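The core of the streaming wrapper, passing chunks through while recording timing, can be sketched like this (the metric names are illustrative, not the SDK's actual keys):

```typescript
// Wrap an async iterable so the consumer sees identical chunks while
// time-to-first-token and a chunk count are recorded on the side.
async function* traceStream<T>(
  stream: AsyncIterable<T>,
  metrics: { firstTokenMs?: number; chunks: number },
): AsyncGenerator<T> {
  const start = Date.now();
  for await (const chunk of stream) {
    if (metrics.firstTokenMs === undefined) {
      metrics.firstTokenMs = Date.now() - start; // latency to first token
    }
    metrics.chunks += 1;
    yield chunk; // pass chunks through unchanged
  }
}
```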
Metadata Extraction
The wrapper automatically processes ZeroEval metadata embedded in system messages:
- Extracts metadata from the HTML comment
- Interpolates variables in the prompt template
- Removes the metadata comment before sending to OpenAI
- Attaches metadata to the trace span
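A sketch of that flow, assuming JSON metadata inside an HTML comment and {{variable}} placeholders (both formats are assumptions about the SDK's conventions):

```typescript
// Pull a JSON metadata blob out of an HTML comment and return the
// system prompt with the comment removed.
function extractMetadata(systemPrompt: string): {
  metadata: Record<string, string> | null;
  cleaned: string;
} {
  const match = systemPrompt.match(/<!--\s*(\{[\s\S]*?\})\s*-->/);
  if (!match) return { metadata: null, cleaned: systemPrompt };
  return {
    metadata: JSON.parse(match[1]),
    cleaned: systemPrompt.replace(match[0], "").trim(),
  };
}

// Replace {{name}} placeholders with their values; unknown names are kept.
function interpolate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_, k) => vars[k] ?? `{{${k}}}`);
}
```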
Double-Wrap Protection
Calling wrapOpenAI() on an already-wrapped client returns the existing wrapper.
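One common way to implement such a guard is a hidden marker checked before wrapping; the symbol-based mechanism below is an assumption, not the SDK's actual marker:

```typescript
// Double-wrap guard sketch: a hidden symbol identifies an
// already-wrapped client so it is returned unchanged.
const WRAPPED = Symbol("zeroeval.wrapped");

function wrapOnce<T extends object>(client: T): T {
  if ((client as any)[WRAPPED]) return client; // already wrapped
  return new Proxy(client, {
    get(obj, prop, receiver) {
      if (prop === WRAPPED) return true;
      return Reflect.get(obj, prop, receiver);
    },
  });
}
```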
Error Tracing
API errors are automatically captured and attached to spans.

Span Attributes
Each traced operation includes:
- Set to "openai"
- Operation kind: "llm", "embedding", or "operation"
- Set to "openai"
- The model name used in the request
- Serialized messages (for chat completions)
- Whether the request used streaming
- Prompt tokens consumed (from usage data)
- Completion tokens generated (from usage data)
- Characters per second for the completion
- Time to first token (streaming only)
- Extracted ZeroEval metadata (task, prompt_version_id, variables)
Related Functions
- wrap() - Auto-detect and wrap any supported client
- wrapVercelAI() - Vercel AI SDK wrapper