import { ZeroEvalCallbackHandler } from 'zeroeval/langchain';
import { ChatOpenAI } from '@langchain/openai';

const model = new ChatOpenAI({
  modelName: 'gpt-4',
  callbacks: [new ZeroEvalCallbackHandler()]
});

const response = await model.invoke([
  { role: 'user', content: 'Hello!' }
]);

Overview

The ZeroEval LangChain integration provides automatic tracing for LangChain and LangGraph applications through a callback handler. The handler captures spans for LLM calls, chains, tools, retrievers, and agents.
The LangChain integration is exported from zeroeval/langchain, not the main SDK export.

ZeroEvalCallbackHandler

A callback handler that implements LangChain’s BaseCallbackHandler interface to trace all operations.

Constructor

class ZeroEvalCallbackHandler extends BaseCallbackHandler {
  constructor(options?: ZeroEvalCallbackHandlerOptions)
}

Options

debug (boolean, default: false)
Enable debug mode. When true, includes runId and parentRunId in span attributes.

excludeMetadataProps (RegExp, default: /^(l[sc]_|langgraph_|__pregel_|checkpoint_ns)/)
Regular expression that filters LangChain/LangGraph internal metadata properties out of span attributes.

maxConcurrentSpans (number, default: 1000)
Maximum number of concurrent spans before warnings are issued. Prevents memory issues in high-throughput scenarios.

spanCleanupIntervalMs (number, default: 60000)
Interval in milliseconds for cleaning up orphaned spans (spans that were not properly closed).

Methods

destroy()
() => void
Cleans up resources including the cleanup timer and any active spans. Call this when you’re done with the handler.
const handler = new ZeroEvalCallbackHandler();
// ... use handler
handler.destroy();

Traced Operations

The callback handler automatically traces:

LLM Calls

  • Methods: handleLLMStart, handleLLMEnd, handleLLMError
  • Captures: Model name, prompts, completions, token usage, temperature, and other parameters
  • Span kind: llm

Chat Model Calls

  • Methods: handleChatModelStart
  • Captures: Messages, model parameters, token usage, tool calls
  • Span kind: llm

Chains

  • Methods: handleChainStart, handleChainEnd, handleChainError
  • Captures: Chain inputs, outputs, nested chain execution
  • Span kind: chain

Tools

  • Methods: handleToolStart, handleToolEnd, handleToolError
  • Captures: Tool name, input arguments, output results
  • Span kind: tool

Agents

  • Methods: handleAgentAction, handleAgentEnd
  • Captures: Agent actions, tool selection, final outputs
  • Span kind: agent

Retrievers

  • Methods: handleRetrieverStart, handleRetrieverEnd, handleRetrieverError
  • Captures: Queries, retrieved documents, document count
  • Span kind: retriever

Global Callback Handler

Set a callback handler globally to trace all LangChain operations without passing handlers to individual components.

setGlobalCallbackHandler

function setGlobalCallbackHandler(handler: BaseCallbackHandler): void
Registers a callback handler to be used globally across your application.
handler (BaseCallbackHandler, required)
The callback handler instance to register globally.
Example:
import { 
  setGlobalCallbackHandler, 
  ZeroEvalCallbackHandler 
} from 'zeroeval/langchain';

setGlobalCallbackHandler(new ZeroEvalCallbackHandler({
  debug: true
}));

clearGlobalHandler

function clearGlobalHandler(): void
Removes the global callback handler. Example:
import { clearGlobalHandler } from 'zeroeval/langchain';

clearGlobalHandler();
After calling clearGlobalHandler(), remember to call destroy() on your handler instance to clean up resources:
const handler = new ZeroEvalCallbackHandler();
setGlobalCallbackHandler(handler);

// Later
clearGlobalHandler();
handler.destroy();

getGlobalHandler

function getGlobalHandler(): BaseCallbackHandler | undefined
Retrieves the currently registered global callback handler, if any. Returns: The global handler instance or undefined if no handler is set. Example:
import { getGlobalHandler, ZeroEvalCallbackHandler } from 'zeroeval/langchain';

const handler = getGlobalHandler();
if (handler instanceof ZeroEvalCallbackHandler) {
  console.log('ZeroEval handler is active');
}

Usage Patterns

Per-Component Callbacks

Pass the handler directly to individual components:
import { ZeroEvalCallbackHandler } from 'zeroeval/langchain';
import { ChatOpenAI } from '@langchain/openai';
import { StringOutputParser } from '@langchain/core/output_parsers';
import { ChatPromptTemplate } from '@langchain/core/prompts';

const callbacks = [new ZeroEvalCallbackHandler()];

const model = new ChatOpenAI({ callbacks });
const parser = new StringOutputParser();
const prompt = ChatPromptTemplate.fromTemplate('Tell me a joke about {topic}');

const chain = prompt.pipe(model).pipe(parser);

const result = await chain.invoke(
  { topic: 'cats' },
  { callbacks }
);

Global Callbacks

Set once at startup for application-wide tracing:
// app.ts (startup file)
import { 
  setGlobalCallbackHandler, 
  ZeroEvalCallbackHandler 
} from 'zeroeval/langchain';
import * as ze from 'zeroeval';

ze.init({ apiKey: process.env.ZEROEVAL_API_KEY });
setGlobalCallbackHandler(new ZeroEvalCallbackHandler());

// All subsequent LangChain operations are traced
import { ChatOpenAI } from '@langchain/openai';

const model = new ChatOpenAI({ modelName: 'gpt-4' });
const response = await model.invoke([...]);

LangGraph Tracing

Trace LangGraph state machine executions:
import { ZeroEvalCallbackHandler } from 'zeroeval/langchain';
import { StateGraph, END } from '@langchain/langgraph';
import { ChatOpenAI } from '@langchain/openai';
import { BaseMessage } from '@langchain/core/messages';

interface State {
  messages: BaseMessage[];
}

const workflow = new StateGraph<State>({
  channels: { messages: { reducer: (x, y) => x.concat(y) } }
});

workflow.addNode('agent', async (state) => {
  const model = new ChatOpenAI({ modelName: 'gpt-4' });
  const response = await model.invoke(state.messages);
  return { messages: [response] };
});

workflow.addEdge('agent', END);
workflow.setEntryPoint('agent');

const app = workflow.compile();

const result = await app.invoke(
  { messages: [{ role: 'user', content: 'Hello' }] },
  { callbacks: [new ZeroEvalCallbackHandler()] }
);

Span Attributes

Spans created by the callback handler include:
type (string)
LangChain component type: "llm", "chain", "tool", "agent", or "retriever"

kind (string)
For LLM operations, set to "llm"

provider (string)
For LLM operations, defaults to "openai"

service.name (string)
Service identifier, typically matches provider

model (string)
Model name for LLM/chat operations

messages (array)
Chat messages for chat model operations

temperature (number)
Temperature parameter (if provided)

max_tokens (number)
Max tokens parameter (if provided)

tools (array)
Tool definitions (if provided)

inputTokens (number)
Prompt tokens consumed

outputTokens (number)
Completion tokens generated

throughput (number)
Tokens per second (calculated from duration and token usage)

runId (string)
LangChain run ID (only when debug: true)

parentRunId (string)
Parent run ID for nested operations (only when debug: true)

Performance Optimizations

The callback handler includes several optimizations:
  • Object pooling: Reuses metadata objects to reduce allocations
  • Lazy serialization: Defers JSON serialization until needed
  • Orphan cleanup: Automatically closes spans that weren’t properly ended
  • Metadata filtering: Excludes internal LangChain/LangGraph properties via regex
  • Concurrent span limits: Prevents memory issues in high-throughput scenarios
