This feature is in beta and may change in future releases.
The wrapGemini() function wraps a Google Gemini client to enable automatic LangSmith tracing for all content generation.
Installation
```bash
npm install langsmith @google/genai
```
Basic usage
```typescript
import { GoogleGenAI } from "@google/genai";
import { wrapGemini } from "langsmith/wrappers/gemini";

const client = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
const wrapped = wrapGemini(client);

const response = await wrapped.models.generateContent({
  model: "gemini-2.0-flash-exp",
  contents: "Hello!",
});
```
Signature
```typescript
function wrapGemini<T extends GoogleGenAIType>(
  gemini: T,
  options?: Partial<RunTreeConfig>
): PatchedGeminiClient<T>
```
Parameters:
- `gemini`: a Google GenAI client instance to wrap.
- `options`: LangSmith tracing options (same as other wrappers).

Returns: the wrapped Gemini client with automatic tracing.
Supported methods
The wrapper automatically traces:
- `models.generateContent()` - content generation (non-streaming)
- `models.generateContentStream()` - streaming content generation
Features
The wrapper automatically extracts:
- Provider ("google")
- Model name
- Temperature
- Max tokens
- Usage metadata (input/output tokens, cached tokens, reasoning tokens)
Gemini’s format is automatically transformed to a standardized message format:
```typescript
const response = await wrapped.models.generateContent({
  model: "gemini-2.0-flash-exp",
  contents: [
    {
      role: "user",
      parts: [{ text: "What is LangSmith?" }],
    },
  ],
});

// In LangSmith, this appears as:
// messages: [
//   { role: "user", content: "What is LangSmith?" }
// ]
```
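As a rough illustration only (this is not the wrapper's actual source, and the helper name and types below are invented for the sketch), the normalization of Gemini `contents` into standardized messages can be pictured like this:

```typescript
// Hypothetical sketch of the contents-to-messages normalization.
// GeminiContent mirrors the shape of a @google/genai request content entry.
type GeminiContent = {
  role: string;
  parts: { text?: string }[];
};

type Message = { role: string; content: string };

// Collapse each content entry's text parts into a single message string.
function toStandardMessages(contents: GeminiContent[]): Message[] {
  return contents.map((c) => ({
    role: c.role,
    content: c.parts
      .map((p) => p.text ?? "")
      .filter((t) => t.length > 0)
      .join("\n"),
  }));
}

const messages = toStandardMessages([
  { role: "user", parts: [{ text: "What is LangSmith?" }] },
]);
console.log(messages);
// [ { role: "user", content: "What is LangSmith?" } ]
```

Non-text parts (images, function calls) are handled separately by the real wrapper; this sketch only covers the text case shown above.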
Multimodal support
Images and other media are automatically handled:
```typescript
// base64ImageData is a base64-encoded image string you supply.
const response = await wrapped.models.generateContent({
  model: "gemini-2.0-flash-exp",
  contents: [
    {
      role: "user",
      parts: [
        { text: "What's in this image?" },
        {
          inlineData: {
            mimeType: "image/jpeg",
            data: base64ImageData,
          },
        },
      ],
    },
  ],
});

// Images are preserved in the trace
```
Streaming support
```typescript
const stream = await wrapped.models.generateContentStream({
  model: "gemini-2.0-flash-exp",
  contents: "Count to 5",
});

for await (const chunk of stream) {
  process.stdout.write(chunk.text || "");
}
```
Token usage is automatically tracked:
```typescript
const response = await wrapped.models.generateContent({
  model: "gemini-2.0-flash-exp",
  contents: "Hello!",
});

// Usage metadata automatically includes:
// - input_tokens (promptTokenCount)
// - output_tokens (candidatesTokenCount)
// - total_tokens
// - cache_read (cachedContentTokenCount)
// - reasoning tokens (thoughtsTokenCount for thinking models)
```
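For illustration only (the Gemini-side field names come from the SDK's `usageMetadata`; the mapping function itself is invented for this sketch), the translation into LangSmith's usage fields amounts to roughly:

```typescript
// Gemini-side usage metadata, as returned on responses by @google/genai.
type GeminiUsage = {
  promptTokenCount?: number;
  candidatesTokenCount?: number;
  totalTokenCount?: number;
  cachedContentTokenCount?: number;
  thoughtsTokenCount?: number;
};

// Hypothetical sketch of the field mapping listed above.
function toLangSmithUsage(u: GeminiUsage) {
  return {
    input_tokens: u.promptTokenCount ?? 0,
    output_tokens: u.candidatesTokenCount ?? 0,
    total_tokens: u.totalTokenCount ?? 0,
    cache_read: u.cachedContentTokenCount ?? 0,
    reasoning_tokens: u.thoughtsTokenCount ?? 0,
  };
}

console.log(
  toLangSmithUsage({ promptTokenCount: 12, candidatesTokenCount: 30, totalTokenCount: 42 })
);
// { input_tokens: 12, output_tokens: 30, total_tokens: 42, cache_read: 0, reasoning_tokens: 0 }
```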
Image generation
Supports image generation with gemini-2.5-flash-image:
```typescript
const response = await wrapped.models.generateContent({
  model: "gemini-2.5-flash-image",
  contents: "A beautiful sunset over mountains",
});

// Image outputs are captured in the trace
```
Function calling
```typescript
const response = await wrapped.models.generateContent({
  model: "gemini-2.0-flash-exp",
  contents: "What's the weather in Paris?",
  config: {
    tools: [
      {
        functionDeclarations: [
          {
            name: "get_weather",
            description: "Get weather for a location",
            parameters: {
              type: "object",
              properties: {
                location: { type: "string" },
              },
            },
          },
        ],
      },
    ],
  },
});

// candidates, content, and parts may be undefined, so use optional chaining
const functionCall = response.candidates?.[0]?.content?.parts?.find(
  (p) => "functionCall" in p
);
```
Complete example
```typescript
import { GoogleGenAI } from "@google/genai";
import { wrapGemini } from "langsmith/wrappers/gemini";

const client = new GoogleGenAI({
  apiKey: process.env.GEMINI_API_KEY,
});

const wrapped = wrapGemini(client, {
  project_name: "my-gemini-project",
  tags: ["production"],
});

// Text generation
const response = await wrapped.models.generateContent({
  model: "gemini-2.0-flash-exp",
  contents: "What is LangSmith?",
  config: {
    temperature: 0.7,
    maxOutputTokens: 1000,
  },
});
console.log(response.text);

// Streaming
const stream = await wrapped.models.generateContentStream({
  model: "gemini-2.0-flash-exp",
  contents: "Count to 10",
});
for await (const chunk of stream) {
  process.stdout.write(chunk.text || "");
}

// Multimodal with image (base64ImageData is a base64-encoded image string)
const imageResponse = await wrapped.models.generateContent({
  model: "gemini-2.0-flash-exp",
  contents: [
    {
      role: "user",
      parts: [
        { text: "Describe this image" },
        {
          inlineData: {
            mimeType: "image/jpeg",
            data: base64ImageData,
          },
        },
      ],
    },
  ],
});
console.log(imageResponse.text);
```
Thinking models
Supports thinking/reasoning models with thought tracking:
```typescript
const response = await wrapped.models.generateContent({
  model: "gemini-2.0-flash-thinking-exp",
  contents: "Solve this complex problem...",
});

// Thought content is tracked separately in usage metadata
```
Nested tracing
```typescript
import { traceable } from "langsmith/traceable";

const myChain = traceable(
  async (input: string) => {
    const response = await wrapped.models.generateContent({
      model: "gemini-2.0-flash-exp",
      contents: input,
    });
    return response.text;
  },
  { name: "my-chain", run_type: "chain" }
);

await myChain("Hello!");
```
Notes
- The wrapper preserves all original Google GenAI SDK functionality
- Wrapping a client multiple times will throw an error
- All traced calls use `run_type: "llm"`
- This is a beta feature and may change in future releases
- Supports text generation, multimodal inputs, image generation, and function calling
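Since wrapping twice throws, a wrap-once guard is the pattern to keep in mind. The sketch below is a hypothetical illustration of that behavior in self-contained form (it does not use `wrapGemini` itself; the `wrapOnce` helper and its marker are invented):

```typescript
// Hypothetical wrap-once guard, similar in spirit to the error
// wrapGemini throws when the same client is wrapped a second time.
const WRAPPED = Symbol("wrapped");

function wrapOnce<T extends object>(client: T): T {
  if ((client as any)[WRAPPED]) {
    throw new Error("Client is already wrapped");
  }
  (client as any)[WRAPPED] = true;
  return client;
}

const c = {};
wrapOnce(c); // ok
try {
  wrapOnce(c); // second wrap throws
} catch (e) {
  console.log((e as Error).message);
  // "Client is already wrapped"
}
```

In practice this means wrapping the client once at module scope and exporting the wrapped instance, rather than wrapping inside each call site.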