# Available Instrumentations

OpenInference provides automatic instrumentation for popular LLM frameworks and providers in JavaScript:

- **OpenAI**: auto-instrument OpenAI SDK calls
- **LangChain**: auto-instrument LangChain.js applications
- **Anthropic**: auto-instrument the Anthropic SDK
- **AWS Bedrock**: auto-instrument AWS Bedrock calls
## Instrumentation Packages

| Package | Description | Requires Manual Setup |
|---|---|---|
| `@arizeai/openinference-instrumentation-openai` | OpenAI SDK instrumentation | No |
| `@arizeai/openinference-instrumentation-langchain` | LangChain.js (v1.x) instrumentation | Yes |
| `@arizeai/openinference-instrumentation-langchain-v0` | LangChain.js (v0.x) instrumentation | Yes |
| `@arizeai/openinference-instrumentation-anthropic` | Anthropic SDK instrumentation | No |
| `@arizeai/openinference-instrumentation-bedrock` | AWS Bedrock instrumentation | No |
| `@arizeai/openinference-instrumentation-bedrock-agent-runtime` | AWS Bedrock Agent Runtime instrumentation | No |
| `@arizeai/openinference-instrumentation-beeai` | BeeAI framework instrumentation | No |
| `@arizeai/openinference-instrumentation-claude-agent-sdk` | Claude Agent SDK instrumentation | No |
| `@arizeai/openinference-instrumentation-mcp` | MCP (Model Context Protocol) instrumentation | No |
## Installation

Install the OpenTelemetry SDK and your chosen instrumentation:

```shell
npm install --save \
  @opentelemetry/sdk-trace-node \
  @opentelemetry/instrumentation \
  @arizeai/openinference-instrumentation-openai
```
## Basic Usage

Most instrumentations use the standard OpenTelemetry registration pattern:

```typescript
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";

const provider = new NodeTracerProvider();
provider.register();

registerInstrumentations({
  instrumentations: [new OpenAIInstrumentation()],
});
```

Load the instrumentation before importing your application code:

```shell
node -r ./instrumentation.js ./app.js
```
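To avoid retyping the preload flag, the same command can be wired into an npm script (a sketch assuming the `instrumentation.js` and `app.js` file names used in the example above):

```json
{
  "scripts": {
    "start": "node -r ./instrumentation.js ./app.js"
  }
}
```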
## Manual Instrumentation Required

Some frameworks require manual instrumentation because of how their modules are structured:

### LangChain.js

LangChain must be manually instrumented:

```typescript
import { LangChainInstrumentation } from "@arizeai/openinference-instrumentation-langchain";
import * as lcCallbackManager from "@langchain/core/callbacks/manager";

const lcInstrumentation = new LangChainInstrumentation();
lcInstrumentation.manuallyInstrument(lcCallbackManager);
```
### LangChain v0.x

For LangChain 0.x versions:

```typescript
import { LangChainInstrumentation } from "@arizeai/openinference-instrumentation-langchain-v0";
import * as langchain from "langchain/callbacks";

const lcInstrumentation = new LangChainInstrumentation();
lcInstrumentation.manuallyInstrument(langchain);
```
## Multiple Instrumentations

You can register multiple instrumentations simultaneously:

```typescript
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";
import { AnthropicInstrumentation } from "@arizeai/openinference-instrumentation-anthropic";
import { BedrockInstrumentation } from "@arizeai/openinference-instrumentation-bedrock";

registerInstrumentations({
  instrumentations: [
    new OpenAIInstrumentation(),
    new AnthropicInstrumentation(),
    new BedrockInstrumentation(),
  ],
});
```
## Configuration Options

Each instrumentation can be configured with trace config options:

```typescript
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";

registerInstrumentations({
  instrumentations: [
    new OpenAIInstrumentation({
      traceConfig: {
        hideInputs: false,
        hideOutputs: false,
        hideInputMessages: false,
        hideOutputMessages: false,
        hideInputImages: true,
        hideEmbeddingVectors: false,
        base64ImageMaxLength: 32000,
      },
    }),
  ],
});
```
### Common Configuration Options

| Option | Type | Default | Description |
|---|---|---|---|
| `hideInputs` | boolean | `false` | Hide all input values |
| `hideOutputs` | boolean | `false` | Hide all output values |
| `hideInputMessages` | boolean | `false` | Hide LLM input messages |
| `hideOutputMessages` | boolean | `false` | Hide LLM output messages |
| `hideInputImages` | boolean | `false` | Hide input images |
| `hideInputText` | boolean | `false` | Hide input text |
| `hideOutputText` | boolean | `false` | Hide output text |
| `hideEmbeddingVectors` | boolean | `false` | Hide embedding vectors |
| `hidePrompts` | boolean | `false` | Hide prompt templates |
| `base64ImageMaxLength` | number | `32000` | Max base64 image length |
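The options above can be sketched as a TypeScript type, with user-supplied values overlaid on the documented defaults. This is an illustration of the shape only; the types the package actually exports may differ.

```typescript
// Sketch of the trace config options from the table above.
interface TraceConfig {
  hideInputs: boolean;
  hideOutputs: boolean;
  hideInputMessages: boolean;
  hideOutputMessages: boolean;
  hideInputImages: boolean;
  hideInputText: boolean;
  hideOutputText: boolean;
  hideEmbeddingVectors: boolean;
  hidePrompts: boolean;
  base64ImageMaxLength: number;
}

// Defaults from the table: everything visible, images capped at 32000 chars.
const DEFAULT_TRACE_CONFIG: TraceConfig = {
  hideInputs: false,
  hideOutputs: false,
  hideInputMessages: false,
  hideOutputMessages: false,
  hideInputImages: false,
  hideInputText: false,
  hideOutputText: false,
  hideEmbeddingVectors: false,
  hidePrompts: false,
  base64ImageMaxLength: 32000,
};

// Overlay partial user overrides on the defaults.
function resolveTraceConfig(overrides: Partial<TraceConfig> = {}): TraceConfig {
  return { ...DEFAULT_TRACE_CONFIG, ...overrides };
}
```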
## Environment Variables

Configure instrumentation behavior via environment variables:

```shell
OPENINFERENCE_HIDE_INPUTS=false
OPENINFERENCE_HIDE_OUTPUTS=false
OPENINFERENCE_HIDE_INPUT_MESSAGES=false
OPENINFERENCE_HIDE_OUTPUT_MESSAGES=false
OPENINFERENCE_HIDE_INPUT_IMAGES=true
OPENINFERENCE_HIDE_INPUT_TEXT=false
OPENINFERENCE_HIDE_OUTPUT_TEXT=false
OPENINFERENCE_HIDE_EMBEDDING_VECTORS=false
OPENINFERENCE_HIDE_PROMPTS=false
OPENINFERENCE_BASE64_IMAGE_MAX_LENGTH=32000
```
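To see how such variables map onto a config, here is a minimal sketch of reading them in Node.js. This is illustrative only; the library performs its own parsing internally, and the helper names below are hypothetical.

```typescript
// Parse a boolean-valued environment variable, falling back to a default.
function envBool(name: string, fallback: boolean): boolean {
  const raw = process.env[name];
  if (raw === undefined) return fallback;
  return raw.toLowerCase() === "true";
}

// Parse a numeric environment variable, falling back to a default.
function envNumber(name: string, fallback: number): number {
  const raw = process.env[name];
  if (raw === undefined) return fallback;
  const parsed = Number(raw);
  return Number.isNaN(parsed) ? fallback : parsed;
}

// Example: assemble a masking config from the variables listed above.
const maskingConfig = {
  hideInputs: envBool("OPENINFERENCE_HIDE_INPUTS", false),
  hideInputImages: envBool("OPENINFERENCE_HIDE_INPUT_IMAGES", false),
  base64ImageMaxLength: envNumber("OPENINFERENCE_BASE64_IMAGE_MAX_LENGTH", 32000),
};
```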
## Suppressing Tracing

All instrumentations respect the OpenTelemetry `isTracingSuppressed()` flag:

```typescript
import { context } from "@opentelemetry/api";
import { suppressTracing } from "@opentelemetry/core";
import OpenAI from "openai";

const client = new OpenAI();

// This call will not be traced
context.with(suppressTracing(context.active()), async () => {
  await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Hello" }],
  });
});
```
## Context Propagation

All instrumentations automatically propagate context attributes set via `@arizeai/openinference-core`:

```typescript
import { context } from "@opentelemetry/api";
import { setSession, setUser } from "@arizeai/openinference-core";
import OpenAI from "openai";

const client = new OpenAI();

const enrichedContext = setUser(
  setSession(context.active(), { sessionId: "sess-123" }),
  { userId: "user-456" }
);

context.with(enrichedContext, async () => {
  // Session and user IDs will appear in the trace
  await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Hello" }],
  });
});
```
## Complete Example

**instrumentation.js**

```typescript
import { diag, DiagConsoleLogger, DiagLogLevel } from "@opentelemetry/api";
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { Resource } from "@opentelemetry/resources";
import { NodeTracerProvider, SimpleSpanProcessor } from "@opentelemetry/sdk-trace-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import { SemanticResourceAttributes } from "@opentelemetry/semantic-conventions";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";

// Enable debug logging (optional)
diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.INFO);

// Configure provider
const provider = new NodeTracerProvider({
  resource: new Resource({
    [SemanticResourceAttributes.SERVICE_NAME]: "my-ai-app",
  }),
});

// Add OTLP exporter
provider.addSpanProcessor(
  new SimpleSpanProcessor(
    new OTLPTraceExporter({
      url: process.env.COLLECTOR_ENDPOINT || "http://localhost:6006/v1/traces",
    })
  )
);

// Register instrumentations
registerInstrumentations({
  instrumentations: [
    new OpenAIInstrumentation({
      traceConfig: {
        hideInputImages: true,
      },
    }),
  ],
});

// Register provider
provider.register();

console.log("OpenInference instrumentation initialized");
```

**app.js**

```typescript
import OpenAI from "openai";

const client = new OpenAI();

async function main() {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "What is OpenInference?" }],
  });
  console.log(response.choices[0].message.content);
}

main();
```

Run the app with the instrumentation preloaded:

```shell
node -r ./instrumentation.js ./app.js
```
## Testing Instrumentations

When developing or testing instrumented applications:

**Use a console exporter for local development:**

```typescript
import { ConsoleSpanExporter, SimpleSpanProcessor } from "@opentelemetry/sdk-trace-base";

provider.addSpanProcessor(
  new SimpleSpanProcessor(new ConsoleSpanExporter())
);
```

**Enable debug logging to troubleshoot issues:**

```typescript
import { diag, DiagConsoleLogger, DiagLogLevel } from "@opentelemetry/api";

diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.DEBUG);
```

**Verify context propagation:**

```typescript
import { context, trace } from "@opentelemetry/api";

const span = trace.getTracer("test").startSpan("test");
const ctx = trace.setSpan(context.active(), span);

context.with(ctx, () => {
  // Your instrumented code here
});

span.end();
```
## Best Practices

1. **Load instrumentation first**: always load your instrumentation file before importing application code, using the `-r` flag.
2. **Use environment variables**: configure data masking via environment variables for flexibility across environments.
3. **Suppress tracing for sensitive operations**: wrap calls that must not be traced in `suppressTracing()`.
4. **Batch exports in production**: use `BatchSpanProcessor` instead of `SimpleSpanProcessor` for better performance:

   ```typescript
   import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-node";

   provider.addSpanProcessor(
     new BatchSpanProcessor(new OTLPTraceExporter())
   );
   ```
## Next Steps

- **Core Package**: use `OITracer` and context attributes
- **OpenAI**: OpenAI-specific instrumentation docs
- **LangChain**: LangChain-specific instrumentation docs
- **Examples**: view complete examples