This example demonstrates how to instrument OpenAI’s chat completions API with OpenInference tracing.

Prerequisites

  • Python 3.9+
  • OpenAI API key
  • Phoenix or another OpenTelemetry collector running

Installation

1. Install dependencies

pip install openai \
  openinference-instrumentation-openai \
  opentelemetry-sdk \
  opentelemetry-exporter-otlp
2. Set environment variables

export OPENAI_API_KEY="your-api-key"

Complete Example

import openai
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

from openinference.instrumentation import using_attributes
from openinference.instrumentation.openai import OpenAIInstrumentor

# Configure OpenTelemetry with OTLP exporter
endpoint = "http://127.0.0.1:6006/v1/traces"
tracer_provider = trace_sdk.TracerProvider()
tracer_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))
tracer_provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))

# Instrument OpenAI
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

if __name__ == "__main__":
    client = openai.OpenAI()
    
    # Use context attributes for session tracking, metadata, and prompt templates
    with using_attributes(
        session_id="my-test-session",
        user_id="my-test-user",
        metadata={
            "test-int": 1,
            "test-str": "string",
            "test-list": [1, 2, 3],
            "test-dict": {
                "key-1": "val-1",
                "key-2": "val-2",
            },
        },
        tags=["tag-1", "tag-2"],
        prompt_template="Who won the soccer match in {city} on {date}",
        prompt_template_version="v1.0",
        prompt_template_variables={
            "city": "Johannesburg",
            "date": "July 11th",
        },
    ):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "Write a haiku."}],
            max_tokens=20,
        )
        print(response.choices[0].message.content)

Key Features

Automatic Instrumentation

The OpenAIInstrumentor automatically traces all OpenAI API calls without requiring code changes to your OpenAI usage.

Context Attributes

Use the using_attributes() context manager to add:
  • Session tracking: session_id and user_id
  • Metadata: Custom key-value pairs for filtering and analysis
  • Tags: Labels for categorization
  • Prompt templates: Track template versions and variables

Multiple Exporters

This example uses both:
  • OTLPSpanExporter: Send traces to Phoenix or any OTLP-compatible backend
  • ConsoleSpanExporter: Print traces to console for debugging
