Phoenix is built on OpenTelemetry, the industry-standard observability framework. This means you can use OpenTelemetry’s powerful primitives to build custom instrumentation for any LLM application, regardless of the framework or SDK.

How Phoenix Uses OpenTelemetry

Phoenix leverages OpenTelemetry in several key ways:
  1. OTLP Protocol: Phoenix receives traces via the OpenTelemetry Protocol (OTLP)
  2. OpenInference Conventions: Traces follow semantic conventions optimized for LLM observability
  3. Auto-instrumentation: Framework integrations use OpenTelemetry instrumentors
  4. Custom spans: You can create custom spans using OpenTelemetry APIs

OTLP Endpoints

Phoenix exposes OTLP endpoints for receiving trace data:
| Protocol | Endpoint                        | Default Port |
| -------- | ------------------------------- | ------------ |
| HTTP     | http://localhost:6006/v1/traces | 6006         |
| gRPC     | grpc://localhost:4317           | 4317         |

Configuration

You can configure the OTLP exporter when registering Phoenix:
from phoenix.otel import register

tracer_provider = register(
    endpoint="http://localhost:6006/v1/traces",
    protocol="http/protobuf",  # or "grpc"
    project_name="my-app"
)

Environment Variables

You can also configure OTLP using standard OpenTelemetry environment variables:
# HTTP Protocol
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:6006
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf

# gRPC Protocol
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
export OTEL_EXPORTER_OTLP_PROTOCOL=grpc

# Authentication (if using cloud Phoenix)
export OTEL_EXPORTER_OTLP_HEADERS="api_key=your-api-key"

Custom Instrumentation

While Phoenix provides auto-instrumentation for popular frameworks, you can create custom spans for any code using OpenTelemetry.

Basic Custom Spans

from phoenix.otel import register
from opentelemetry import trace
from opentelemetry.trace import Status, StatusCode

# Register Phoenix
tracer_provider = register(project_name="custom-app")
tracer = tracer_provider.get_tracer(__name__)

# Create a custom span
with tracer.start_as_current_span("my-custom-operation") as span:
    span.set_attribute("custom.attribute", "value")
    
    # Your code here
    result = perform_operation()
    
    span.set_attribute("result", result)
    span.set_status(Status(StatusCode.OK))

OpenInference Span Kinds

For LLM applications, use OpenInference semantic conventions so Phoenix can render rich, span-kind-specific views in its UI:
from phoenix.otel import register
from opentelemetry import trace
from opentelemetry.trace import Status, StatusCode

tracer_provider = register()
tracer = tracer_provider.get_tracer(__name__)

# LLM Span
with tracer.start_as_current_span(
    "llm-call",
    attributes={
        "openinference.span.kind": "LLM",
        "llm.model_name": "gpt-4",
        "llm.invocation_parameters": '{"temperature": 0.7}',
    }
) as span:
    response = call_llm()
    span.set_attribute("llm.token_count.total", response.usage.total_tokens)
    span.set_status(Status(StatusCode.OK))

# Retriever Span
with tracer.start_as_current_span(
    "retrieval",
    attributes={
        "openinference.span.kind": "RETRIEVER",
    }
) as span:
    docs = retrieve_documents(query)
    span.set_attribute("retrieval.documents", str([doc.text for doc in docs]))

Available Span Kinds

| Span Kind | Description              | Use Case                       |
| --------- | ------------------------ | ------------------------------ |
| LLM       | Language model inference | GPT calls, Claude calls        |
| CHAIN     | Sequence of operations   | LangChain chains, pipelines    |
| AGENT     | Autonomous reasoning     | ReAct agents, function calling |
| TOOL      | External function call   | API calls, calculations        |
| RETRIEVER | Document retrieval       | Vector search, database query  |
| EMBEDDING | Embedding generation     | Text embeddings                |
| RERANKER  | Document reranking       | Cross-encoder reranking        |
| GUARDRAIL | Safety checks            | Content filtering              |
| EVALUATOR | Quality assessment       | LLM-as-a-judge                 |

Using Helper Functions

Phoenix provides helper decorators and context managers:
from phoenix.otel import register

tracer_provider = register()
tracer = tracer_provider.get_tracer(__name__)

# Decorator for functions
@tracer.chain
def my_chain(input_text: str) -> str:
    """This function will automatically be traced as a CHAIN span."""
    return process(input_text)

@tracer.tool
def my_tool(query: str) -> dict:
    """This function will automatically be traced as a TOOL span."""
    return {"result": query_api(query)}

@tracer.llm
def my_llm_call(prompt: str) -> str:
    """This function will automatically be traced as an LLM span."""
    return call_llm(prompt)

Headers and Authentication

For cloud deployments or secured Phoenix instances, add authentication headers:
from phoenix.otel import register

tracer_provider = register(
    endpoint="https://app.phoenix.arize.com/v1/traces",
    headers={"api_key": "your-api-key"},
    project_name="my-app"
)

Batching and Performance

Phoenix uses OpenTelemetry’s batching for efficient trace export:
from phoenix.otel import register

tracer_provider = register(
    project_name="my-app",
    # Batch settings
    batch_size=512,  # Max spans per batch
    schedule_delay=5000,  # Max delay in ms
)

Resources

  1. OpenInference Spec: semantic conventions for LLM traces
  2. OpenTelemetry Docs: official OpenTelemetry documentation
  3. Manual Instrumentation Guide: detailed guide to manual instrumentation
  4. Phoenix OTEL Package: Python package documentation
