This quickstart will get you from zero to tracing your first LLM application in under 5 minutes.

What You’ll Build

You’ll:
  1. Install Phoenix
  2. Launch the Phoenix server
  3. Instrument a simple OpenAI application
  4. View traces in the Phoenix UI

1. Install Phoenix

Install Phoenix using pip:
pip install arize-phoenix
This installs the complete Phoenix platform including the server, tracing capabilities, and evaluation tools.
Phoenix requires Python 3.10 or higher. See the Installation page for other installation methods.

2. Launch the Phoenix Server

Start the Phoenix server with a single command:
python -m phoenix.server.main serve
You’ll see output indicating the server has started:
INFO:     Started server process
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:6006 (Press CTRL+C to quit)
Open your browser and navigate to http://localhost:6006 to see the Phoenix UI.
Keep this terminal window open. Phoenix needs to keep running to collect traces.

3. Install OpenAI and Instrumentation

In a new terminal, install the OpenAI SDK and Phoenix’s OpenAI instrumentation:
pip install openai openinference-instrumentation-openai

4. Trace Your First Application

Create a file called app.py with the following code:
import os
from openai import OpenAI
from openinference.instrumentation.openai import OpenAIInstrumentor
from phoenix.otel import register

# Configure Phoenix tracing
tracer_provider = register(
    project_name="my-first-app",
    endpoint="http://localhost:6006/v1/traces"
)

# Instrument OpenAI
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

# Set your OpenAI API key
os.environ["OPENAI_API_KEY"] = "your-api-key-here"

# Create OpenAI client
client = OpenAI()

# Make a simple LLM call
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is Phoenix AI observability?"}
    ],
    temperature=0.7,
)

print(response.choices[0].message.content)
Replace "your-api-key-here" with your actual OpenAI API key. Better yet, export OPENAI_API_KEY in your shell and delete the os.environ line from the script, so the key never ends up in source control.

5. Run Your Application

Execute your application:
python app.py
You should see the LLM’s response printed to your console.

6. View Traces in Phoenix

Return to the Phoenix UI at http://localhost:6006. You should now see:
  • Your project “my-first-app” in the projects list
  • A trace showing the complete LLM interaction
  • Detailed information including:
    • Input messages
    • Model response
    • Token usage
    • Latency
    • Model parameters (temperature, etc.)
Click on the trace to explore the full details of your LLM call.

What’s Next?

Congratulations! You’ve successfully traced your first LLM application with Phoenix. Here’s what to explore next:

Run Evaluations

Learn how to evaluate your LLM outputs for quality, hallucinations, and relevance.

Explore Integrations

Instrument LangChain, LlamaIndex, or other frameworks you’re using.

Create Datasets

Build datasets from your traces for experimentation and evaluation.

Deploy to Production

Learn how to deploy Phoenix for production use.

Tracing Multiple Applications

You can trace multiple applications by using different project names:
tracer_provider = register(
    project_name="chatbot-v1",  # or "rag-pipeline", "agent-system", etc.
    endpoint="http://localhost:6006/v1/traces"
)
Each project appears separately in the Phoenix UI, making it easy to organize and compare different applications.
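If you trace several apps against the same local server, a tiny helper can keep the register() arguments consistent (a sketch; `tracing_config` is a hypothetical name of our own, and the endpoint format matches the local-server examples above):

```python
def tracing_config(project_name: str, host: str = "localhost", port: int = 6006) -> dict:
    """Build the keyword arguments passed to phoenix.otel.register() for one app."""
    return {
        "project_name": project_name,
        "endpoint": f"http://{host}:{port}/v1/traces",
    }


# Each application then registers under its own project, e.g.:
# tracer_provider = register(**tracing_config("chatbot-v1"))
# tracer_provider = register(**tracing_config("rag-pipeline"))
print(tracing_config("chatbot-v1"))
```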

Using Phoenix Cloud

If you prefer not to self-host, you can use Phoenix Cloud instead:
  1. Sign up for a free account at app.phoenix.arize.com
  2. Get your API key from the settings page
  3. Update your code to point to Phoenix Cloud:
import os

# Replace with the API key from your Phoenix Cloud settings page
os.environ["PHOENIX_CLIENT_HEADERS"] = "api_key=your-phoenix-api-key"
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "https://app.phoenix.arize.com"

tracer_provider = register(project_name="my-first-app")
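Both environment variables must be set before register() runs. Wrapping them in a small function makes that ordering explicit (a sketch; `configure_phoenix_cloud` is our own helper name, while the variable names and header format come from the snippet above):

```python
import os


def configure_phoenix_cloud(api_key: str, endpoint: str = "https://app.phoenix.arize.com") -> None:
    """Set the environment variables Phoenix reads; call before phoenix.otel.register()."""
    if not api_key:
        raise ValueError("A Phoenix Cloud API key is required")
    os.environ["PHOENIX_CLIENT_HEADERS"] = f"api_key={api_key}"
    os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = endpoint


configure_phoenix_cloud("your-phoenix-api-key")
# ...then: tracer_provider = register(project_name="my-first-app")
```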

Troubleshooting

If traces aren’t appearing, check that:
  • The Phoenix server is running (visit http://localhost:6006)
  • Your application is using the correct endpoint (http://localhost:6006/v1/traces)
  • The instrumentation is configured before any LLM calls are made
  • No firewall rules are blocking localhost:6006
If imports fail, make sure you’ve installed the instrumentation package:
pip install openinference-instrumentation-openai
If the connection is refused, ensure the Phoenix server is running; you should see it listening on port 6006:
python -m phoenix.server.main serve
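The server-reachability checks can be automated with a quick standard-library probe (a sketch; `port_open` is our own helper, and 6006 is the default port from the examples above):

```python
import socket


def port_open(host: str = "localhost", port: int = 6006, timeout: float = 1.0) -> bool:
    """Return True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


print("Phoenix reachable" if port_open() else "nothing is listening on localhost:6006")
```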
Need help? Join our Slack community and ask in the #phoenix-support channel.
