The Phoenix TypeScript SDK provides comprehensive tools for AI observability, tracing, and evaluation in TypeScript and JavaScript applications.

Packages

The Phoenix TypeScript SDK consists of three main packages:

@arizeai/phoenix-client

The core client library for interacting with Phoenix programmatically. Use this to:
  • Manage projects, datasets, and experiments
  • Query and annotate traces and spans
  • Create and version prompts
  • Run evaluations and experiments
Learn more about the Client →

@arizeai/phoenix-otel

OpenTelemetry integration for automatic tracing of LLM applications. Use this to:
  • Enable OpenTelemetry-based instrumentation
  • Configure trace export to Phoenix
  • Set up batching and performance optimization
  • Register custom instrumentations
Learn more about OTEL →
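Setting batch: true matters in production because it buffers spans and exports them in groups rather than making one network call per span. In the real SDK this is handled by OpenTelemetry's batch span processor; the class below is only an illustrative sketch of the buffering idea, with hypothetical names.

```typescript
// Illustrative sketch only: actual batching is done by OpenTelemetry's
// BatchSpanProcessor. The Span shape and class name here are hypothetical.
type Span = { name: string };

class SpanBatcher {
  private buffer: Span[] = [];
  public exports: Span[][] = [];

  constructor(private maxBatchSize: number) {}

  // Buffer each span; export a whole batch once the buffer fills.
  add(span: Span): void {
    this.buffer.push(span);
    if (this.buffer.length >= this.maxBatchSize) this.flush();
  }

  // Export whatever is buffered (called on size threshold or shutdown).
  flush(): void {
    if (this.buffer.length === 0) return;
    this.exports.push(this.buffer);
    this.buffer = [];
  }
}

const batcher = new SpanBatcher(3);
for (let i = 0; i < 7; i++) batcher.add({ name: `span-${i}` });
batcher.flush(); // drain the remainder on shutdown
console.log(batcher.exports.length); // 3 export calls for 7 spans
```

The trade-off is the usual one for batching: fewer export calls and lower overhead per span, at the cost of a short delay before spans appear in Phoenix.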

@arizeai/phoenix-evals

Evaluation framework for assessing LLM outputs. Use this to:
  • Run LLM-based evaluations (hallucination, relevance, correctness, etc.)
  • Create custom evaluators
  • Integrate evaluations into your workflow
  • Use built-in evaluation templates
Learn more about Evals →
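At its simplest, a custom evaluator is just an async function that takes the data under test and returns a score. The sketch below is hand-rolled and does not use the package's factory APIs; the EvaluationResult shape is an assumption, shown with a deterministic exact-match check so no LLM call is needed.

```typescript
// Hand-rolled evaluator sketch; the result shape
// ({ score, label, explanation }) is an assumption, not the
// package's exact type.
interface EvaluationResult {
  score: number;
  label: 'match' | 'no_match';
  explanation: string;
}

// A deterministic exact-match evaluator -- no LLM call required.
async function exactMatchEvaluator(args: {
  output: string;
  expected: string;
}): Promise<EvaluationResult> {
  const matched = args.output.trim() === args.expected.trim();
  return {
    score: matched ? 1 : 0,
    label: matched ? 'match' : 'no_match',
    explanation: matched
      ? 'Output matches the expected answer exactly.'
      : 'Output differs from the expected answer.',
  };
}

exactMatchEvaluator({ output: ' 42 ', expected: '42' }).then((result) => {
  console.log(result.label); // 'match'
});
```

LLM-based evaluators like the built-in hallucination template follow the same call-and-score shape, but delegate the judgment to a model instead of a string comparison.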

Installation

Install the packages you need using npm, yarn, or pnpm:
npm install @arizeai/phoenix-client
npm install @arizeai/phoenix-otel
npm install @arizeai/phoenix-evals

Quick Start

Here’s how the packages work together:
// 1. Set up tracing with OTEL
import { register } from '@arizeai/phoenix-otel';

const provider = register({
  projectName: 'my-llm-app',
  batch: true
});

// 2. Your LLM application runs and generates traces automatically
import OpenAI from 'openai';

const client = new OpenAI();
const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }]
});

// 3. Use the client to query traces
import { createClient } from '@arizeai/phoenix-client';

const phoenixClient = createClient();
const projects = await phoenixClient.GET('/v1/projects');

// 4. Run evaluations on your data
import { createHallucinationEvaluator } from '@arizeai/phoenix-evals';

const evaluator = createHallucinationEvaluator({
  model: 'gpt-4o'
});

const result = await evaluator({
  output: response.choices[0].message.content,
  context: 'System context'
});

Common Workflows

Development Workflow

  1. Instrument your app with register() to capture traces
  2. Run your application and generate traces automatically
  3. Review traces in the Phoenix UI
  4. Run evaluations to assess quality
  5. Iterate on prompts and configuration

Production Workflow

  1. Configure OTEL with batching for performance
  2. Set up continuous evaluation using the evals package
  3. Monitor metrics via the Phoenix UI
  4. Use the client for programmatic access to data
  5. Create datasets from production traces for testing
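Step 5 above amounts to reshaping exported spans into dataset examples. The helper below is a hypothetical sketch of that transformation, assuming spans carry input/output attributes; the actual span schema and the client's dataset API may differ.

```typescript
// Hypothetical shapes: the span attribute keys and example schema
// are assumptions, not the exact Phoenix formats.
interface SpanRecord {
  name: string;
  attributes: { 'input.value'?: string; 'output.value'?: string };
}

interface DatasetExample {
  input: string;
  output: string;
  metadata: { sourceSpan: string };
}

// Keep only spans that captured both an input and an output, and
// reshape them into examples suitable for offline testing.
function spansToExamples(spans: SpanRecord[]): DatasetExample[] {
  return spans
    .filter((s) => s.attributes['input.value'] && s.attributes['output.value'])
    .map((s) => ({
      input: s.attributes['input.value']!,
      output: s.attributes['output.value']!,
      metadata: { sourceSpan: s.name },
    }));
}

const examples = spansToExamples([
  { name: 'chat-1', attributes: { 'input.value': 'Hello!', 'output.value': 'Hi there!' } },
  { name: 'retrieval-1', attributes: {} }, // dropped: no input/output captured
]);
console.log(examples.length); // 1
```

The resulting examples can then be uploaded through the client and replayed in experiments, closing the loop between production traffic and offline testing.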

Environment Variables

Configure the SDK using environment variables:
  • PHOENIX_COLLECTOR_URL (string): Phoenix server URL (default: http://localhost:6006)
  • PHOENIX_API_KEY (string): API key for authentication with Phoenix Cloud
  • PHOENIX_BASE_URL (string): Base URL for the Phoenix API client (default: http://localhost:6006)
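A client typically resolves its target URL from these variables, falling back to the local default when none is set. The helper below sketches that lookup; the precedence order (PHOENIX_BASE_URL over PHOENIX_COLLECTOR_URL) is an assumption, not the SDK's documented behavior.

```typescript
// Sketch of environment-variable resolution; the precedence shown
// here is an assumption, not the SDK's exact logic.
function resolveBaseUrl(env: Record<string, string | undefined>): string {
  return (
    env.PHOENIX_BASE_URL ??
    env.PHOENIX_COLLECTOR_URL ??
    'http://localhost:6006'
  );
}

console.log(resolveBaseUrl({})); // 'http://localhost:6006'
console.log(resolveBaseUrl({ PHOENIX_COLLECTOR_URL: 'https://app.phoenix.arize.com' }));
```

In a Node.js application you would pass process.env; environment-based configuration keeps connection details out of source code.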

TypeScript Support

All packages are written in TypeScript and provide full type definitions:
import type { PhoenixClient } from '@arizeai/phoenix-client';
import type { RegisterParams } from '@arizeai/phoenix-otel';
import type { Evaluator } from '@arizeai/phoenix-evals';

// Fully typed client
const client = createClient();
const response = await client.GET('/v1/projects');
// ^? response is fully typed based on OpenAPI spec

// Type-safe evaluator configuration
const evaluator: Evaluator = createHallucinationEvaluator({
  model: 'gpt-4o',
  temperature: 0.0
});

Framework Compatibility

The Phoenix TypeScript SDK works with:
  • Node.js (v18+)
  • Next.js
  • Express
  • Vercel AI SDK
  • LangChain.js
  • Any TypeScript/JavaScript environment

Next Steps

  • TypeScript Client: Interact with Phoenix programmatically
  • TypeScript OTEL: Set up OpenTelemetry tracing
  • TypeScript Evals: Evaluate LLM outputs
