## Overview
The `@mariozechner/pi-ai` package provides a unified interface to multiple LLM providers, with automatic model discovery, token counting, cost tracking, and cross-provider context handoffs.
- **Unified API**: Single interface for OpenAI, Anthropic, Google, and 12+ other providers
- **Type Safety**: Full TypeScript support with auto-complete for providers and models
- **Tool Calling**: TypeBox schemas with automatic validation and partial JSON streaming
- **Cross-Provider Handoff**: Switch models mid-conversation with automatic context transformation
## Installation
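Install from npm under the package name given above:

```shell
npm install @mariozechner/pi-ai
```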
## Quick Start
## Key Features
### Supported Providers
All providers support tool calling (function calling) for agentic workflows.

API Key Providers:

- OpenAI (GPT-4o, GPT-5, o1, o3)
- Anthropic (Claude Sonnet, Opus, Haiku)
- Google Gemini
- Amazon Bedrock
- Mistral AI
- Groq
- Cerebras
- xAI (Grok)
- OpenRouter
- Vercel AI Gateway
- zAI
- MiniMax
- Kimi For Coding
- Hugging Face
- OpenAI Codex (ChatGPT Plus/Pro, GPT-5.x Codex models)
- GitHub Copilot
- Google Gemini CLI (Cloud Code Assist)
- Google Antigravity (free Gemini, Claude, GPT-OSS)
- Azure OpenAI (Responses API)
- Google Vertex AI (with ADC)
- Any OpenAI-compatible API (Ollama, vLLM, LM Studio, etc.)
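Since the last entry covers any OpenAI-compatible server, here is a minimal sketch of the endpoint shape such servers expose. The base URL assumes a local Ollama instance on its default port, and the model name `llama3.2` is an example that must exist locally; both are assumptions, not part of this package:

```typescript
// Minimal sketch of the OpenAI-compatible Chat Completions endpoint that
// servers like Ollama, vLLM, and LM Studio expose. The base URL and the
// model name "llama3.2" are assumptions for a local Ollama install.
const body = {
  model: "llama3.2",
  messages: [{ role: "user", content: "Say hello in one word." }],
};

async function chat(baseURL = "http://localhost:11434/v1"): Promise<string> {
  const res = await fetch(`${baseURL}/chat/completions`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const json = await res.json();
  // Standard Chat Completions response shape: choices[0].message.content
  return json.choices[0].message.content;
}
```

Because the wire format is identical across these servers, pointing the library at a different base URL is all that distinguishes one OpenAI-compatible backend from another.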
### Tool Calling with TypeBox
Define tools with TypeBox schemas for type-safe validation:
### Streaming with Events

Stream responses with granular event types:

- `start`: stream begins
- `text_start`, `text_delta`, `text_end`: text generation
- `thinking_start`, `thinking_delta`, `thinking_end`: reasoning/thinking
- `toolcall_start`, `toolcall_delta`, `toolcall_end`: tool calls
- `done`: completion
- `error`: error or abort
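The event names above can be modeled as a discriminated union. This sketch accumulates text deltas into a final string; the payload fields such as `delta` are assumptions for illustration, not the library's exact event types:

```typescript
// Discriminated union mirroring the event names listed above.
// Payload fields (delta, name, error) are illustrative assumptions.
type StreamEvent =
  | { type: "start" }
  | { type: "text_start" } | { type: "text_delta"; delta: string } | { type: "text_end" }
  | { type: "thinking_start" } | { type: "thinking_delta"; delta: string } | { type: "thinking_end" }
  | { type: "toolcall_start"; name: string } | { type: "toolcall_delta"; delta: string } | { type: "toolcall_end" }
  | { type: "done" }
  | { type: "error"; error: unknown };

function render(events: StreamEvent[]): string {
  let text = "";
  for (const ev of events) {
    switch (ev.type) {
      case "text_delta":
        text += ev.delta; // accumulate streamed text as it arrives
        break;
      case "error":
        throw new Error(String(ev.error));
      default:
        break; // other events drive UI state (spinners, tool panels, etc.)
    }
  }
  return text;
}
```

Switching on `type` lets TypeScript narrow the payload automatically, so each `case` sees only the fields that event actually carries.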