Overview
The Pi AI toolkit includes a comprehensive model registry with automatic model discovery for all supported providers. Models are strongly typed and include metadata about capabilities, pricing, and API configuration.

getModel()
Retrieve a specific model from the registry.

The first argument is the provider name, fully typed with autocomplete support. Supported providers:

'openai', 'anthropic', 'google', 'google-vertex', 'google-gemini-cli', 'amazon-bedrock', 'azure-openai-responses', 'openai-codex', 'github-copilot', 'xai', 'groq', 'cerebras', 'mistral', 'openrouter', 'vercel-ai-gateway', 'zai', 'minimax', 'minimax-cn', 'huggingface', 'kimi-coding', and more.

The second argument is the model identifier. Autocomplete suggests valid models for the provider. Examples:

- OpenAI: 'gpt-4o', 'gpt-4o-mini', 'o3-mini'
- Anthropic: 'claude-sonnet-4-20250514', 'claude-3-5-haiku-20241022'
- Google: 'gemini-2.5-flash', 'gemini-2.0-flash-exp'
Returns a fully typed model object containing metadata and configuration.
Example
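A minimal self-contained sketch of how such a typed registry lookup can work. The registry contents and the name field are illustrative assumptions, not the toolkit's actual data:

```typescript
// Hypothetical miniature registry; the real toolkit populates this
// via automatic model discovery.
interface ModelInfo {
  id: string;
  provider: string;
  name: string; // human-readable name (field name is an assumption)
}

const registry: Record<string, Record<string, ModelInfo>> = {
  openai: {
    "gpt-4o": { id: "gpt-4o", provider: "openai", name: "GPT-4o" },
  },
  anthropic: {
    "claude-3-5-haiku-20241022": {
      id: "claude-3-5-haiku-20241022",
      provider: "anthropic",
      name: "Claude 3.5 Haiku",
    },
  },
};

// Look up a model by provider and id, throwing on unknown combinations.
function getModel(provider: string, modelId: string): ModelInfo {
  const model = registry[provider]?.[modelId];
  if (!model) throw new Error(`Unknown model ${provider}/${modelId}`);
  return model;
}

const model = getModel("openai", "gpt-4o");
console.log(model.name);
```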
getProviders()
Get all available providers.

Returns an array of all registered provider names.
Example
getModels()
Get all models from a specific provider. Takes the provider name as its argument.

Returns an array of all models from that provider.
Example
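A sketch of both enumeration helpers over the same kind of provider-keyed registry (registry contents are illustrative assumptions):

```typescript
interface ModelInfo {
  id: string;
  provider: string;
}

// Hypothetical registry keyed by provider, then by model id.
const registry: Record<string, Record<string, ModelInfo>> = {
  openai: {
    "gpt-4o": { id: "gpt-4o", provider: "openai" },
    "gpt-4o-mini": { id: "gpt-4o-mini", provider: "openai" },
  },
  groq: {
    "llama-3.3-70b-versatile": {
      id: "llama-3.3-70b-versatile",
      provider: "groq",
    },
  },
};

// All registered provider names.
function getProviders(): string[] {
  return Object.keys(registry);
}

// All models for one provider (empty array for unknown providers).
function getModels(provider: string): ModelInfo[] {
  return Object.values(registry[provider] ?? {});
}

console.log(getProviders());
console.log(getModels("openai").map((m) => m.id));
```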
Model Type
The Model interface contains all metadata for a model.
- Model identifier used with the provider's API.
- Human-readable model name.
- The API protocol this model uses. Examples: "openai-completions", "anthropic-messages", "google-generative-ai".
- Provider name (e.g., "openai", "anthropic").
- Base URL for API requests.
- Whether the model supports thinking/reasoning capabilities.
- Supported input types. Check model.input.includes('image') for vision support.
- Pricing in dollars per million tokens:
  - input: Input token cost
  - output: Output token cost
  - cacheRead: Cache read cost (for prompt caching)
  - cacheWrite: Cache write cost (for prompt caching)
- Maximum context length in tokens.
- Maximum output tokens per request.
- Optional custom headers for all requests to this model.
- Compatibility settings for OpenAI-compatible APIs. See Custom Models below.
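Pulling the fields above together, the shape might look like the following sketch. Every field name except id, provider, input, and the cost sub-keys is an assumed placeholder, and the sample values are illustrative:

```typescript
// Sketch of the Model shape described above; names other than
// id, provider, input, and the cost keys are assumptions.
interface ModelCost {
  input: number;      // $ per million input tokens
  output: number;     // $ per million output tokens
  cacheRead: number;  // $ per million cache-read tokens
  cacheWrite: number; // $ per million cache-write tokens
}

interface Model {
  id: string;                       // identifier used with the provider's API
  name: string;                     // human-readable name
  api: string;                      // e.g. "openai-completions"
  provider: string;                 // e.g. "openai", "anthropic"
  baseUrl: string;                  // base URL for API requests
  reasoning: boolean;               // supports thinking/reasoning
  input: ("text" | "image")[];      // supported input types
  cost: ModelCost;                  // pricing in $ per million tokens
  contextWindow: number;            // max context length in tokens
  maxTokens: number;                // max output tokens per request
  headers?: Record<string, string>; // optional custom request headers
  compat?: Record<string, unknown>; // OpenAI-compatibility settings
}

const example: Model = {
  id: "gpt-4o",
  name: "GPT-4o",
  api: "openai-completions",
  provider: "openai",
  baseUrl: "https://api.openai.com/v1",
  reasoning: false,
  input: ["text", "image"],
  cost: { input: 2.5, output: 10, cacheRead: 1.25, cacheWrite: 0 },
  contextWindow: 128000,
  maxTokens: 16384,
};

console.log(example.input.includes("image")); // true
```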
Custom Models
Create custom model configurations for local servers or custom endpoints:

Custom Headers Example
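A sketch of a custom model definition for a local OpenAI-compatible server, including custom headers. All field names and values here are illustrative assumptions:

```typescript
// Hypothetical custom model pointing at a local OpenAI-compatible server
// (e.g. a llama.cpp or Ollama-style endpoint).
const localModel = {
  id: "llama-3.1-8b-instruct",
  name: "Llama 3.1 8B (local)",
  api: "openai-completions",
  provider: "local",
  baseUrl: "http://localhost:8080/v1",
  reasoning: false,
  input: ["text"],
  cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
  contextWindow: 131072,
  maxTokens: 4096,
  // Custom headers sent with every request to this model.
  headers: { "X-Proxy-Auth": "example-token" },
};

console.log(localModel.baseUrl);
```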
OpenAI Compatibility Settings
For OpenAI-compatible APIs with non-standard behavior, use the compat field:
- Whether the provider supports the store field.
- Whether the provider supports the developer role (vs system).
- Whether the provider supports the reasoning_effort parameter.
- Whether streaming responses include token usage via stream_options.
- Whether the provider supports strict in tool definitions.
- Which field name to use for max tokens.
- Whether tool results require the name field.
- Whether a user message after tool results requires an assistant message in between.
- Whether thinking blocks must be converted to text with <thinking> delimiters.
- Whether tool call IDs must be normalized to Mistral format (exactly 9 alphanumeric chars).
- Format for the reasoning parameter:
  - "openai": uses reasoning_effort
  - "zai": uses thinking: { type: "enabled" }
  - "qwen": uses enable_thinking: boolean
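The three reasoning formats imply different request-body shapes. A sketch of that mapping (the function name is a hypothetical helper; the payload shapes follow the list above):

```typescript
// Map a compat reasoning format to the request-body fields it implies.
type ReasoningFormat = "openai" | "zai" | "qwen";

function reasoningPayload(
  format: ReasoningFormat,
  effort: string,
): Record<string, unknown> {
  switch (format) {
    case "openai": // uses reasoning_effort
      return { reasoning_effort: effort };
    case "zai": // uses thinking: { type: "enabled" }
      return { thinking: { type: "enabled" } };
    case "qwen": // uses enable_thinking: boolean
      return { enable_thinking: true };
  }
}

console.log(reasoningPayload("openai", "high"));
```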
Provider APIs
Each model uses a specific API protocol. Common APIs:

- OpenAI Chat Completions API. Used by OpenAI, Mistral, xAI, Groq, Cerebras, and OpenAI-compatible providers.
- OpenAI Responses API. Used by OpenAI's latest models, including GPT-5.x and o3.
- Anthropic Messages API. Used by Claude models.
- Google Generative AI API. Used by Gemini models via Google AI Studio.
- Google Vertex AI API. Used by Gemini models via Google Cloud Vertex AI.
- Amazon Bedrock Converse Stream API. Used by models on AWS Bedrock.
- Azure OpenAI Responses API. Used by models on Azure OpenAI.
Utility Functions
calculateCost()
Calculate costs for a given usage.

Example
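Since pricing is expressed in dollars per million tokens, the calculation itself is simple arithmetic. A self-contained sketch (the Usage field names mirror the cost keys and are an assumption):

```typescript
interface Cost {
  input: number;      // $ per million tokens
  output: number;
  cacheRead: number;
  cacheWrite: number;
}

interface Usage {
  input: number;      // token counts
  output: number;
  cacheRead: number;
  cacheWrite: number;
}

// Total cost in dollars: tokens x price-per-million / 1e6, summed per category.
function calculateCost(cost: Cost, usage: Usage): number {
  return (
    (usage.input * cost.input +
      usage.output * cost.output +
      usage.cacheRead * cost.cacheRead +
      usage.cacheWrite * cost.cacheWrite) /
    1_000_000
  );
}

// 10k input + 2k output tokens at $2.50/$10.00 per million:
const dollars = calculateCost(
  { input: 2.5, output: 10, cacheRead: 1.25, cacheWrite: 0 },
  { input: 10_000, output: 2_000, cacheRead: 0, cacheWrite: 0 },
);
console.log(dollars); // 0.045
```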
supportsXhigh()
Check if a model supports the xhigh thinking level. Currently this applies to:
- GPT-5.2 and GPT-5.3 model families
- Anthropic Opus 4.6 models (xhigh maps to adaptive effort “max”)
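Given the families above, a minimal sketch of such a check. The id-matching rules here are assumptions inferred from the family names, not the toolkit's actual logic:

```typescript
// Hypothetical check: does this model id belong to a family with
// xhigh support? Prefix/substring matching is an illustrative guess.
function supportsXhigh(modelId: string): boolean {
  return (
    modelId.startsWith("gpt-5.2") ||
    modelId.startsWith("gpt-5.3") ||
    modelId.includes("opus-4-6")
  );
}

console.log(supportsXhigh("gpt-5.2"));
```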
modelsAreEqual()
Compare two models for equality. Equality is based on the id and provider fields. Returns false if either model is null/undefined.
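The described semantics can be sketched directly (the ModelRef type is a minimal stand-in for the full Model):

```typescript
interface ModelRef {
  id: string;
  provider: string;
}

// Two models are equal iff both are non-null and share id and provider.
function modelsAreEqual(
  a: ModelRef | null | undefined,
  b: ModelRef | null | undefined,
): boolean {
  if (!a || !b) return false;
  return a.id === b.id && a.provider === b.provider;
}

console.log(
  modelsAreEqual(
    { id: "gpt-4o", provider: "openai" },
    { id: "gpt-4o", provider: "openai" },
  ),
); // true
```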