Overview
The @mariozechner/pi-agent-core package provides type-safe interfaces for building AI agents. This page documents the core types used throughout the package.
State Types
AgentState
interface AgentState {
  systemPrompt: string;
  model: Model<any>;
  thinkingLevel: ThinkingLevel;
  tools: AgentTool<any>[];
  messages: AgentMessage[];
  isStreaming: boolean;
  streamMessage: AgentMessage | null;
  pendingToolCalls: Set<string>;
  error?: string;
}
Complete agent state containing configuration and conversation data.
systemPrompt: System prompt sent to the LLM at the start of each request.
model: LLM model from @mariozechner/pi-ai (e.g., getModel('openai', 'gpt-4o')).
thinkingLevel: Reasoning level for models that support it: 'off' | 'minimal' | 'low' | 'medium' | 'high' | 'xhigh'. Note: 'xhigh' is only supported by OpenAI gpt-5.1-codex-max, gpt-5.2, gpt-5.2-codex, gpt-5.3, and gpt-5.3-codex models.
tools: Tools available for the agent to execute.
messages: Full conversation history including user, assistant, toolResult, and custom message types.
isStreaming: True when the agent is actively streaming a response.
streamMessage: Partial message being streamed (null when not streaming).
pendingToolCalls: Set of tool call IDs currently being executed.
error: Error message from the last failed operation.
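These fields map naturally onto UI status indicators. As a sketch (using local stand-in types, not the package's own exports), a status line could be derived like this:

```typescript
// Local stand-ins for the documented types (illustration only).
type AgentMessage = { role: string };

interface AgentStateLike {
  isStreaming: boolean;
  pendingToolCalls: Set<string>;
  messages: AgentMessage[];
  error?: string;
}

// Derive a human-readable status line from the state fields described above.
function describeState(state: AgentStateLike): string {
  if (state.error) return `error: ${state.error}`;
  if (state.pendingToolCalls.size > 0) {
    return `running ${state.pendingToolCalls.size} tool call(s)`;
  }
  if (state.isStreaming) return 'streaming';
  return `idle (${state.messages.length} messages)`;
}

describeState({ isStreaming: false, pendingToolCalls: new Set(), messages: [] });
// → 'idle (0 messages)'
```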
ThinkingLevel
type ThinkingLevel = 'off' | 'minimal' | 'low' | 'medium' | 'high' | 'xhigh';
Controls how much reasoning the model does before responding. Higher levels use more tokens but may produce better results for complex tasks.
'off': No explicit reasoning (fastest)
'minimal': Very brief reasoning
'low': Light reasoning for simple tasks
'medium': Balanced reasoning (recommended default)
'high': Deep reasoning for complex tasks
'xhigh': Maximum reasoning (OpenAI gpt-5.x models only)
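Since higher levels cost more tokens, applications often choose a level per request. A hypothetical heuristic (the complexity score and fallback logic are assumptions, not part of the package):

```typescript
type ThinkingLevel = 'off' | 'minimal' | 'low' | 'medium' | 'high' | 'xhigh';

// Hypothetical heuristic: map an estimated task-complexity score (0-10)
// to a ThinkingLevel, falling back from 'xhigh' on unsupported models.
function pickThinkingLevel(complexity: number, supportsXhigh = false): ThinkingLevel {
  if (complexity <= 1) return 'off';
  if (complexity <= 2) return 'minimal';
  if (complexity <= 4) return 'low';
  if (complexity <= 6) return 'medium';
  return supportsXhigh ? 'xhigh' : 'high';
}
```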
Message Types
AgentMessage
type AgentMessage = Message | CustomAgentMessages[keyof CustomAgentMessages];
Union of standard LLM messages (from @mariozechner/pi-ai) and custom application messages. Applications can extend this via declaration merging:
declare module '@mariozechner/pi-agent-core' {
  interface CustomAgentMessages {
    artifact: ArtifactMessage;
    notification: NotificationMessage;
  }
}
CustomAgentMessages
interface CustomAgentMessages {
  // Empty by default - extend via declaration merging
}
Extensible interface for custom message types. Use declaration merging to add app-specific messages:
// your-types.ts
import '@mariozechner/pi-agent-core';
interface ArtifactMessage {
  role: 'artifact';
  content: string;
  artifactType: 'code' | 'diagram' | 'document';
  timestamp: number;
}

declare module '@mariozechner/pi-agent-core' {
  interface CustomAgentMessages {
    artifact: ArtifactMessage;
  }
}
Event Types
AgentEvent
type AgentEvent =
  | { type: 'agent_start' }
  | { type: 'agent_end'; messages: AgentMessage[] }
  | { type: 'turn_start' }
  | { type: 'turn_end'; message: AgentMessage; toolResults: ToolResultMessage[] }
  | { type: 'message_start'; message: AgentMessage }
  | { type: 'message_update'; message: AgentMessage; assistantMessageEvent: AssistantMessageEvent }
  | { type: 'message_end'; message: AgentMessage }
  | { type: 'tool_execution_start'; toolCallId: string; toolName: string; args: any }
  | { type: 'tool_execution_update'; toolCallId: string; toolName: string; args: any; partialResult: any }
  | { type: 'tool_execution_end'; toolCallId: string; toolName: string; result: any; isError: boolean };
Events emitted by the agent during execution. Subscribe via agent.subscribe().
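Because AgentEvent is a discriminated union on type, a subscriber can switch over it and get field-level narrowing. A sketch over a subset of the variants (the subset type is local to this example):

```typescript
// Local subset of AgentEvent for illustration; the real union has more variants.
type AgentEventSubset =
  | { type: 'agent_start' }
  | { type: 'agent_end'; messages: unknown[] }
  | { type: 'tool_execution_start'; toolCallId: string; toolName: string; args: unknown }
  | { type: 'tool_execution_end'; toolCallId: string; toolName: string; result: unknown; isError: boolean };

// Discriminated-union narrowing: each case sees only its own fields.
function describeEvent(event: AgentEventSubset): string {
  switch (event.type) {
    case 'agent_start':
      return 'agent started';
    case 'agent_end':
      return `agent finished (+${event.messages.length} messages)`;
    case 'tool_execution_start':
      return `running ${event.toolName}`;
    case 'tool_execution_end':
      return event.isError ? `${event.toolName} failed` : `${event.toolName} done`;
  }
}
```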
Agent Lifecycle
agent_start
{ type: 'agent_start' }
Emitted when the agent starts processing.
agent_end
{ type: 'agent_end'; messages: AgentMessage[] }
Emitted when the agent completes processing. Contains all new messages added during this run.
Turn Lifecycle
turn_start
{ type: 'turn_start' }
Emitted at the start of each turn (one assistant response + any tool calls/results).
turn_end
{ type: 'turn_end'; message: AgentMessage; toolResults: ToolResultMessage[] }
Emitted when a turn completes. Contains the assistant message and any tool results from this turn.
Message Lifecycle
message_start
{ type: 'message_start'; message: AgentMessage }
Emitted when a new message starts (user, assistant, or tool result).
message_update
{ type: 'message_update'; message: AgentMessage; assistantMessageEvent: AssistantMessageEvent }
Emitted during streaming of assistant messages. Only emitted for assistant messages.
message_end
{ type: 'message_end'; message: AgentMessage }
Emitted when a message is complete and added to the conversation history.
Tool Lifecycle
tool_execution_start
{ type: 'tool_execution_start'; toolCallId: string; toolName: string; args: any }
Emitted when a tool starts executing.
tool_execution_update
{ type: 'tool_execution_update'; toolCallId: string; toolName: string; args: any; partialResult: any }
Emitted when a tool sends a partial result via the onUpdate callback.
tool_execution_end
{ type: 'tool_execution_end'; toolCallId: string; toolName: string; result: any; isError: boolean }
Emitted when a tool completes. isError indicates whether the tool threw an error.
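The start/end pair is enough to mirror AgentState.pendingToolCalls on the subscriber side. A minimal reducer sketch (local event type, not the package's):

```typescript
// Local subset of the tool lifecycle events for illustration.
type ToolLifecycleEvent =
  | { type: 'tool_execution_start'; toolCallId: string }
  | { type: 'tool_execution_end'; toolCallId: string };

// Pure reducer mirroring AgentState.pendingToolCalls: returns a new Set
// so UI frameworks can detect the change by reference.
function applyToolEvent(pending: Set<string>, event: ToolLifecycleEvent): Set<string> {
  const next = new Set(pending);
  if (event.type === 'tool_execution_start') {
    next.add(event.toolCallId);
  } else {
    next.delete(event.toolCallId);
  }
  return next;
}
```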
Configuration Types
AgentContext
interface AgentContext {
  systemPrompt: string;
  messages: AgentMessage[];
  tools?: AgentTool<any>[];
}
Context passed to the agent loop. Similar to Context from @mariozechner/pi-ai but uses AgentTool instead of Tool.
AgentLoopConfig
interface AgentLoopConfig extends SimpleStreamOptions {
  model: Model<any>;
  convertToLlm: (messages: AgentMessage[]) => Message[] | Promise<Message[]>;
  transformContext?: (messages: AgentMessage[], signal?: AbortSignal) => Promise<AgentMessage[]>;
  getApiKey?: (provider: string) => Promise<string | undefined> | string | undefined;
  getSteeringMessages?: () => Promise<AgentMessage[]>;
  getFollowUpMessages?: () => Promise<AgentMessage[]>;
}
Configuration for the agent loop. Extends SimpleStreamOptions from @mariozechner/pi-ai.
model
LLM model to use for generation.
convertToLlm
(messages: AgentMessage[]) => Message[] | Promise<Message[]>
required
Converts AgentMessage[] to LLM-compatible Message[] before each LLM call. Each AgentMessage must be converted to a UserMessage, AssistantMessage, or ToolResultMessage. Messages that cannot be converted (e.g., UI-only notifications) should be filtered out.
convertToLlm: (messages) => messages.flatMap(m => {
  if (m.role === 'custom') {
    return [{ role: 'user', content: m.content, timestamp: m.timestamp }];
  }
  if (m.role === 'notification') {
    return []; // Filter out
  }
  return [m]; // Pass through standard messages
})
transformContext
(messages: AgentMessage[], signal?: AbortSignal) => Promise<AgentMessage[]>
Optional transform applied to the context before convertToLlm. Use for:
- Context window management (pruning old messages)
- Injecting context from external sources
transformContext: async (messages) => {
  if (estimateTokens(messages) > MAX_TOKENS) {
    return pruneOldMessages(messages);
  }
  return messages;
}
getApiKey
(provider: string) => Promise<string | undefined> | string | undefined
Resolves an API key dynamically for each LLM call. Useful for short-lived OAuth tokens (e.g., GitHub Copilot) that may expire during long-running tool execution.
getApiKey: async (provider) => {
  if (provider === 'github') {
    return await refreshGitHubToken();
  }
  return process.env.API_KEY;
}
getSteeringMessages
() => Promise<AgentMessage[]>
Returns steering messages to inject into the conversation mid-run. Called after each tool execution to check for user interruptions. If messages are returned, remaining tool calls are skipped and these messages are added to the context before the next LLM call. Use for “steering” the agent while it’s working.
getFollowUpMessages
() => Promise<AgentMessage[]>
Returns follow-up messages to process after the agent would otherwise stop. Called when the agent has no more tool calls and no steering messages. If messages are returned, they’re added to the context and the agent continues with another turn. Use for follow-up messages that should wait until the agent finishes.
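One way to wire both hooks, assuming user input arrives from a UI while the agent runs, is a pair of queues with drain-on-read semantics. MessageQueue here is a hypothetical helper, not part of the package:

```typescript
// Local stand-in for the message shape used by the hooks (illustration only).
type QueuedMessage = { role: string; content: string; timestamp: number };

// Hypothetical helper (not part of the package): buffers user input typed
// while the agent is running and drains it when the loop asks.
class MessageQueue {
  private steering: QueuedMessage[] = [];
  private followUps: QueuedMessage[] = [];

  pushSteering(msg: QueuedMessage) { this.steering.push(msg); }
  pushFollowUp(msg: QueuedMessage) { this.followUps.push(msg); }

  // splice(0) drains the queue, so each message is delivered exactly once.
  getSteeringMessages = async (): Promise<QueuedMessage[]> => this.steering.splice(0);
  getFollowUpMessages = async (): Promise<QueuedMessage[]> => this.followUps.splice(0);
}
```

The two methods are defined as arrow-function properties so they can be passed directly as the getSteeringMessages / getFollowUpMessages config hooks without rebinding this.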
StreamFn
type StreamFn = (
  ...args: Parameters<typeof streamSimple>
) => ReturnType<typeof streamSimple> | Promise<ReturnType<typeof streamSimple>>;
Custom stream function type. Can be sync or async to support dynamic configuration lookup.
const customStreamFn: StreamFn = async (model, context, options) => {
  const config = await loadConfig();
  return streamSimple(model, context, { ...options, ...config });
};
Transport Types
Transport
type Transport = 'sse' | 'responses';
Preferred transport mechanism for LLM providers:
'sse': Server-Sent Events (default, better for streaming)
'responses': HTTP responses (better for compatibility)
Proxy Types
streamProxy
function streamProxy(
  model: Model<any>,
  context: Context,
  options: ProxyStreamOptions
): ProxyMessageEventStream
Stream function that proxies through a backend server instead of calling LLM providers directly.
import { streamProxy } from '@mariozechner/pi-agent-core';
const agent = new Agent({
  streamFn: async (model, context, options) =>
    streamProxy(model, context, {
      ...options,
      authToken: await getAuthToken(),
      proxyUrl: 'https://api.example.com',
    }),
});
ProxyStreamOptions
interface ProxyStreamOptions extends SimpleStreamOptions {
  authToken: string;
  proxyUrl: string;
}
authToken: Auth token for the proxy server.
proxyUrl: Proxy server URL (e.g., https://genai.example.com).
ProxyAssistantMessageEvent
type ProxyAssistantMessageEvent =
  | { type: 'start' }
  | { type: 'text_start'; contentIndex: number }
  | { type: 'text_delta'; contentIndex: number; delta: string }
  | { type: 'text_end'; contentIndex: number; contentSignature?: string }
  | { type: 'thinking_start'; contentIndex: number }
  | { type: 'thinking_delta'; contentIndex: number; delta: string }
  | { type: 'thinking_end'; contentIndex: number; contentSignature?: string }
  | { type: 'toolcall_start'; contentIndex: number; id: string; toolName: string }
  | { type: 'toolcall_delta'; contentIndex: number; delta: string }
  | { type: 'toolcall_end'; contentIndex: number }
  | { type: 'done'; reason: 'stop' | 'length' | 'toolUse'; usage: AssistantMessage['usage'] }
  | { type: 'error'; reason: 'aborted' | 'error'; errorMessage?: string; usage: AssistantMessage['usage'] };
Events sent by the proxy server. The partial field is stripped to reduce bandwidth; the client reconstructs it.
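As an illustration of that client-side reconstruction, a reducer that reassembles text blocks from the delta events might look like this (TextStreamEvent is a local subset, not the package's type):

```typescript
// Local subset of the proxy text events for illustration.
type TextStreamEvent =
  | { type: 'text_start'; contentIndex: number }
  | { type: 'text_delta'; contentIndex: number; delta: string }
  | { type: 'text_end'; contentIndex: number };

// Reassemble text content blocks from streamed deltas, which is what a
// proxy client must do since the server omits the partial field.
function collectText(events: TextStreamEvent[]): string[] {
  const blocks: string[] = [];
  for (const e of events) {
    if (e.type === 'text_start') blocks[e.contentIndex] = '';
    else if (e.type === 'text_delta') blocks[e.contentIndex] += e.delta;
    // text_end carries no payload to accumulate here.
  }
  return blocks;
}
```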
Loop Functions
agentLoop
function agentLoop(
  prompts: AgentMessage[],
  context: AgentContext,
  config: AgentLoopConfig,
  signal?: AbortSignal,
  streamFn?: StreamFn
): EventStream<AgentEvent, AgentMessage[]>
Start an agent loop with new prompt messages. The prompts are added to the context and events are emitted.
import { agentLoop } from '@mariozechner/pi-agent-core';
const stream = agentLoop(
  [{ role: 'user', content: 'Hello!', timestamp: Date.now() }],
  { systemPrompt: 'You are helpful', messages: [], tools: [] },
  config,
  abortSignal
);

for await (const event of stream) {
  console.log('Event:', event);
}

const allMessages = await stream.result();
agentLoopContinue
function agentLoopContinue(
  context: AgentContext,
  config: AgentLoopConfig,
  signal?: AbortSignal,
  streamFn?: StreamFn
): EventStream<AgentEvent, AgentMessage[]>
Continue an agent loop from the current context without adding a new message. Used for retries.
Important: The last message in context must convert to a user or toolResult message via convertToLlm.
import { agentLoopContinue } from '@mariozechner/pi-agent-core';
const stream = agentLoopContinue(context, config, abortSignal);

for await (const event of stream) {
  console.log('Event:', event);
}
Example: Custom Message Type
// 1. Define your custom message type
interface NotificationMessage {
  role: 'notification';
  level: 'info' | 'warning' | 'error';
  message: string;
  timestamp: number;
}

// 2. Extend CustomAgentMessages via declaration merging
declare module '@mariozechner/pi-agent-core' {
  interface CustomAgentMessages {
    notification: NotificationMessage;
  }
}

// 3. Now AgentMessage includes your type
import { Agent } from '@mariozechner/pi-agent-core';

const agent = new Agent({
  convertToLlm: (messages) => {
    return messages.flatMap(m => {
      // Filter out notifications - they're UI-only
      if (m.role === 'notification') {
        return [];
      }
      // Pass through standard messages
      return [m];
    });
  }
});

// 4. Add notification to conversation
const notification: NotificationMessage = {
  role: 'notification',
  level: 'info',
  message: 'Task completed successfully',
  timestamp: Date.now()
};

agent.appendMessage(notification);