Overview
The ModelProvider interface defines the contract for integrating language models with AgentLIB. Implement this interface to add support for any LLM API or service.
Interface Definition
interface ModelProvider {
  name: string
  complete(request: ModelRequest): Promise<ModelResponse>
  stream?(request: ModelRequest): AsyncIterable<ModelResponseChunk>
}
Properties
name
Unique identifier for the provider (e.g., 'openai', 'anthropic', 'custom')
Methods
complete()
Send a completion request and receive a full response.
request — The completion request containing messages and optional tools
Returns: Promise<ModelResponse> - The model’s complete response
stream()
Stream a completion response in chunks (optional).
request — The completion request containing messages
Returns: AsyncIterable<ModelResponseChunk> - Stream of response deltas
Type Definitions
ModelRequest
Request sent to the model provider:
interface ModelRequest {
  messages: ModelMessage[]  // Conversation history
  tools?: ToolSchema[]      // Available tools for function calling
  stream?: boolean          // Whether to stream the response
}
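For illustration, a request with one tool might look like the following. The types are re-declared locally so the snippet stands alone, and the `get_weather` tool is a hypothetical example, not part of AgentLIB:

```typescript
// Local copies of the documented shapes, so this snippet is self-contained
interface ModelMessage { role: 'system' | 'user' | 'assistant' | 'tool'; content: string }
interface ToolSchema { name: string; description: string; parameters: Record<string, unknown> }
interface ModelRequest { messages: ModelMessage[]; tools?: ToolSchema[]; stream?: boolean }

const request: ModelRequest = {
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What is the weather in Paris?' }
  ],
  tools: [{
    name: 'get_weather', // hypothetical tool name
    description: 'Look up current weather for a city',
    parameters: {
      // parameters is a JSON Schema describing the tool's arguments
      type: 'object',
      properties: { city: { type: 'string' } },
      required: ['city']
    }
  }]
}
```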
ModelResponse
Complete response from the model:
interface ModelResponse {
  message: ModelMessage   // Assistant's response message
  toolCalls?: ToolCall[]  // Tool calls requested by model
  usage?: TokenUsage      // Token consumption stats
  raw?: unknown           // Raw API response (optional)
}
ModelMessage
A single message in the conversation:
interface ModelMessage {
  role: 'system' | 'user' | 'assistant' | 'tool'
  content: string         // Message text content
  reasoning?: string      // Extended reasoning (e.g., o1 models)
  toolCallId?: string     // ID when role is 'tool'
  toolCalls?: ToolCall[]  // Tools called by assistant
}
ToolCall
A function call requested by the model:
interface ToolCall {
  id: string                           // Unique call identifier
  name: string                         // Tool name to execute
  arguments: Record<string, unknown>   // Parsed arguments
}
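To show how `toolCalls` and `toolCallId` relate, here is a sketch of one tool-call round trip. Types are re-declared locally and the tool name and call id are illustrative:

```typescript
interface ToolCall { id: string; name: string; arguments: Record<string, unknown> }
interface ModelMessage {
  role: 'system' | 'user' | 'assistant' | 'tool'
  content: string
  toolCallId?: string
  toolCalls?: ToolCall[]
}

// 1. The assistant requests a tool invocation
const assistantTurn: ModelMessage = {
  role: 'assistant',
  content: '',
  toolCalls: [{ id: 'call_1', name: 'get_weather', arguments: { city: 'Paris' } }]
}

// 2. The tool's result is fed back as a 'tool' message whose
//    toolCallId matches the id of the call it answers
const toolTurn: ModelMessage = {
  role: 'tool',
  toolCallId: 'call_1',
  content: JSON.stringify({ tempC: 18 })
}
```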
ToolSchema
Tool definition passed to the model:
interface ToolSchema {
  name: string                         // Tool identifier
  description: string                  // What the tool does
  parameters: Record<string, unknown>  // JSON Schema for arguments
}
TokenUsage
Token consumption statistics:
interface TokenUsage {
  promptTokens: number      // Tokens in the prompt
  completionTokens: number  // Tokens in the completion
  totalTokens: number       // Total tokens used
}
ModelResponseChunk
Streaming response chunk:
interface ModelResponseChunk {
  delta: string  // Incremental content
  done: boolean  // Whether stream is complete
}
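A consumer accumulates `delta` values until a chunk reports `done`. A minimal sketch, using a stand-in async generator in place of a real `provider.stream()` call:

```typescript
interface ModelResponseChunk { delta: string; done: boolean }

// Stand-in for provider.stream(request) — yields content in pieces
async function* fakeStream(): AsyncIterable<ModelResponseChunk> {
  yield { delta: 'Hel', done: false }
  yield { delta: 'lo', done: false }
  yield { delta: '', done: true }
}

// Accumulate deltas until the provider signals completion
let text = ''
for await (const chunk of fakeStream()) {
  text += chunk.delta
  if (chunk.done) break
}
// text === 'Hello'
```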
Implementation Example
Here’s a complete custom provider implementation:
import type { ModelProvider, ModelRequest, ModelResponse, ModelResponseChunk } from '@agentlib/core'

interface CustomProviderConfig {
  apiKey: string
  model?: string
  baseURL?: string
}
class CustomProvider implements ModelProvider {
  readonly name = 'custom'
  private apiKey: string
  private model: string
  private baseURL: string

  constructor(config: CustomProviderConfig) {
    this.apiKey = config.apiKey
    this.model = config.model ?? 'default-model'
    this.baseURL = config.baseURL ?? 'https://api.example.com'
  }
  async complete(request: ModelRequest): Promise<ModelResponse> {
    // Transform AgentLIB format to your API format
    const apiRequest = {
      model: this.model,
      messages: request.messages.map(msg => ({
        role: msg.role,
        content: msg.content
      })),
      tools: request.tools?.map(tool => ({
        name: tool.name,
        description: tool.description,
        parameters: tool.parameters
      }))
    }

    // Call your API
    const response = await fetch(`${this.baseURL}/chat/completions`, {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${this.apiKey}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(apiRequest)
    })

    if (!response.ok) {
      throw new Error(`API error: ${response.status} ${response.statusText}`)
    }

    const data = await response.json()

    // Transform API response to AgentLIB format
    return {
      message: {
        role: 'assistant',
        content: data.choices[0].message.content,
        toolCalls: data.choices[0].message.tool_calls?.map((tc: any) => ({
          id: tc.id,
          name: tc.function.name,
          arguments: JSON.parse(tc.function.arguments)
        }))
      },
      usage: {
        promptTokens: data.usage.prompt_tokens,
        completionTokens: data.usage.completion_tokens,
        totalTokens: data.usage.total_tokens
      },
      raw: data
    }
  }
  async *stream(request: ModelRequest): AsyncIterable<ModelResponseChunk> {
    // Implement streaming if your API supports it
    const response = await fetch(`${this.baseURL}/chat/completions`, {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${this.apiKey}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        model: this.model,
        messages: request.messages,
        stream: true
      })
    })

    const reader = response.body?.getReader()
    if (!reader) throw new Error('No response body')

    const decoder = new TextDecoder()
    let buffer = ''

    while (true) {
      const { done, value } = await reader.read()
      if (done) {
        yield { delta: '', done: true }
        break
      }

      buffer += decoder.decode(value, { stream: true })
      const lines = buffer.split('\n')
      buffer = lines.pop() ?? ''

      for (const line of lines) {
        if (!line.startsWith('data: ')) continue
        const payload = line.slice(6)
        if (payload === '[DONE]') continue // OpenAI-style end-of-stream sentinel

        const data = JSON.parse(payload)
        const delta = data.choices[0]?.delta?.content ?? ''
        yield {
          delta,
          // finish_reason is null until the final chunk
          done: data.choices[0]?.finish_reason != null
        }
      }
    }
  }
}
// Export factory function
export function customProvider(config: CustomProviderConfig): CustomProvider {
  return new CustomProvider(config)
}
Usage
Use your custom provider with an agent:
import { Agent } from '@agentlib/core'
import { customProvider } from './custom-provider'
const agent = new Agent({
  name: 'assistant',
  model: customProvider({
    apiKey: process.env.CUSTOM_API_KEY!,
    model: 'my-model-v1'
  })
})

const result = await agent.run({
  input: 'Hello, world!'
})
Best Practices
Error Handling
Handle API errors gracefully:
async complete(request: ModelRequest): Promise<ModelResponse> {
  try {
    const response = await this.callAPI(request)
    return this.transformResponse(response)
  } catch (error) {
    if (error instanceof RateLimitError) {
      throw new Error('Rate limit exceeded. Please try again later.')
    }
    if (error instanceof AuthError) {
      throw new Error('Invalid API key')
    }
    // Caught values are typed `unknown`; narrow before reading .message
    const message = error instanceof Error ? error.message : String(error)
    throw new Error(`Provider error: ${message}`)
  }
}
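Transient failures (rate limits, timeouts) are often worth retrying before surfacing an error. A minimal retry sketch with exponential backoff; `withRetry` and its delay schedule are illustrative, not part of AgentLIB:

```typescript
// Retry an async operation a fixed number of times, doubling the
// delay between attempts; rethrow the last error if all attempts fail.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100
): Promise<T> {
  let lastError: unknown
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn()
    } catch (error) {
      lastError = error
      if (i < attempts - 1) {
        // Exponential backoff: baseDelayMs, 2x, 4x, ...
        await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** i))
      }
    }
  }
  throw lastError
}
```

Inside a provider this might wrap the API call, e.g. `await withRetry(() => this.callAPI(request))`.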
Tool Calling
Properly handle tool/function calling:
async complete(request: ModelRequest): Promise<ModelResponse> {
  const tools = request.tools ?? []
  const apiRequest = {
    messages: request.messages,
    ...(tools.length > 0 && {
      tools: tools.map(tool => ({
        type: 'function',
        function: {
          name: tool.name,
          description: tool.description,
          parameters: tool.parameters
        }
      }))
    })
  }

  // Handle response with potential tool calls
  const response = await this.callAPI(apiRequest)
  const toolCalls = response.tool_calls?.map((tc: any) => ({
    id: tc.id,
    name: tc.name,
    arguments: tc.arguments
  }))

  return {
    message: {
      role: 'assistant',
      content: response.content,
      toolCalls
    },
    toolCalls
  }
}
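Many APIs deliver tool arguments as a JSON string, and models occasionally emit malformed JSON, so the parse is worth guarding. A hedged sketch; `parseToolArguments` and its empty-object fallback are a design choice for illustration, not an AgentLIB requirement:

```typescript
// Parse a tool-call arguments string defensively; fall back to an
// empty object rather than crashing the agent loop on bad model output.
function parseToolArguments(raw: string): Record<string, unknown> {
  try {
    const parsed = JSON.parse(raw)
    // Tool arguments must be an object, not a bare string/number/array
    if (parsed !== null && typeof parsed === 'object' && !Array.isArray(parsed)) {
      return parsed as Record<string, unknown>
    }
    return {}
  } catch {
    return {}
  }
}
```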
Token Usage Tracking
Always include usage data when available:
return {
  message: { role: 'assistant', content: data.content },
  usage: data.usage ? {
    promptTokens: data.usage.input_tokens,
    completionTokens: data.usage.output_tokens,
    totalTokens: data.usage.input_tokens + data.usage.output_tokens
  } : undefined
}
Store Raw Responses
Include the raw API response for debugging:
return {
  message: transformedMessage,
  usage: transformedUsage,
  raw: originalResponse // Helpful for debugging
}