
@agentlib/openai

OpenAI model provider for AgentLIB — supports GPT-4o, GPT-4, o1, o3-mini, and any OpenAI-compatible API.

Installation

npm install @agentlib/openai

Overview

The @agentlib/openai package provides a model provider that integrates OpenAI’s Chat Completions API with AgentLIB agents. It handles:
  • Message format conversion between AgentLIB and OpenAI formats
  • Tool calling (function calling) support
  • Token usage tracking
  • Streaming support
  • Model-specific handling (e.g., o1/o3 models)
  • Custom base URLs for OpenAI-compatible APIs

Quick Start

import { createAgent } from '@agentlib/core'
import { openai } from '@agentlib/openai'

const agent = createAgent({
  name: 'assistant',
  model: openai({
    apiKey: process.env.OPENAI_API_KEY!,
    model: 'gpt-4o'
  })
})

const result = await agent.run('What is the capital of France?')
console.log(result.output)

Configuration

OpenAIProviderConfig

interface OpenAIProviderConfig {
  /** OpenAI API key (required) */
  apiKey: string
  
  /** Model to use (default: 'gpt-4o') */
  model?: string
  
  /** Custom base URL for OpenAI-compatible APIs */
  baseURL?: string
  
  /** OpenAI organization ID */
  organization?: string
  
  /** Temperature for sampling (default: 0.7) */
  temperature?: number
  
  /** Maximum tokens in response (default: 128000) */
  maxTokens?: number
}

Usage Examples

Basic Configuration

import { openai } from '@agentlib/openai'

const model = openai({
  apiKey: process.env.OPENAI_API_KEY!,
  model: 'gpt-4o',
  temperature: 0.7
})

agent.model(model)

GPT-4o Mini (Cost-Effective)

const model = openai({
  apiKey: process.env.OPENAI_API_KEY!,
  model: 'gpt-4o-mini',
  temperature: 0.5
})

OpenAI o1 Models

// o1 models have special handling for system messages
const model = openai({
  apiKey: process.env.OPENAI_API_KEY!,
  model: 'o1-preview'
})
Note: o1 and o3 models automatically convert system messages to user messages since they don’t support the system role.
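
The conversion is roughly equivalent to the sketch below. The function name `convertForO1` and the `ChatMessage` shape are illustrative only, not part of the package’s exported API:

```typescript
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string }

// Hypothetical sketch: remap 'system' messages to 'user' messages,
// since o1/o3 models reject the 'system' role.
function convertForO1(messages: ChatMessage[]): ChatMessage[] {
  return messages.map(m =>
    m.role === 'system' ? { ...m, role: 'user' as const } : m
  )
}
```

Message order and content are preserved; only the role changes.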

Custom Base URL (OpenAI-Compatible APIs)

// Use with any OpenAI-compatible server (e.g. LocalAI, vLLM)
const model = openai({
  apiKey: 'your-key',
  baseURL: 'https://your-custom-endpoint.com/v1',
  model: 'gpt-4'
})

With Organization

const model = openai({
  apiKey: process.env.OPENAI_API_KEY!,
  organization: 'org-xxxxxxxxxxxxx',
  model: 'gpt-4o'
})

Token Limits

const model = openai({
  apiKey: process.env.OPENAI_API_KEY!,
  model: 'gpt-4o',
  maxTokens: 4096 // Limit response length
})

Tool Calling Support

The provider automatically handles tool calling (function calling):
import { createAgent, defineTool } from '@agentlib/core'
import { openai } from '@agentlib/openai'

const weatherTool = defineTool({
  schema: {
    name: 'get_weather',
    description: 'Get current weather',
    parameters: {
      type: 'object',
      properties: {
        location: { type: 'string' }
      },
      required: ['location']
    }
  },
  execute: async ({ location }) => {
    // Placeholder result; swap in a real weather lookup here
    return { temp: 72, condition: 'sunny' }
  }
})

const agent = createAgent({
  name: 'assistant',
  model: openai({ apiKey: process.env.OPENAI_API_KEY! }),
  tools: [weatherTool]
})

await agent.run('What\'s the weather in Paris?')

Streaming Support

The provider supports streaming responses:
const model = openai({
  apiKey: process.env.OPENAI_API_KEY!,
  model: 'gpt-4o'
})

// Consume chunks as they arrive; `request` is a prepared model request
for await (const chunk of model.stream(request)) {
  process.stdout.write(chunk.delta)
  if (chunk.done) break
}
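
Given the chunk shape shown above (a `delta` string plus a `done` flag), collecting a full streamed response into a single string can be sketched as follows. The `StreamChunk` interface here is inferred from the snippet, not an exported type:

```typescript
interface StreamChunk { delta: string; done: boolean }

// Sketch: accumulate streamed deltas into the complete response text.
async function collectStream(stream: AsyncIterable<StreamChunk>): Promise<string> {
  let text = ''
  for await (const chunk of stream) {
    text += chunk.delta
    if (chunk.done) break
  }
  return text
}
```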

Token Usage Tracking

Token usage is automatically tracked and returned:
const result = await agent.run('Hello, world!')

console.log(result.state.usage)
// {
//   promptTokens: 12,
//   completionTokens: 8,
//   totalTokens: 20
// }
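
These counters make simple cost accounting straightforward. The sketch below converts usage into an estimated dollar cost; the per-million-token rates are placeholder defaults, not real OpenAI pricing, so always pass the current rates from the official price list:

```typescript
interface Usage { promptTokens: number; completionTokens: number; totalTokens: number }

// Illustrative cost estimate: input and output tokens are billed at
// different per-million-token rates (the defaults here are placeholders).
function estimateCostUSD(usage: Usage, inputPerM = 2.5, outputPerM = 10): number {
  return (usage.promptTokens / 1e6) * inputPerM +
         (usage.completionTokens / 1e6) * outputPerM
}
```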

Exports

Classes

  • OpenAIProvider - The main provider class implementing ModelProvider

Functions

  • openai(config) - Factory function to create an OpenAI provider

Types

  • OpenAIProviderConfig - Configuration interface

Error Handling

try {
  const result = await agent.run('Hello')
} catch (error) {
  if (error instanceof Error) {
    console.error('Agent error:', error.message)
  }
}
Common errors:
  • Invalid API key
  • Rate limiting
  • Maximum token length exceeded
  • Invalid tool call JSON
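
Rate-limit errors are usually transient and worth retrying with exponential backoff. A generic sketch is shown below; detecting retryable errors by matching "rate limit" in the message is a heuristic for illustration, not part of the @agentlib API:

```typescript
// Sketch: retry an async call with exponential backoff on rate-limit errors.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 1000
): Promise<T> {
  let lastError: unknown
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn()
    } catch (err) {
      lastError = err
      // Heuristic: treat errors mentioning "rate limit" as retryable
      const retryable = err instanceof Error && /rate limit/i.test(err.message)
      if (!retryable || attempt === maxAttempts - 1) throw err
      await new Promise(r => setTimeout(r, baseDelayMs * 2 ** attempt))
    }
  }
  throw lastError
}
```

Usage: `const result = await withRetry(() => agent.run('Hello'))`.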

Requirements

  • Node.js: >= 18.0.0
  • Dependencies:
    • @agentlib/core (workspace dependency)
    • openai ^4.52.0

Environment Variables

# .env file
OPENAI_API_KEY=sk-...

Load the key at startup:

import 'dotenv/config'
import { openai } from '@agentlib/openai'

const model = openai({
  apiKey: process.env.OPENAI_API_KEY!
})
