Overview

The openai() function creates a ModelProvider instance configured to use OpenAI’s Chat Completions API. It supports all GPT models, including GPT-4, GPT-4 Turbo, GPT-4o, and the GPT-3.5 models.

Usage

import { Agent } from '@agentlib/core'
import { openai } from '@agentlib/openai'

const agent = new Agent({
  name: 'assistant',
  model: openai({
    apiKey: process.env.OPENAI_API_KEY!,
    model: 'gpt-4o'
  })
})

Configuration

apiKey
string · required
Your OpenAI API key. Get one from platform.openai.com/api-keys.

model
string · default: "gpt-4o"
The OpenAI model to use. Common options:
  • gpt-4o - Latest GPT-4 Optimized model
  • gpt-4o-mini - Smaller, faster GPT-4 variant
  • gpt-4-turbo - GPT-4 Turbo with a 128k context window
  • gpt-3.5-turbo - Faster, more economical option
  • o1 - Reasoning model (note: system prompts are converted to user messages)
  • o3 - Advanced reasoning model

baseURL
string
Custom API endpoint URL. Use this for:
  • Azure OpenAI deployments
  • OpenAI-compatible APIs
  • Proxy services
Example: https://your-resource.openai.azure.com

organization
string
OpenAI organization ID for usage tracking and billing isolation.

temperature
number · default: 0.7
Controls randomness in responses. Range: 0.0 to 2.0.
  • Lower values (e.g., 0.2) = more focused, deterministic output
  • Higher values (e.g., 1.5) = more creative, varied output

maxTokens
number · default: 128000
Maximum number of tokens in the response. Note:
  • Total context = prompt tokens + completion tokens
  • Model limits vary (e.g., gpt-4o has a 128k context window)
  • Setting this too low may truncate responses
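
The options above can be combined in a single call. This sketch uses placeholder values (the organization ID and the exact numbers are illustrative, not recommendations):

const provider = openai({
  apiKey: process.env.OPENAI_API_KEY!,
  model: 'gpt-4o-mini',
  temperature: 0.2,     // more deterministic output
  maxTokens: 4096,      // cap response length
  organization: 'org-xxxxxxxxxxxxx'  // placeholder org ID
})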

Examples

Basic Setup

import { openai } from '@agentlib/openai'

const provider = openai({
  apiKey: process.env.OPENAI_API_KEY!
})

Custom Model and Temperature

const provider = openai({
  apiKey: process.env.OPENAI_API_KEY!,
  model: 'gpt-4o-mini',
  temperature: 0.2  // More deterministic
})

Azure OpenAI

const provider = openai({
  apiKey: process.env.AZURE_OPENAI_KEY!,
  baseURL: 'https://your-resource.openai.azure.com/openai/deployments/your-deployment',
  model: 'gpt-4o'
})

Organization Scoping

const provider = openai({
  apiKey: process.env.OPENAI_API_KEY!,
  organization: 'org-xxxxxxxxxxxxx',
  model: 'gpt-4-turbo'
})

Token Limit Control

const provider = openai({
  apiKey: process.env.OPENAI_API_KEY!,
  model: 'gpt-3.5-turbo',
  maxTokens: 4096  // Limit response length
})

Special Model Handling

Reasoning Models (o1, o3)

When using OpenAI’s reasoning models (o1-* or o3-*), the provider automatically:
  • Converts system role messages to user role (required by these models)
  • Extracts reasoning content from responses when available

const provider = openai({
  apiKey: process.env.OPENAI_API_KEY!,
  model: 'o1'
})

// System prompts are automatically converted to user messages
const agent = new Agent({
  name: 'reasoner',
  model: provider,
  systemPrompt: 'You are a logical reasoning assistant'  // Sent as user message
})

Tool Calling

The provider automatically handles tool/function calling:

import { z } from 'zod'
import { tool } from '@agentlib/core'

const agent = new Agent({
  name: 'assistant',
  model: openai({
    apiKey: process.env.OPENAI_API_KEY!,
    model: 'gpt-4o'
  }),
  tools: [
    tool({
      name: 'get_weather',
      description: 'Get weather for a location',
      schema: z.object({
        location: z.string()
      }),
      execute: async ({ location }) => {
        return { temp: 72, condition: 'sunny' }
      }
    })
  ]
})

// Model can call tools automatically
const result = await agent.run({ input: 'What\'s the weather in NYC?' })

Return Value

Returns an OpenAIProvider instance that implements the ModelProvider interface with:
  • name: 'openai'
  • complete(request) - Send completion request
  • stream(request) - Stream completion response
See ModelProvider Interface for full interface documentation.
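
As a sketch of consuming stream(request) directly: the snippet below assumes stream() returns an async iterable of text chunks, which is an assumption — check the ModelProvider Interface docs for the exact request and chunk types. A stub provider stands in for openai() so the example is self-contained.

```typescript
// Minimal request shape assumed for illustration.
type CompletionRequest = { messages: { role: string; content: string }[] }

// Stub provider mirroring the documented interface (name, stream).
// A real provider would yield tokens from the API as they arrive.
const stubProvider = {
  name: 'openai',
  async *stream(_req: CompletionRequest): AsyncGenerator<string> {
    yield 'Hello, '
    yield 'world!'
  }
}

// Collect an async iterable of text chunks into one string.
async function collect(chunks: AsyncIterable<string>): Promise<string> {
  let text = ''
  for await (const chunk of chunks) text += chunk
  return text
}

collect(stubProvider.stream({ messages: [{ role: 'user', content: 'Say hello' }] }))
  .then(text => console.log(text)) // logs "Hello, world!"
```

With the real provider, swap stubProvider for the value returned by openai({ ... }); the consuming loop stays the same.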