Overview

The AgentInstance class is the core runtime that orchestrates agent execution. It manages the model provider, tools, memory, middleware pipeline, and delegates reasoning to a pluggable ReasoningEngine. Instances are typically created via createAgent() rather than directly instantiated.

Class Signature

class AgentInstance<TData = unknown> {
  constructor(config?: AgentConfig<TData>)
  
  // Configuration methods
  provider(model: ModelProvider): this
  tool(definition: ToolDefinition<TData>): this
  use(middleware: Middleware<TData>): this
  memory(provider: MemoryProvider): this
  policy(policy: AgentPolicy): this
  reasoning(engine: ReasoningEngine<TData> | ReasoningStrategy): this
  set(key: string, value: unknown): this
  get(key: string): unknown
  
  // Event handling
  on<K extends keyof AgentEventMap>(
    event: K,
    handler: EventHandler<AgentEventMap[K]>
  ): this
  
  // Execution
  run(options: RunOptions<TData> | string): Promise<RunResult>
  cancel(reason?: string): void
}

Constructor

new AgentInstance(config?: AgentConfig<TData>)
config (AgentConfig<TData>, optional): Initial agent configuration. Defaults to { name: 'agent' } if omitted.

Configuration Methods

All configuration methods return this for fluent chaining.
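The chaining works because every configuration method's return type is this, so each call hands back the same instance. A minimal, self-contained sketch of the pattern (the class and keys here are illustrative, not part of the library):

```typescript
// Minimal fluent-builder sketch: every setter returns `this`,
// which is what allows calls to be chained in any order.
class MiniAgent {
  private settings = new Map<string, unknown>()

  set(key: string, value: unknown): this {
    this.settings.set(key, value)
    return this // returning `this` enables chaining
  }

  get(key: string): unknown {
    return this.settings.get(key)
  }
}

const agent = new MiniAgent()
  .set('name', 'demo')
  .set('retries', 3)

console.log(agent.get('retries')) // 3
```

Because the return type is this (not MiniAgent), the pattern stays type-safe even if the class is subclassed.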

provider()

Set the LLM provider.
provider(model: ModelProvider): this
model (ModelProvider, required): Model provider instance (e.g., from @agentlib/openai, @agentlib/anthropic).
Example:
import { createAgent } from '@agentlib/core'
import { openai } from '@agentlib/openai'

const agent = createAgent({ name: 'assistant' })
  .provider(openai({ apiKey: process.env.OPENAI_API_KEY }))

tool()

Register a tool definition.
tool(definition: ToolDefinition<TData>): this
definition (ToolDefinition<TData>, required): Tool definition with a schema and an execute function.
Example:
import { createAgent, defineTool } from '@agentlib/core'

const searchTool = defineTool({
  schema: {
    name: 'search',
    description: 'Search the web',
    parameters: {
      type: 'object',
      properties: {
        query: { type: 'string', description: 'Search query' }
      },
      required: ['query']
    }
  },
  execute: async (args) => {
    return `Results for: ${args.query}`
  }
})

const agent = createAgent({ name: 'searcher' })
  .tool(searchTool)

use()

Add middleware to the execution pipeline.
use(middleware: Middleware<TData>): this
middleware (Middleware<TData>, required): Middleware object with a name, a scope, and a run method.
Example:
const loggingMiddleware = {
  name: 'logger',
  scope: ['run:before', 'run:after'],
  async run(mCtx, next) {
    console.log(`[${mCtx.scope}] Starting...`)
    await next()
    console.log(`[${mCtx.scope}] Complete`)
  }
}

const agent = createAgent({ name: 'logged-agent' })
  .use(loggingMiddleware)
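Middleware runs in an onion model: code before await next() executes on the way in, code after it on the way out. A simplified, self-contained sketch of that composition (the real pipeline also dispatches by scope; compose here is illustrative, not the library's internals):

```typescript
type Next = () => Promise<void>
type MiddlewareFn = (next: Next) => Promise<void>

// Compose middleware right-to-left so the first middleware added
// becomes the outermost layer of the onion.
function compose(middlewares: MiddlewareFn[], core: Next): Next {
  return middlewares.reduceRight<Next>(
    (next, mw) => () => mw(next),
    core
  )
}

const order: string[] = []
const pipeline = compose(
  [
    async (next) => { order.push('a:in'); await next(); order.push('a:out') },
    async (next) => { order.push('b:in'); await next(); order.push('b:out') },
  ],
  async () => { order.push('core') } // the innermost "real" work
)

pipeline().then(() => console.log(order.join(' -> ')))
// a:in -> b:in -> core -> b:out -> a:out
```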

memory()

Set the memory provider for conversation history.
memory(provider: MemoryProvider): this
provider (MemoryProvider, required): Memory provider instance (e.g., from @agentlib/memory).
Example:
import { inMemoryStorage } from '@agentlib/memory'

const agent = createAgent({ name: 'stateful-agent' })
  .memory(inMemoryStorage())

policy()

Set or update execution policy constraints.
policy(policy: AgentPolicy): this
policy (AgentPolicy, required): Policy object with execution constraints such as maxSteps, timeout, and allowedTools.
Example:
const agent = createAgent({ name: 'safe-agent' })
  .policy({
    maxSteps: 10,
    timeout: 30000,
    allowedTools: ['search', 'calculate']
  })
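Conceptually, a policy is a guard evaluated on every step. A rough sketch of how maxSteps- and allowedTools-style constraints can be enforced (hypothetical helper names, not the library's actual internals):

```typescript
interface Policy {
  maxSteps: number
  allowedTools: string[]
}

// Hypothetical guard: throws if a step or tool call violates the policy.
function checkStep(policy: Policy, step: number, toolName?: string): void {
  if (step >= policy.maxSteps) {
    throw new Error(`policy violation: exceeded maxSteps (${policy.maxSteps})`)
  }
  if (toolName && !policy.allowedTools.includes(toolName)) {
    throw new Error(`policy violation: tool '${toolName}' not allowed`)
  }
}

const policy: Policy = { maxSteps: 10, allowedTools: ['search', 'calculate'] }

checkStep(policy, 0, 'search')      // ok
// checkStep(policy, 0, 'deleteDb') // would throw: tool not allowed
// checkStep(policy, 10)            // would throw: exceeded maxSteps
```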

reasoning()

Set the reasoning engine or strategy.
reasoning(engine: ReasoningEngine<TData> | ReasoningStrategy): this
engine (ReasoningEngine<TData> | ReasoningStrategy, required): Either a built-in strategy name ('react' | 'planner' | 'cot' | 'reflect' | 'autonomous') or a custom engine instance.
Example:
// Using a built-in strategy
const agent = createAgent({ name: 'react-agent' })
  .reasoning('react')

// Using a custom engine
const customEngine = {
  name: 'custom',
  async execute(rCtx) {
    // Custom reasoning logic
    return 'result'
  }
}

const customAgent = createAgent({ name: 'custom-agent' })
  .reasoning(customEngine)

set() / get()

Store and retrieve arbitrary key-value data.
set(key: string, value: unknown): this
get(key: string): unknown
Example:
const agent = createAgent({ name: 'stateful' })
  .set('apiKey', process.env.API_KEY)
  .set('retryCount', 3)

const apiKey = agent.get('apiKey')

Event Handling

on()

Subscribe to agent lifecycle events.
on<K extends keyof AgentEventMap>(
  event: K,
  handler: EventHandler<AgentEventMap[K]>
): this
event (keyof AgentEventMap, required): Event name, one of:
  • run:start - Execution started
  • run:end - Execution completed
  • step:start - Reasoning step started
  • step:end - Reasoning step ended
  • step:reasoning - Reasoning step emitted
  • model:request - LLM request sent
  • model:response - LLM response received
  • tool:before - Tool about to execute
  • tool:after - Tool execution completed
  • memory:read - Memory loaded
  • memory:write - Memory persisted
  • cancel - Execution cancelled
  • error - Error occurred
handler (EventHandler<AgentEventMap[K]>, required): Callback function receiving the event payload.
Example:
const agent = createAgent({ name: 'observable' })
  .on('run:start', ({ input, sessionId }) => {
    console.log(`Starting run for session ${sessionId}: ${input}`)
  })
  .on('tool:before', ({ name, args }) => {
    console.log(`Calling tool ${name} with:`, args)
  })
  .on('run:end', ({ output, state }) => {
    console.log(`Completed with output: ${output}`)
    console.log(`Total tokens used: ${state.usage.totalTokens}`)
  })
  .on('error', (error) => {
    console.error('Agent error:', error)
  })
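The generic signature ties each event name to its payload type, so handlers are typed without casts. A self-contained sketch of the same pattern with a reduced event map (the map contents and class here are illustrative):

```typescript
// Reduced, illustrative event map: each key names an event,
// each value is that event's payload type.
interface MiniEventMap {
  'run:start': { input: string }
  'run:end': { output: string }
}

type Handler<T> = (payload: T) => void

class MiniEmitter {
  private handlers: { [K in keyof MiniEventMap]?: Handler<MiniEventMap[K]>[] } = {}

  // `K` links the event name to its payload, so `handler` is fully typed.
  on<K extends keyof MiniEventMap>(event: K, handler: Handler<MiniEventMap[K]>): this {
    const list = this.handlers[event] ?? (this.handlers[event] = [])
    list.push(handler)
    return this
  }

  emit<K extends keyof MiniEventMap>(event: K, payload: MiniEventMap[K]): void {
    for (const h of this.handlers[event] ?? []) h(payload)
  }
}

const seen: string[] = []
new MiniEmitter()
  .on('run:start', ({ input }) => seen.push(`start:${input}`)) // payload inferred as { input: string }
  .emit('run:start', { input: 'hi' })

console.log(seen.join(', ')) // start:hi
```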

Execution Methods

run()

Execute the agent with the given input.
run(options: RunOptions<TData> | string): Promise<RunResult>
options (RunOptions<TData> | string, required): Either a string input or a full RunOptions object with fields such as input, data, sessionId, and signal.
Returns: Promise<RunResult>
Examples:
// Simple string input
const result = await agent.run('What is 2+2?')
console.log(result.output)

// Full options
const result = await agent.run({
  input: 'Analyze this data',
  data: { userId: 'user-123' },
  sessionId: 'session-abc'
})

// With abort signal
const controller = new AbortController()
setTimeout(() => controller.abort(), 5000)

try {
  const result = await agent.run({
    input: 'Long running task',
    signal: controller.signal
  })
} catch (error) {
  console.log('Aborted')
}
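Honoring a signal amounts to checking it between steps and rejecting once it has fired. A simplified, self-contained sketch of that pattern (not the library's actual run loop):

```typescript
// Simplified cancellable loop: check the signal between steps
// and reject as soon as it has been aborted.
async function runSteps(steps: number, signal?: AbortSignal): Promise<number> {
  let completed = 0
  for (let i = 0; i < steps; i++) {
    if (signal?.aborted) {
      throw new Error(`aborted after ${completed} steps`)
    }
    await Promise.resolve() // stand-in for one reasoning step
    completed++
  }
  return completed
}

const controller = new AbortController()
controller.abort() // abort immediately for the demo

runSteps(5, controller.signal)
  .catch((err) => console.log(err.message)) // aborted after 0 steps
```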

cancel()

Cancel ongoing execution(s).
cancel(reason?: string): void
reason (string, optional): Cancellation reason.
Example:
const agent = createAgent({ name: 'cancellable' })

// Start long-running task
const promise = agent.run('Complex analysis')

// Cancel after 5 seconds
setTimeout(() => {
  agent.cancel('Timeout exceeded')
}, 5000)

try {
  await promise
} catch (error) {
  console.log('Cancelled:', error)
}

Complete Example

import { createAgent, defineTool } from '@agentlib/core'
import { openai } from '@agentlib/openai'
import { inMemoryStorage } from '@agentlib/memory'

interface AgentData {
  userId: string
  context: Record<string, any>
}

// Define tools
const fetchData = defineTool<AgentData>({
  schema: {
    name: 'fetchData',
    description: 'Fetch user-specific data',
    parameters: {
      type: 'object',
      properties: {
        key: { type: 'string' }
      },
      required: ['key']
    }
  },
  execute: async (args, ctx) => {
    return ctx.data.context[args.key as string]
  }
})

// Create and configure agent
const agent = createAgent<AgentData>({
  name: 'data-assistant',
  description: 'Helps users analyze their data',
  systemPrompt: 'You are a helpful data analysis assistant.',
  data: {
    userId: 'default',
    context: {}
  }
})
  .provider(openai({ model: 'gpt-4' }))
  .tool(fetchData)
  .memory(inMemoryStorage())
  .reasoning('react')
  .policy({ maxSteps: 10, timeout: 60000 })
  .on('tool:before', ({ name, args }) => {
    console.log(`Executing ${name}:`, args)
  })
  .on('run:end', ({ state }) => {
    console.log(`Tokens used: ${state.usage.totalTokens}`)
  })

// Execute
const result = await agent.run({
  input: 'What data do I have available?',
  data: {
    userId: 'user-456',
    context: { sales: 1000, revenue: 50000 }
  },
  sessionId: 'session-123'
})

console.log(result.output)
