Basic Agent Example

This example demonstrates the fundamental concepts of AgentLIB: creating an agent, adding tools, configuring providers, and running the agent.

What You’ll Learn

  • How to create a basic agent
  • How to define and register tools
  • How to configure an LLM provider (OpenAI)
  • How to add memory and middleware
  • How to handle agent events
  • How to run the agent with input

Complete Code

import 'dotenv/config'

import { createAgent, defineTool } from '@agentlib/core'
import { openai } from '@agentlib/openai'
import { createLogger } from '@agentlib/logger'
import { BufferMemory } from '@agentlib/memory'

// ─── 1. Define Tools ──────────────────────────────────────────────────────────

interface WeatherResult {
    location: string
    temperature: number
    condition: string
}

const getWeatherTool = defineTool({
    schema: {
        name: 'get_weather',
        description: 'Get the current weather for a location.',
        parameters: {
            type: 'object',
            properties: {
                location: { type: 'string', description: 'The city to get weather for.' },
            },
            required: ['location'],
        },
    },
    async execute({ location }): Promise<WeatherResult> {
        // In production: call a real weather API
        return { location: String(location), temperature: 22, condition: 'sunny' }
    },
})

// ─── 2. Typed Agent ───────────────────────────────────────────────────────────

interface AppData {
    userId: string
    plan: 'free' | 'pro'
}

const agent = createAgent<AppData>({
    name: 'assistant',
    systemPrompt: 'You are a helpful assistant. Use tools when appropriate.',
    data: { userId: 'default', plan: 'free' },
    policy: {
        maxSteps: 10,
        tokenBudget: 10_000,
    },
})
    // Provider
    .provider(
        openai({
            apiKey: process.env['OPENAI_API_KEY'] ?? '',
            model: process.env['OPENAI_MODEL'] ?? 'gpt-4o',
            baseURL: process.env['OPENAI_BASE_URL'] ?? 'https://api.openai.com/v1',
        }),
    )
    // Memory
    .memory(new BufferMemory({ maxMessages: 20 }))
    // Tools
    .tool(getWeatherTool)
    // Middleware
    .use(
        createLogger({
            level: 'debug',
            timing: true,
            prefix: '[weather-agent]',
        }),
    )
    // Custom middleware: enforce plan limits
    .use({
        name: 'plan-guard',
        scope: 'run:before',
        async run(mCtx, next) {
            if (mCtx.ctx.data.plan === 'free' && mCtx.ctx.input.length > 500) {
                throw new Error('Input too long for free plan.')
            }
            await next()
        },
    })

// ─── 3. Event Listeners ───────────────────────────────────────────────────────

agent.on('run:start', ({ input }: { input: string }) => {
    console.log(`[run:start] input="${input}"`)
})

agent.on('tool:after', ({ tool, result }: { tool: string; result: unknown }) => {
    console.log(`[tool:after] ${tool} →`, result)
})

agent.on('run:end', ({ output }: { output: string }) => {
    console.log(`[run:end] output="${output}"`)
})

// ─── 4. Run ───────────────────────────────────────────────────────────────────

async function main() {
    const result = await agent.run({
        input: 'What is the weather in Buenos Aires and Tokyo?',
        data: { userId: 'user-123', plan: 'pro' },
    })

    console.log('\nFinal response:')
    console.log(result.output)
    console.log('\nToken usage:', result.state.usage)
    console.log('Steps taken:', result.state.steps.length)
}

main().catch(console.error)

Code Breakdown

1. Define Tools

Tools are the actions your agent can perform. Use defineTool() to create a tool with a schema and execution function:
const getWeatherTool = defineTool({
    schema: {
        name: 'get_weather',
        description: 'Get the current weather for a location.',
        parameters: {
            type: 'object',
            properties: {
                location: { type: 'string', description: 'The city to get weather for.' },
            },
            required: ['location'],
        },
    },
    async execute({ location }): Promise<WeatherResult> {
        return { location: String(location), temperature: 22, condition: 'sunny' }
    },
})
The schema follows the JSON Schema format and tells the LLM when and how to use the tool. The execute function contains your tool’s logic.
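Because the schema declares which parameters are required, it can also be used client-side to reject malformed tool calls before `execute` runs. A minimal sketch of such a pre-flight check (plain TypeScript, not part of AgentLIB's API):

```typescript
// Hypothetical pre-flight validation: report any required parameters
// the model failed to supply before handing arguments to execute().
interface ToolSchema {
    name: string
    parameters: { required?: string[] }
}

function missingArgs(schema: ToolSchema, args: Record<string, unknown>): string[] {
    return (schema.parameters.required ?? []).filter((key) => !(key in args))
}

const schema: ToolSchema = { name: 'get_weather', parameters: { required: ['location'] } }
console.log(missingArgs(schema, {}))                    // ['location']
console.log(missingArgs(schema, { location: 'Tokyo' })) // []
```

Real frameworks typically run a full JSON Schema validator here; checking `required` is the simplest useful subset.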

2. Create the Agent

Create an agent with configuration options:
const agent = createAgent<AppData>({
    name: 'assistant',
    systemPrompt: 'You are a helpful assistant. Use tools when appropriate.',
    data: { userId: 'default', plan: 'free' },
    policy: {
        maxSteps: 10,
        tokenBudget: 10_000,
    },
})
  • name: Identifies your agent
  • systemPrompt: Instructions for the LLM
  • data: Custom typed data available throughout the agent lifecycle
  • policy: Limits to prevent runaway execution
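Conceptually, a policy like this is checked at the top of every step of the agent loop. A sketch of that enforcement (hypothetical names, not AgentLIB's internals):

```typescript
interface Policy {
    maxSteps: number
    tokenBudget: number
}

// Hypothetical guard: the loop stops as soon as either limit is hit,
// preventing runaway tool-calling or unbounded token spend.
function shouldContinue(policy: Policy, steps: number, tokensUsed: number): boolean {
    return steps < policy.maxSteps && tokensUsed < policy.tokenBudget
}

const policy: Policy = { maxSteps: 10, tokenBudget: 10_000 }
console.log(shouldContinue(policy, 3, 2_500))   // true
console.log(shouldContinue(policy, 10, 2_500))  // false: step limit reached
console.log(shouldContinue(policy, 3, 10_000))  // false: token budget exhausted
```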

3. Configure the Agent

Chain methods to add capabilities:
agent
    .provider(openai({ apiKey: process.env['OPENAI_API_KEY'] ?? '', model: 'gpt-4o' }))
    .memory(new BufferMemory({ maxMessages: 20 }))
    .tool(getWeatherTool)
    .use(createLogger({ level: 'debug' }))
  • provider: The LLM backend (OpenAI, Anthropic, etc.)
  • memory: Stores conversation history
  • tool: Registers tools the agent can use
  • use: Adds middleware for logging, guardrails, etc.

4. Add Event Listeners

Listen to agent lifecycle events:
agent.on('run:start', ({ input }) => {
    console.log(`[run:start] input="${input}"`)
})

agent.on('tool:after', ({ tool, result }) => {
    console.log(`[tool:after] ${tool} →`, result)
})
Available events include run:start, run:end, tool:before, tool:after, and more.
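Under the hood this is an event-emitter pattern where each event name maps to a typed payload. A minimal sketch of such a typed emitter (illustrative only, not AgentLIB's implementation):

```typescript
// Hypothetical event map: each event name is tied to its payload shape,
// so listeners get type-checked payloads like agent.on() above.
type Events = {
    'run:start': { input: string }
    'run:end': { output: string }
}

class Emitter {
    private listeners = new Map<string, Array<(payload: unknown) => void>>()

    on<K extends keyof Events>(event: K, fn: (payload: Events[K]) => void): void {
        const list = this.listeners.get(event) ?? []
        list.push(fn as (payload: unknown) => void)
        this.listeners.set(event, list)
    }

    emit<K extends keyof Events>(event: K, payload: Events[K]): void {
        for (const fn of this.listeners.get(event) ?? []) fn(payload)
    }
}

const bus = new Emitter()
bus.on('run:start', ({ input }) => console.log(`started with "${input}"`))
bus.emit('run:start', { input: 'hello' }) // prints: started with "hello"
```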

5. Run the Agent

Execute the agent with input:
const result = await agent.run({
    input: 'What is the weather in Buenos Aires and Tokyo?',
    data: { userId: 'user-123', plan: 'pro' },
})

console.log(result.output)        // The agent's response
console.log(result.state.usage)   // Token usage stats
console.log(result.state.steps)   // Steps taken during execution
The agent will:
  1. Receive the input
  2. Decide which tools to call (if any)
  3. Execute the tools
  4. Generate a final response
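The four steps above form a loop, since tool results are fed back to the model until it answers in plain text. A sketch of that loop with a stubbed model (hypothetical names, not AgentLIB's internals):

```typescript
interface ToolCall { name: string; args: Record<string, unknown> }
interface ModelTurn { toolCalls: ToolCall[]; text: string }

// Hypothetical agent loop: ask the model, execute any requested tool
// calls, append results to the history, and repeat until the model
// replies with plain text or the step limit is reached.
async function agentLoop(
    callModel: (history: string[]) => Promise<ModelTurn>,
    tools: Record<string, (args: Record<string, unknown>) => Promise<unknown>>,
    input: string,
    maxSteps = 10,
): Promise<string> {
    const history = [input]
    for (let step = 0; step < maxSteps; step++) {
        const turn = await callModel(history)
        if (turn.toolCalls.length === 0) return turn.text
        for (const call of turn.toolCalls) {
            const result = await tools[call.name]?.(call.args)
            history.push(JSON.stringify(result))
        }
    }
    return 'Step limit reached.'
}

// Stub model: requests a tool on the first turn, then answers.
let turnCount = 0
const stubModel = async (_history: string[]): Promise<ModelTurn> =>
    turnCount++ === 0
        ? { toolCalls: [{ name: 'get_weather', args: { location: 'Tokyo' } }], text: '' }
        : { toolCalls: [], text: 'It is sunny in Tokyo.' }

agentLoop(stubModel, { get_weather: async () => ({ temperature: 22 }) }, 'Weather in Tokyo?')
    .then((out) => console.log(out)) // It is sunny in Tokyo.
```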
