Tool Calling Example

Tools are the primary way agents interact with the world. This example shows different approaches to defining and using tools in AgentLIB.

What You’ll Learn

  • How to define tools using defineTool()
  • How to use decorator-based tool definitions
  • How to handle tool parameters and types
  • How agents automatically decide when to call tools

Functional Approach

The most common way to define tools is with the defineTool() function:
import { createAgent, defineTool } from '@agentlib/core'
import { openai } from '@agentlib/openai'

// Define a tool with schema and execution logic
const getWeatherTool = defineTool({
    schema: {
        name: 'get_weather',
        description: 'Get the current weather for a location.',
        parameters: {
            type: 'object',
            properties: {
                location: { type: 'string', description: 'The city to get weather for.' },
            },
            required: ['location'],
        },
    },
    async execute({ location }) {
        // In production: call a real weather API
        return { 
            location: String(location), 
            temperature: 22, 
            condition: 'sunny' 
        }
    },
})

// Create agent and register the tool
const agent = createAgent({
    name: 'weather-assistant',
    systemPrompt: 'You are a helpful weather assistant. Use tools when appropriate.',
})
    .provider(openai({ 
        apiKey: process.env['OPENAI_API_KEY'], 
        model: 'gpt-4o' 
    }))
    .tool(getWeatherTool) // Register the tool

// Run the agent
const result = await agent.run({
    input: 'What is the weather in Paris?',
})

console.log(result.output)
// The agent will call get_weather with { location: "Paris" }
// and respond with the weather information

Tool Schema

The schema follows JSON Schema format:
schema: {
    name: 'get_weather',           // Tool identifier
    description: 'Get the current weather for a location.', // Helps LLM decide when to use it
    parameters: {                  // JSON Schema for parameters
        type: 'object',
        properties: {
            location: { 
                type: 'string', 
                description: 'The city to get weather for.' 
            },
        },
        required: ['location'],    // Required parameters
    },
}
Best Practices:
  • Use clear, descriptive names
  • Write detailed descriptions (helps the LLM choose the right tool)
  • Specify types accurately
  • Mark required parameters
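
As a sketch of these practices, here is a hypothetical schema (the `convert_currency` tool and its fields are illustrative, not part of AgentLIB): a descriptive name, a detailed description, accurate types including `number` vs `integer`, and an explicit required list that leaves truly optional parameters out.

```typescript
// Hypothetical example of a well-specified tool schema.
const convertCurrencySchema = {
    name: 'convert_currency', // clear, descriptive identifier
    description: 'Convert an amount from one currency to another using current exchange rates.',
    parameters: {
        type: 'object',
        properties: {
            amount: { type: 'number', description: 'The amount to convert.' },
            from: { type: 'string', description: 'ISO 4217 code of the source currency, e.g. "USD".' },
            to: { type: 'string', description: 'ISO 4217 code of the target currency, e.g. "EUR".' },
            precision: {
                type: 'integer',
                description: 'Decimal places in the result. Optional; defaults to 2.',
            },
        },
        required: ['amount', 'from', 'to'], // precision is intentionally optional
    },
}
```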

Execute Function

The execute function receives the parameters and returns the tool result:
async execute({ location }) {
    // Type-safe parameters based on your schema
    const weatherData = await fetchWeatherAPI(location)
    return weatherData // Return any JSON-serializable data
}
The LLM receives the return value and uses it to formulate its response.
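
Because the return value goes straight back to the LLM, failures are often best returned as structured data rather than thrown: the model can then see what went wrong and recover (retry, or ask the user for clarification). A minimal sketch of this pattern, with a stub standing in for the real weather API (`fetchWeatherAPI` is an assumed helper, not an AgentLIB export):

```typescript
// Stub standing in for a real weather client; replace with your own API call.
async function fetchWeatherAPI(location: string) {
    if (!location) throw new Error('empty location')
    return { location, temperature: 22, condition: 'sunny' }
}

// Return a structured error instead of throwing, so the LLM receives the
// failure as ordinary tool output and can decide how to recover.
async function execute({ location }: { location: string }) {
    try {
        return await fetchWeatherAPI(location)
    } catch {
        return { error: `Could not fetch weather for "${location}"` }
    }
}
```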

Class-Based Approach

For more structured agents, use TypeScript decorators:
import { createAgent, Agent, Tool, Arg } from '@agentlib/core'
import { openai } from '@agentlib/openai'
import 'reflect-metadata'

@Agent({
    name: 'weather-assistant',
    systemPrompt: 'You are a helpful assistant with specialized tools.'
})
class MyAgent {
    @Tool({
        name: 'weather',
        description: 'Get the weather in a city.'
    })
    async weather(
        @Arg({ name: 'city', description: 'The city to get weather for.' }) city: string
    ) {
        // Call your weather API
        return { city, temperature: '25°C', condition: 'sunny' }
    }

    @Tool({
        name: 'calculator',
        description: 'Perform basic math operations.'
    })
    async calculate(
        @Arg({ name: 'expression', description: 'Math expression to evaluate' }) expression: string
    ) {
        // In production: use a safe math evaluator, not eval()
        return { result: eval(expression) }
    }
}

// Create agent from class
const agent = createAgent(MyAgent)
    .provider(openai({ 
        apiKey: process.env['OPENAI_API_KEY'], 
        model: 'gpt-4o' 
    }))

const result = await agent.run({
    input: 'What is the weather in London and what is 15 * 23?',
})
// The agent will call both tools automatically

Benefits of Class-Based Tools

  • Type Safety: Parameters are strongly typed
  • Organization: Group related tools in a class
  • Reusability: Share state across tools via class properties
  • Clean Syntax: Decorators make tool definitions concise
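
The state-sharing point deserves a concrete illustration. The sketch below uses plain methods in place of the `@Tool` decorators (so it runs standalone); in a real AgentLIB agent the two methods would carry `@Tool`/`@Arg` decorators, but the sharing mechanism is the same: both tools read and write the same class property.

```typescript
// Sketch: two tool methods sharing state through a class property.
// In AgentLIB these methods would be decorated with @Tool; the class name
// and methods here are hypothetical.
class NotesAgent {
    private notes: string[] = [] // shared state visible to every tool method

    // Tool 1: append a note and report the running count.
    async addNote(text: string) {
        this.notes.push(text)
        return { count: this.notes.length }
    }

    // Tool 2: read back everything Tool 1 has stored.
    async listNotes() {
        return { notes: this.notes }
    }
}
```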

Multiple Tools Example

Agents can use multiple tools in a single run:
const searchTool = defineTool({
    schema: {
        name: 'search',
        description: 'Search for information on the web.',
        parameters: {
            type: 'object',
            properties: {
                query: { type: 'string', description: 'Search query' },
            },
            required: ['query'],
        },
    },
    async execute({ query }) {
        return { results: [`Result for: ${query}`] }
    },
})

const calculatorTool = defineTool({
    schema: {
        name: 'calculator',
        description: 'Perform mathematical calculations.',
        parameters: {
            type: 'object',
            properties: {
                expression: { type: 'string', description: 'Math expression' },
            },
            required: ['expression'],
        },
    },
    async execute({ expression }) {
        // Use a safe math evaluator in production
        return { result: eval(expression) }
    },
})

const agent = createAgent({
    name: 'multi-tool-agent',
    systemPrompt: 'You are a helpful assistant. Use tools when needed.',
})
    .provider(openai({ apiKey: process.env['OPENAI_API_KEY'], model: 'gpt-4o' }))
    .tool(searchTool)
    .tool(calculatorTool)

const result = await agent.run({
    input: 'Search for the population of Tokyo and calculate 12345 * 6789',
})
// The agent will intelligently call both tools

Monitoring Tool Calls

Track tool usage with event listeners:
agent.on('tool:before', ({ tool, args }) => {
    console.log(`Calling ${tool} with:`, args)
})

agent.on('tool:after', ({ tool, result }) => {
    console.log(`${tool} returned:`, result)
})

agent.on('tool:error', ({ tool, error }) => {
    console.error(`${tool} failed:`, error)
})

Tool Execution Flow

When you run an agent:
  1. Agent receives input: User message is processed
  2. LLM decides: Based on the input and available tools, the LLM decides whether to call a tool
  3. Tool execution: If a tool is called, the execute() function runs
  4. Result processing: The tool result is sent back to the LLM
  5. Response generation: The LLM uses the tool result to formulate a response
  6. Loop: Steps 2-5 may repeat if the LLM needs to call more tools
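
The loop above can be sketched in a few lines, with the LLM replaced by a stub decision function. Everything here (`runLoop`, `decide`, the `Decision` type) is illustrative, not an AgentLIB API; it just shows the shape of the decide → execute → feed-back cycle.

```typescript
// Either call a tool with some arguments, or produce a final answer.
type Decision = { tool: string; args: unknown } | { answer: string }

// Conceptual sketch of the agent loop: decide, run the tool, feed the
// result back, repeat until the "LLM" produces an answer.
async function runLoop(
    input: string,
    tools: Record<string, (args: unknown) => Promise<unknown>>,
    decide: (input: string, lastResult?: unknown) => Decision,
): Promise<string> {
    let lastResult: unknown
    while (true) {
        const step = decide(input, lastResult)          // step 2: LLM decides
        if ('answer' in step) return step.answer        // step 5: final response
        lastResult = await tools[step.tool](step.args)  // steps 3-4: run tool, return result
    }
}
```

With a stub `decide` that calls one tool and then answers, the loop terminates after a single tool call; a real LLM may iterate several times.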
