Tools
Tools enable agents to interact with external systems, perform computations, access data, and execute actions beyond text generation. ADK-TS provides a robust tool system with automatic schema generation, validation, error handling, and seamless LLM integration.
All tools extend `BaseTool`, which provides the foundation for tool implementation:
```typescript
import { BaseTool, ToolContext, FunctionDeclaration } from "@iqai/adk";

export abstract class BaseTool {
  /**
   * Name of the tool (must be alphanumeric with underscores only)
   */
  name: string;

  /**
   * Description of what the tool does
   */
  description: string;

  /**
   * Whether the tool is a long-running operation
   */
  isLongRunning: boolean;

  /**
   * Whether to retry on failure
   */
  shouldRetryOnFailure: boolean;

  /**
   * Maximum retry attempts (default: 3)
   */
  maxRetryAttempts: number;

  /**
   * Returns the function declaration for the LLM
   */
  abstract getDeclaration(): FunctionDeclaration;

  /**
   * Executes the tool with given arguments
   */
  abstract runAsync(
    args: Record<string, any>,
    context: ToolContext,
  ): Promise<any>;
}
```
Here’s a complete example of creating a custom tool:
```typescript
import { BaseTool, ToolContext, FunctionDeclaration } from "@iqai/adk";

export class WeatherTool extends BaseTool {
  constructor() {
    super({
      name: "get_weather",
      description: "Get current weather information for a location",
      shouldRetryOnFailure: true,
      maxRetryAttempts: 3,
    });
  }

  getDeclaration(): FunctionDeclaration {
    return {
      name: this.name,
      description: this.description,
      parameters: {
        type: "object",
        properties: {
          location: {
            type: "string",
            description: "City name or coordinates (lat,lon)",
          },
          units: {
            type: "string",
            description: "Temperature units",
            enum: ["celsius", "fahrenheit"],
          },
        },
        required: ["location"],
      },
    };
  }

  async runAsync(
    args: Record<string, any>,
    context: ToolContext,
  ): Promise<any> {
    const { location, units = "celsius" } = args;

    // Access session state
    const userPreferences = context.state.get("preferences");

    // Call external API (encode user-supplied values before interpolating)
    const response = await fetch(
      `https://api.weather.com/data?location=${encodeURIComponent(location)}&units=${units}`
    );
    if (!response.ok) {
      throw new Error(`Weather API returned ${response.status}`);
    }
    const data = await response.json();

    return {
      location,
      temperature: data.temp,
      condition: data.condition,
      units,
    };
  }
}
```
Function Declaration Schema
The FunctionDeclaration uses JSON Schema for parameter validation:
```typescript
interface FunctionDeclaration {
  /** Function name (alphanumeric and underscores only) */
  name: string;
  /** Description of what the function does */
  description: string;
  /** JSON Schema for parameters */
  parameters: {
    type: "object";
    properties: {
      [key: string]: {
        type: "string" | "number" | "boolean" | "array" | "object";
        description: string;
        enum?: string[];
        items?: any;
        properties?: any;
      };
    };
    required?: string[];
  };
}
```
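As a concrete instance of this schema, here is a declaration for a hypothetical `search_products` tool (the name, fields, and defaults are illustrative, not part of ADK-TS):

```typescript
// Hypothetical declaration for an illustrative "search_products" tool.
// Its shape mirrors the FunctionDeclaration interface above.
const searchProductsDeclaration = {
  name: "search_products",
  description:
    "Search the product catalog by keyword. Returns up to `limit` matches.",
  parameters: {
    type: "object" as const,
    properties: {
      query: { type: "string" as const, description: "Search keywords" },
      limit: {
        type: "number" as const,
        description: "Maximum number of results (default 10)",
      },
      inStockOnly: {
        type: "boolean" as const,
        description: "If true, exclude out-of-stock items",
      },
    },
    // Only `query` is mandatory; the LLM may omit the rest.
    required: ["query"],
  },
};
```

Optional parameters simply stay out of `required`; the description on each property is what the model reads when deciding which arguments to supply.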
Use detailed descriptions in your schema. LLMs use these to understand when and how to call your tools.
ToolContext
The ToolContext provides tools with access to session data and services:
```typescript
interface ToolContext {
  /** Current session */
  session: Session;
  /** Session state management */
  state: State;
  /** Current agent */
  agent: BaseAgent;
  /** Memory service (if configured) */
  memoryService?: MemoryService;
  /** Artifact service (if configured) */
  artifactService?: BaseArtifactService;
  /** Current function call details */
  functionCall?: {
    name: string;
    args: Record<string, any>;
  };
}
```
Using ToolContext
```typescript
class DataStorageTool extends BaseTool {
  // constructor and getDeclaration() omitted for brevity

  async runAsync(args: Record<string, any>, context: ToolContext) {
    // Read from state (second argument is the default value)
    const existingData = context.state.get("user_data", []);

    // Update state
    context.state.set("user_data", [...existingData, args.newData]);

    // Access memory
    if (context.memoryService) {
      const memories = await context.memoryService.search({
        query: args.searchQuery,
        userId: context.session.userId,
        appName: context.session.appName,
      });
    }

    // Save artifact
    if (context.artifactService) {
      await context.artifactService.saveArtifact({
        appName: context.session.appName,
        userId: context.session.userId,
        sessionId: context.session.id,
        filename: "data.json",
        artifact: { text: JSON.stringify(args.newData) },
      });
    }

    return { success: true };
  }
}
```
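The `state.get(key, default)` / `state.set(key, value)` pattern used above can be pictured with a minimal in-memory sketch. This stand-in is illustrative only; it is not ADK-TS's actual `State` class, which also handles persistence and scoping:

```typescript
// Minimal stand-in for session state — illustrative, not the real State class.
class InMemoryState {
  private data = new Map<string, unknown>();

  // Returns the stored value, or the supplied default when the key is absent.
  get<T>(key: string, defaultValue: T): T {
    return this.data.has(key) ? (this.data.get(key) as T) : defaultValue;
  }

  set(key: string, value: unknown): void {
    this.data.set(key, value);
  }
}

const state = new InMemoryState();

// First read: the key is absent, so the default ([]) comes back.
const existing = state.get<string[]>("user_data", []);

// Append-and-write, exactly as DataStorageTool does above.
state.set("user_data", [...existing, "first entry"]);
```

The default-value form of `get` is what lets tools treat "no data yet" and "empty list" uniformly, avoiding undefined checks on every read.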
Built-in Tools
ADK-TS includes several built-in tools:
HttpRequestTool lets agents make HTTP requests:
```typescript
import { HttpRequestTool } from "@iqai/adk";

const agent = new LlmAgent({
  name: "api_agent",
  model: "gpt-4o",
  tools: [new HttpRequestTool()],
});

// Agent can now make HTTP requests:
// "Fetch the latest posts from https://api.example.com/posts"
```
FileOperationsTool reads and writes files:
```typescript
import { FileOperationsTool } from "@iqai/adk";

const agent = new LlmAgent({
  name: "file_agent",
  model: "gpt-4o",
  tools: [new FileOperationsTool()],
});

// Agent can read/write files:
// "Read the contents of config.json"
```
UserInteractionTool requests input from users:
```typescript
import { UserInteractionTool } from "@iqai/adk";

const agent = new LlmAgent({
  name: "interactive_agent",
  model: "gpt-4o",
  tools: [new UserInteractionTool()],
});

// Agent can ask clarifying questions
```
FunctionTool converts regular functions into tools automatically:
```typescript
import { FunctionTool } from "@iqai/adk";
import { z } from "zod";

// Define function with Zod schema
const calculateDistance = new FunctionTool({
  name: "calculate_distance",
  description: "Calculate distance between two points",
  inputSchema: z.object({
    lat1: z.number().describe("Latitude of first point"),
    lon1: z.number().describe("Longitude of first point"),
    lat2: z.number().describe("Latitude of second point"),
    lon2: z.number().describe("Longitude of second point"),
  }),
  execute: async ({ lat1, lon1, lat2, lon2 }) => {
    // Haversine great-circle distance
    const R = 6371; // Earth's radius in km
    const dLat = ((lat2 - lat1) * Math.PI) / 180;
    const dLon = ((lon2 - lon1) * Math.PI) / 180;
    const a =
      Math.sin(dLat / 2) * Math.sin(dLat / 2) +
      Math.cos((lat1 * Math.PI) / 180) *
        Math.cos((lat2 * Math.PI) / 180) *
        Math.sin(dLon / 2) *
        Math.sin(dLon / 2);
    const c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    return { distance: R * c, unit: "km" };
  },
});

const agent = new LlmAgent({
  name: "geo_agent",
  model: "gpt-4o",
  tools: [calculateDistance],
});
```
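To sanity-check the haversine math used in `calculate_distance`, the same formula can be run standalone, outside the framework. The city coordinates below are approximate and purely illustrative:

```typescript
// Standalone haversine — identical math to the execute() body above.
function haversineKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const R = 6371; // Earth's mean radius in km
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return R * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}

// London → Paris is roughly 340–345 km great-circle distance.
const londonParis = haversineKm(51.5074, -0.1278, 48.8566, 2.3522);
```

Keeping the pure computation in a named function like this also makes the tool trivially unit-testable, independent of any agent run.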
MCP (Model Context Protocol)
Integrate external MCP servers as tools:
```typescript
import { MCPClient } from "@iqai/adk";

const mcpClient = new MCPClient({
  serverPath: "npx",
  args: ["-y", "@modelcontextprotocol/server-filesystem", "/allowed/path"],
});

await mcpClient.connect();

const agent = new LlmAgent({
  name: "mcp_agent",
  model: "gpt-4o",
  tools: await mcpClient.getTools(),
});
```
MCP enables integration with external tools like filesystem access, database queries, API clients, and more.
Tool Callbacks
Add hooks that run before and after tool execution:
```typescript
const agent = new LlmAgent({
  name: "monitored_agent",
  model: "gpt-4o",
  tools: [weatherTool, searchTool],

  // Called before tool execution
  beforeToolCallback: async (tool, args, context) => {
    console.log(`Executing tool: ${tool.name}`);
    console.log(`Arguments:`, args);

    // Validate or modify arguments
    if (args.apiKey) {
      delete args.apiKey; // Remove sensitive data from logs
    }

    return args; // Return modified args
  },

  // Called after tool execution
  afterToolCallback: async (tool, args, context, response) => {
    console.log(`Tool ${tool.name} completed`);
    console.log(`Response:`, response);

    // Log to analytics
    await logToolUsage(tool.name, context.session.userId);

    return response; // Can modify response
  },
});
```
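The argument-scrubbing step inside `beforeToolCallback` can be factored into a small pure helper, which avoids mutating the caller's object. The key list here is an assumption for illustration; choose it to match your own tools:

```typescript
// Illustrative helper: returns a copy of the args with sensitive keys removed,
// leaving the original object untouched. The key list is an assumption.
const SENSITIVE_KEYS = ["apiKey", "password", "token"];

function redactArgs(args: Record<string, unknown>): Record<string, unknown> {
  const clean: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(args)) {
    if (!SENSITIVE_KEYS.includes(key)) clean[key] = value;
  }
  return clean;
}

const original = { url: "https://example.com", apiKey: "secret" };
const safe = redactArgs(original);
```

Returning a fresh object rather than `delete`-ing in place keeps the callback side-effect free, which matters if other callbacks or the tool itself still need the original arguments.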
Error Handling
Tools can throw errors that the agent will handle:
```typescript
class ApiTool extends BaseTool {
  constructor() {
    super({
      name: "api_call",
      description: "Call external API",
      shouldRetryOnFailure: true,
      maxRetryAttempts: 3,
    });
  }

  // getDeclaration() omitted for brevity

  async runAsync(args: Record<string, any>, context: ToolContext) {
    try {
      const response = await fetch(args.url);
      if (!response.ok) {
        throw new Error(`API returned ${response.status}`);
      }
      return await response.json();
    } catch (error) {
      // Error will be retried up to maxRetryAttempts
      const message = error instanceof Error ? error.message : String(error);
      throw new Error(`API call failed: ${message}`);
    }
  }
}
```
Tools with shouldRetryOnFailure: true will automatically retry with exponential backoff on failure.
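The retry behavior can be pictured with a minimal sketch. This is not ADK-TS's internal implementation, and the actual backoff base and cap may differ; it only illustrates the "retry with doubling delay" idea:

```typescript
// Illustrative retry loop with exponential backoff — a sketch of the
// behavior described above, not the framework's actual code.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 10,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < maxAttempts) {
        // Delay doubles each attempt: base, 2x base, 4x base, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
      }
    }
  }
  throw lastError;
}

// Fails twice, then succeeds on the third attempt.
let attempts = 0;
const result = await withRetry(async () => {
  attempts++;
  if (attempts < 3) throw new Error("transient failure");
  return "ok";
});
```

Because retries only help with transient failures, tools that hit permanent errors (for example a 404) should consider throwing a distinctly-worded error so the agent can adjust its arguments instead of blindly retrying.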
Long-Running Tools
For operations that take significant time, mark the tool as long-running:
```typescript
class ProcessingTool extends BaseTool {
  constructor() {
    super({
      name: "process_data",
      description: "Process large dataset",
      isLongRunning: true, // Marks tool as long-running
    });
  }

  // getDeclaration() omitted for brevity

  async runAsync(args: Record<string, any>, context: ToolContext) {
    // Start async processing
    const jobId = await startProcessingJob(args.data);

    // Return job ID immediately
    return {
      jobId,
      status: "processing",
      message: "Processing started. Use check_status to monitor progress.",
    };
  }
}
```
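The job-handle pattern above can be sketched with an in-memory job store. The names (`startProcessingJob`, `checkStatus`) are hypothetical; in practice the store would be your own queue or database, and a companion status-check tool would query it:

```typescript
// Illustrative in-memory job store backing a long-running tool and a
// companion status-check tool. All names here are hypothetical.
type JobStatus = "processing" | "done";

const jobs = new Map<string, { status: JobStatus; result?: unknown }>();
let nextJobId = 0;

function startProcessingJob(data: unknown): string {
  const jobId = `job_${++nextJobId}`;
  jobs.set(jobId, { status: "processing" });
  // Simulate the work finishing later; a real job would run out-of-process.
  setTimeout(() => jobs.set(jobId, { status: "done", result: data }), 0);
  return jobId;
}

function checkStatus(jobId: string): JobStatus | "unknown" {
  return jobs.get(jobId)?.status ?? "unknown";
}

const jobId = startProcessingJob({ rows: 1000 });
// Immediately after starting, the job reports "processing".
const initial = checkStatus(jobId);
```

The key property is that `runAsync` returns the handle immediately, so the agent's turn is not blocked; progress is observed through subsequent tool calls rather than by awaiting the work itself.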
Tool Organization
Organize tools by domain or functionality:
```typescript
// Database tools
const dbTools = [new QueryTool(), new InsertTool(), new UpdateTool()];

// API tools
const apiTools = [new RestApiTool(), new GraphQLTool()];

// File tools
const fileTools = [new FileOperationsTool(), new ImageProcessingTool()];

const agent = new LlmAgent({
  name: "full_stack_agent",
  model: "gpt-4o",
  tools: [...dbTools, ...apiTools, ...fileTools],
});
```
Best Practices
- Clear Naming: Use descriptive, action-oriented names (`get_weather`, not `weather`).
- Detailed Descriptions: Explain what the tool does, when to use it, and what it returns.
- Parameter Validation: Use JSON Schema to validate inputs thoroughly.
- Error Messages: Return clear, actionable error messages that help the agent retry or adjust.
- State Management: Use `ToolContext.state` for conversation-scoped data.
- Idempotency: Design tools to be safe when called multiple times with the same arguments.
- Security: Validate and sanitize all inputs, especially for file operations and API calls.
Related Topics
- Agents - Learn how agents use tools
- Sessions - Understand session state available to tools
- Flows - See how tool calls fit into the processing pipeline