
Overview

The LlmAgent class is the primary agent type for building LLM-powered conversational agents. It extends BaseAgent and adds comprehensive support for LLM models, tools, memory, code execution, schemas, and lifecycle callbacks.
Location: @iqai/adk package, exported from packages/adk/src/agents/llm-agent.ts:269
Type Alias: Agent is an alias for LlmAgent

Constructor

config
LlmAgentConfig<T>
required
Configuration object for the LLM agent
import { LlmAgent } from "@iqai/adk";

const agent = new LlmAgent({
  name: "assistant",
  description: "A helpful AI assistant",
  model: "gemini-2.5-flash",
  instruction: "You are a helpful assistant that provides accurate information."
});
Location: packages/adk/src/agents/llm-agent.ts:393

Configuration Properties

Basic Configuration

config.name
string
required
Name of the agent. Must be a valid identifier.
config.description
string
required
Description of the agent’s capabilities. Used by parent agents for delegation.
config.model
string | BaseLlm | LanguageModel
The LLM model to use. Can be a model name (e.g., “gpt-4”, “gemini-2.5-flash”), a BaseLlm instance, or a LanguageModel from Vercel AI SDK. If not set, inherits from ancestor agents.
config.instruction
string | InstructionProvider
System instruction guiding the agent’s behavior. Can be a static string or a function that returns the instruction based on context.
config.globalInstruction
string | InstructionProvider
Global instruction for the entire agent tree. Only takes effect on the root agent.

Tools and Execution

config.tools
ToolUnion[]
Array of tools available to the agent. Can include BaseTool instances or plain functions (automatically converted to FunctionTool).
config.codeExecutor
BaseCodeExecutor
Code executor for running code snippets.
config.planner
BasePlanner
Planner for step-by-step task execution.
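As noted above, plain functions in `tools` are automatically converted to FunctionTool. A rough, stand-alone sketch of that normalization (FunctionToolLike is a hypothetical stand-in for the real FunctionTool class):

```typescript
// Sketch of ToolUnion[] normalization: plain functions are wrapped into a
// tool-like object keyed by the function's own name, while tool instances
// pass through unchanged. FunctionToolLike is hypothetical, not the real
// @iqai/adk FunctionTool.
type ToolFn = (...args: any[]) => any;

interface FunctionToolLike {
  name: string;
  run: ToolFn;
}

function normalizeTools(
  tools: Array<FunctionToolLike | ToolFn>
): FunctionToolLike[] {
  return tools.map((t) =>
    typeof t === "function" ? { name: t.name, run: t } : t
  );
}

async function getWeather(location: string) {
  return { location, tempC: 21 };
}

const normalized = normalizeTools([getWeather]);
// normalized[0].name === "getWeather"
```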

Multi-Agent Configuration

config.subAgents
BaseAgent[]
Sub-agents that this agent can delegate to.
config.disallowTransferToParent
boolean
default:"false"
Prevents LLM-controlled transfers to the parent agent.
config.disallowTransferToPeers
boolean
default:"false"
Prevents LLM-controlled transfers to peer agents.
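For example, a specialist agent can be pinned so the LLM cannot hand control back up or sideways once it has been delegated to (a config sketch using the fields above):

```typescript
import { LlmAgent } from "@iqai/adk";

const specialist = new LlmAgent({
  name: "specialist",
  description: "Handles one narrow task without delegating elsewhere",
  model: "gemini-2.5-flash",
  // Once control reaches this agent, keep it here:
  disallowTransferToParent: true,
  disallowTransferToPeers: true,
});
```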

Schema Configuration

config.inputSchema
ZodSchema
Zod schema for input validation when agent is used as a tool.
config.outputSchema
ZodSchema
Zod schema for output validation. When set, the agent’s response will be validated against this schema.
When outputSchema is set with tools or transfers enabled, the schema is applied during response post-processing to preserve tool-calling and transfer capabilities.

State and Services

config.outputKey
string
Key in session state to store the agent’s output.
config.includeContents
'default' | 'none'
default:"'default'"
Controls whether to include conversation history in model requests.
config.memoryService
MemoryService
Memory service for long-term storage and retrieval.
config.sessionService
BaseSessionService
Session service for managing conversations.
config.artifactService
BaseArtifactService
Artifact service for file storage and management.
config.userId
string
User ID for the session.
config.appName
string
Application name.
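The outputKey behavior amounts to writing the agent's final response into session state under the configured key, where downstream agents or app code can read it. A minimal stand-alone sketch of that write (session state modeled as a plain record; the real object comes from the session service):

```typescript
// Sketch of outputKey semantics: after the agent finishes, its final
// response is stored in session state under the configured key. Session
// state is modeled here as a plain record for illustration only.
const sessionState: Record<string, unknown> = {};

function storeOutput(
  state: Record<string, unknown>,
  outputKey: string,
  response: string
) {
  state[outputKey] = response;
}

storeOutput(sessionState, "research_result", "AI adoption is accelerating...");
// sessionState.research_result now holds the agent's final response
```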

Advanced Configuration

config.generateContentConfig
GenerateContentConfig
Additional Google Gemini content generation configurations. Note: some fields like tools must be configured via the tools parameter.
config.plugins
BasePlugin[]
Plugins for extending agent behavior with lifecycle hooks.

Callbacks

config.beforeAgentCallback
BeforeAgentCallback
Callback(s) invoked before agent execution.
config.afterAgentCallback
AfterAgentCallback
Callback(s) invoked after agent execution.
config.beforeModelCallback
BeforeModelCallback
Callback(s) invoked before calling the LLM.
config.afterModelCallback
AfterModelCallback
Callback(s) invoked after receiving LLM response.
config.beforeToolCallback
BeforeToolCallback
Callback(s) invoked before executing a tool.
config.afterToolCallback
AfterToolCallback
Callback(s) invoked after executing a tool.

Properties

model

model
string | BaseLlm | LanguageModel
The LLM model instance or identifier.

instruction

instruction
string | InstructionProvider
System instruction for the agent.

globalInstruction

globalInstruction
string | InstructionProvider
Global instruction for the entire agent tree.

tools

tools
ToolUnion[]
Tools available to the agent.

codeExecutor

codeExecutor
BaseCodeExecutor | undefined
Code executor instance.

planner

planner
BasePlanner | undefined
Planner instance for step-by-step execution.

inputSchema / outputSchema

inputSchema
ZodSchema | undefined
Input validation schema.
outputSchema
ZodSchema | undefined
Output validation schema.

outputKey

outputKey
string | undefined
Session state key for storing output.

plugins

plugins
BasePlugin[] | undefined
Active plugins on this agent.

Methods

canonicalModel (getter)

Resolves the model to a BaseLlm instance. If the agent doesn’t have a model set, the lookup walks up the parent chain.
Returns: BaseLlm
Throws: Error if no model is found in the agent hierarchy.
const llm = agent.canonicalModel;
// Returns BaseLlm instance
Location: packages/adk/src/agents/llm-agent.ts:434
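The parent-chain lookup can be pictured with this stand-alone sketch (AgentLike is a hypothetical minimal shape, not the real BaseAgent):

```typescript
// Sketch of the resolution canonicalModel performs: use the agent's own
// model if set, otherwise walk up through ancestors; throw if no model is
// found anywhere in the hierarchy.
interface AgentLike {
  model?: string;
  parent?: AgentLike;
}

function resolveModel(agent: AgentLike): string {
  let current: AgentLike | undefined = agent;
  while (current) {
    if (current.model) return current.model;
    current = current.parent;
  }
  throw new Error("No model found in the agent hierarchy.");
}

const child: AgentLike = { parent: { model: "gemini-2.5-flash" } };
resolveModel(child); // "gemini-2.5-flash"
```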

canonicalInstruction()

Resolves the instruction to a string, executing the function if it’s an InstructionProvider.
ctx
ReadonlyContext
required
The readonly context
Returns: Promise<[string, boolean]>
A tuple of [instruction, isDynamic], where isDynamic indicates whether the instruction was computed from a function.
const [instruction, isDynamic] = await agent.canonicalInstruction(ctx);
Location: packages/adk/src/agents/llm-agent.ts:466

canonicalGlobalInstruction()

Resolves the global instruction to a string.
ctx
ReadonlyContext
required
The readonly context
Returns: Promise<[string, boolean]>
const [globalInst, isDynamic] = await agent.canonicalGlobalInstruction(ctx);
Location: packages/adk/src/agents/llm-agent.ts:479

canonicalTools()

Resolves tools to an array of BaseTool instances, converting functions to FunctionTool.
ctx
ReadonlyContext
Optional readonly context
Returns: Promise<BaseTool[]>
const tools = await agent.canonicalTools(ctx);
Location: packages/adk/src/agents/llm-agent.ts:494

canonicalBeforeModelCallbacks (getter)

Gets the before model callbacks as an array.
Returns: SingleBeforeModelCallback[]
Location: packages/adk/src/agents/llm-agent.ts:514

canonicalAfterModelCallbacks (getter)

Gets the after model callbacks as an array.
Returns: SingleAfterModelCallback[]
Location: packages/adk/src/agents/llm-agent.ts:527

canonicalBeforeToolCallbacks (getter)

Gets the before tool callbacks as an array.
Returns: SingleBeforeToolCallback[]
Location: packages/adk/src/agents/llm-agent.ts:540

canonicalAfterToolCallbacks (getter)

Gets the after tool callbacks as an array.
Returns: SingleAfterToolCallback[]
Location: packages/adk/src/agents/llm-agent.ts:553
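All four getters share the same normalization: a callback option may be a single function or an array of functions, and the getter always yields an array. A sketch of that logic:

```typescript
// Sketch of the single-or-array normalization the canonical*Callbacks
// getters perform: undefined becomes [], a lone callback becomes a
// one-element array, and an array is returned as-is.
type Callback = (...args: any[]) => any;

function toCallbackArray(cb?: Callback | Callback[]): Callback[] {
  if (!cb) return [];
  return Array.isArray(cb) ? cb : [cb];
}

const single = () => null;
toCallbackArray(undefined).length;        // 0
toCallbackArray(single).length;           // 1
toCallbackArray([single, single]).length; // 2
```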

Callback Types

BeforeModelCallback

Invoked before calling the LLM.
type SingleBeforeModelCallback = (args: {
  callbackContext: CallbackContext;
  llmRequest: LlmRequest;
}) => LlmResponse | null | undefined | Promise<LlmResponse | null | undefined>;

type BeforeModelCallback = SingleBeforeModelCallback | SingleBeforeModelCallback[];
Returns:
  • LlmResponse - Override the LLM call with this response
  • null or undefined - Continue with LLM call
const agent = new LlmAgent({
  name: "cached_agent",
  model: "gpt-4",
  beforeModelCallback: async ({ llmRequest, callbackContext }) => {
    // Check a hypothetical cache (implementation not shown) before calling the LLM
    const cached = await cache.get(llmRequest);
    if (cached) {
      return cached; // Skip LLM call
    }
    return null; // Proceed with LLM call
  }
});

AfterModelCallback

Invoked after receiving LLM response.
type SingleAfterModelCallback = (args: {
  callbackContext: CallbackContext;
  llmResponse: LlmResponse;
}) => LlmResponse | null | undefined | Promise<LlmResponse | null | undefined>;

type AfterModelCallback = SingleAfterModelCallback | SingleAfterModelCallback[];
Returns:
  • LlmResponse - Replace the LLM response with this
  • null or undefined - Use original response
const agent = new LlmAgent({
  name: "filtered_agent",
  model: "gpt-4",
  afterModelCallback: async ({ llmResponse }) => {
    // Filter or modify the response (filterContent implementation not shown)
    const filtered = filterContent(llmResponse);
    return filtered;
  }
});

BeforeToolCallback

Invoked before executing a tool.
type SingleBeforeToolCallback = (
  tool: BaseTool,
  args: Record<string, any>,
  toolContext: ToolContext
) => Record<string, any> | null | undefined | Promise<Record<string, any> | null | undefined>;

type BeforeToolCallback = SingleBeforeToolCallback | SingleBeforeToolCallback[];
Returns:
  • Record<string, any> - Modified tool arguments
  • null or undefined - Use original arguments
const agent = new LlmAgent({
  name: "validated_agent",
  model: "gpt-4",
  tools: [apiTool],
  beforeToolCallback: async (tool, args, ctx) => {
    // Validate or modify tool arguments
    console.log(`Calling ${tool.name} with:`, args);
    
    if (tool.name === "api_call") {
      // Add authentication
      return { ...args, apiKey: process.env.API_KEY };
    }
    return null;
  }
});

AfterToolCallback

Invoked after executing a tool.
type SingleAfterToolCallback = (
  tool: BaseTool,
  args: Record<string, any>,
  toolContext: ToolContext,
  toolResponse: Record<string, any>
) => Record<string, any> | null | undefined | Promise<Record<string, any> | null | undefined>;

type AfterToolCallback = SingleAfterToolCallback | SingleAfterToolCallback[];
Returns:
  • Record<string, any> - Modified tool response
  • null or undefined - Use original response
const agent = new LlmAgent({
  name: "logging_agent",
  model: "gpt-4",
  tools: [searchTool],
  afterToolCallback: async (tool, args, ctx, response) => {
    // Log or modify tool response
    console.log(`${tool.name} returned:`, response);
    
    // Transform response if needed
    if (tool.name === "search") {
      return { ...response, timestamp: Date.now() };
    }
    return null;
  }
});

Dynamic Instructions

Instructions can be dynamic functions that generate content based on context:
import { LlmAgent, type ReadonlyContext } from "@iqai/adk";

const agent = new LlmAgent({
  name: "contextual_agent",
  model: "gpt-4",
  instruction: async (ctx: ReadonlyContext) => {
    const userPreferences = ctx.session.state.preferences;
    const timeOfDay = new Date().getHours();
    
    return `You are a helpful assistant.
      User preferences: ${JSON.stringify(userPreferences)}
      Current time: ${timeOfDay < 12 ? 'morning' : 'afternoon'}
      Adjust your responses accordingly.`;
  }
});

Tool Configuration

Tools can be BaseTool instances or plain functions:
import { createTool } from "@iqai/adk";

const weatherTool = createTool({
  name: "get_weather",
  description: "Get current weather for a location",
  fn: async (location: string) => {
    const response = await fetch(`/api/weather?loc=${encodeURIComponent(location)}`);
    return response.json();
  }
});

const agent = new LlmAgent({
  name: "weather_agent",
  model: "gpt-4",
  tools: [weatherTool]
});

Output Schema Validation

Use Zod schemas for structured, type-safe outputs:
import { LlmAgent } from "@iqai/adk";
import { z } from "zod";

const outputSchema = z.object({
  capital: z.string().describe("The capital city"),
  country: z.string().describe("The country name"),
  population: z.number().optional().describe("Population"),
  funFact: z.string().describe("An interesting fact")
});

const agent = new LlmAgent({
  name: "geo_agent",
  model: "gemini-2.5-flash",
  outputSchema
});

// Output will be validated and parsed
const result = await runner.ask("Tell me about Paris");
// result type: { capital: string; country: string; population?: number; funFact: string }
Location (validation logic): packages/adk/src/agents/llm-agent.ts:619
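Conceptually, the validation step parses the model's final text and checks it against the schema. The real implementation uses the configured Zod schema; this dependency-free sketch substitutes manual field checks to show the shape of that post-processing:

```typescript
// Dependency-free sketch of output post-processing: parse the model's
// final text as JSON, then verify the required string fields. The real
// agent runs the configured Zod schema instead of these manual checks.
interface GeoAnswer {
  capital: string;
  country: string;
  population?: number;
  funFact: string;
}

function parseGeoOutput(text: string): GeoAnswer {
  const data = JSON.parse(text);
  for (const key of ["capital", "country", "funFact"]) {
    if (typeof data[key] !== "string") {
      throw new Error(`Invalid output: missing string field "${key}"`);
    }
  }
  return data as GeoAnswer;
}

const ok = parseGeoOutput(
  '{"capital":"Paris","country":"France","funFact":"Hosted the 1900 Olympics."}'
);
// ok.capital === "Paris"
```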

Multi-Agent Workflows

import { LlmAgent } from "@iqai/adk";

const researcher = new LlmAgent({
  name: "researcher",
  description: "Researches topics and gathers information",
  model: "gpt-4",
  tools: [searchTool, scrapeTool]
});

const analyst = new LlmAgent({
  name: "analyst",
  description: "Analyzes data and draws insights",
  model: "gpt-4"
});

const writer = new LlmAgent({
  name: "writer",
  description: "Writes polished reports",
  model: "gpt-4"
});

const coordinator = new LlmAgent({
  name: "coordinator",
  description: "Coordinates the research workflow",
  model: "gpt-4",
  subAgents: [researcher, analyst, writer],
  instruction: "Coordinate the research team to produce comprehensive reports."
});

Complete Example

import { 
  LlmAgent, 
  createTool, 
  MemoryService, 
  InMemoryStorageProvider 
} from "@iqai/adk";
import { z } from "zod";

// Define tools
const searchTool = createTool({
  name: "search",
  description: "Search for information online",
  fn: async (query: string) => {
    // Search implementation
    return { results: [] };
  }
});

// Define output schema
const outputSchema = z.object({
  answer: z.string(),
  sources: z.array(z.string()),
  confidence: z.number().min(0).max(1)
});

// Create memory service
const memoryService = new MemoryService({
  storage: new InMemoryStorageProvider()
});

// Create agent
const agent = new LlmAgent({
  name: "research_assistant",
  description: "A research assistant that searches and analyzes information",
  model: "gemini-2.5-flash",
  instruction: `You are a thorough research assistant.
    - Search for relevant information
    - Cite your sources
    - Provide confidence scores`,
  tools: [searchTool],
  outputSchema,
  memoryService,
  outputKey: "research_result",
  beforeModelCallback: async ({ llmRequest }) => {
    console.log("Calling LLM with", llmRequest.contents.length, "messages");
    return null;
  },
  afterToolCallback: async (tool, args, ctx, response) => {
    console.log(`${tool.name} completed:`, response);
    return null;
  }
});

// Use with AgentBuilder for easier execution
import { AgentBuilder } from "@iqai/adk";

const { runner, session } = await AgentBuilder
  .withAgent(agent)
  .build();

const result = await runner.ask("What are the latest AI trends?");
console.log(result);
// Type-safe result: { answer: string; sources: string[]; confidence: number }

Type Definitions

type InstructionProvider = (
  ctx: ReadonlyContext
) => string | Promise<string>;

type ToolUnion = BaseTool | ((...args: any[]) => any);

interface LlmAgentConfig<T extends BaseLlm = BaseLlm> {
  name: string;
  description: string;
  model?: string | T | LanguageModel;
  instruction?: string | InstructionProvider;
  globalInstruction?: string | InstructionProvider;
  tools?: ToolUnion[];
  codeExecutor?: BaseCodeExecutor;
  planner?: BasePlanner;
  subAgents?: BaseAgent[];
  disallowTransferToParent?: boolean;
  disallowTransferToPeers?: boolean;
  includeContents?: "default" | "none";
  outputKey?: string;
  inputSchema?: ZodSchema;
  outputSchema?: ZodSchema;
  memoryService?: MemoryService;
  sessionService?: BaseSessionService;
  artifactService?: BaseArtifactService;
  plugins?: BasePlugin[];
  beforeAgentCallback?: BeforeAgentCallback;
  afterAgentCallback?: AfterAgentCallback;
  beforeModelCallback?: BeforeModelCallback;
  afterModelCallback?: AfterModelCallback;
  beforeToolCallback?: BeforeToolCallback;
  afterToolCallback?: AfterToolCallback;
  // ... other fields
}
