Overview
The LlmAgent class is the primary agent type for building LLM-powered conversational agents. It extends BaseAgent and provides comprehensive support for LLM models, tools, memory, code execution, schemas, and lifecycle callbacks.
Location: @iqai/adk package, exported from packages/adk/src/agents/llm-agent.ts:269
Type Alias: Agent is an alias for LlmAgent
Constructor
config: LlmAgentConfig<T> (required)
Configuration object for the LLM agent.
import { LlmAgent } from "@iqai/adk";

const agent = new LlmAgent({
  name: "assistant",
  description: "A helpful AI assistant",
  model: "gemini-2.5-flash",
  instruction: "You are a helpful assistant that provides accurate information."
});
Location: packages/adk/src/agents/llm-agent.ts:393
Configuration Properties
Basic Configuration
config.name
string
Name of the agent. Must be a valid identifier.
config.description
string
Description of the agent’s capabilities. Used by parent agents for delegation.
config.model
string | BaseLlm | LanguageModel
The LLM model to use. Can be a model name (e.g., “gpt-4”, “gemini-2.5-flash”), a BaseLlm instance, or a LanguageModel from Vercel AI SDK. If not set, inherits from ancestor agents.
config.instruction
string | InstructionProvider
System instruction guiding the agent’s behavior. Can be a static string or a function that returns the instruction based on context.
config.globalInstruction
string | InstructionProvider
Global instruction for the entire agent tree. Only takes effect on the root agent.
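For example, a globalInstruction set on the root agent applies to every agent in the tree, while instruction stays per-agent (a minimal sketch; the agent names here are illustrative):

```typescript
import { LlmAgent } from "@iqai/adk";

// Illustrative sub-agent; a globalInstruction set here would have no effect,
// since only the root agent's globalInstruction is applied.
const helper = new LlmAgent({
  name: "helper",
  description: "A helper agent",
  model: "gemini-2.5-flash"
});

const root = new LlmAgent({
  name: "root",
  description: "Root agent",
  model: "gemini-2.5-flash",
  // Applies to root AND helper, because root is the tree's root agent
  globalInstruction: "Always answer in English and cite sources.",
  // Applies only to this agent
  instruction: "Coordinate sub-agents to answer the user.",
  subAgents: [helper]
});
```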
config.tools
ToolUnion[]
Array of tools available to the agent. Can include BaseTool instances or plain functions (automatically converted to FunctionTool).
config.codeExecutor
BaseCodeExecutor
Code executor for running code snippets.
config.planner
BasePlanner
Planner for step-by-step task execution.
Multi-Agent Configuration
config.subAgents
BaseAgent[]
Sub-agents that this agent can delegate to.
config.disallowTransferToParent
boolean
Prevents LLM-controlled transfers to the parent agent.
config.disallowTransferToPeers
boolean
Prevents LLM-controlled transfers to peer agents.
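A sub-agent that should always answer itself can disable both transfer directions (a minimal sketch; the agent name is illustrative):

```typescript
import { LlmAgent } from "@iqai/adk";

// The LLM for this agent cannot emit a transfer back to its parent
// or sideways to sibling agents; it must respond directly.
const calculator = new LlmAgent({
  name: "calculator",
  description: "Performs arithmetic",
  model: "gemini-2.5-flash",
  disallowTransferToParent: true,
  disallowTransferToPeers: true
});
```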
Schema Configuration
config.inputSchema
ZodSchema
Zod schema for input validation when the agent is used as a tool.
config.outputSchema
ZodSchema
Zod schema for output validation. When set, the agent’s response will be validated against this schema.
When outputSchema is set with tools or transfers enabled, the schema is applied during response post-processing to preserve tool-calling and transfer capabilities.
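When an agent is exposed as a tool to a parent agent, an inputSchema describes the arguments the parent must supply (a minimal sketch; the schema fields are illustrative):

```typescript
import { LlmAgent } from "@iqai/adk";
import { z } from "zod";

const translator = new LlmAgent({
  name: "translator",
  description: "Translates text between languages",
  model: "gemini-2.5-flash",
  // Validates the arguments passed when a parent agent
  // invokes this agent as a tool
  inputSchema: z.object({
    text: z.string().describe("Text to translate"),
    targetLanguage: z.string().describe("Target language code, e.g. 'fr'")
  })
});
```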
State and Services
config.outputKey
string
Key in session state to store the agent’s output.
config.includeContents
'default' | 'none'
default: 'default'
Controls whether to include conversation history in model requests.
config.memoryService
MemoryService
Memory service for long-term storage and retrieval.
config.sessionService
BaseSessionService
Session service for managing conversations.
config.artifactService
BaseArtifactService
Artifact service for file storage and management.
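The state options compose as follows (a minimal sketch, assuming an in-memory setup like the one in the Complete Example; the agent name and state key are illustrative):

```typescript
import { LlmAgent, MemoryService, InMemoryStorageProvider } from "@iqai/adk";

const agent = new LlmAgent({
  name: "summarizer",
  description: "Summarizes conversations",
  model: "gemini-2.5-flash",
  // 'none': no prior conversation history is included in model requests
  includeContents: "none",
  // Each final response is stored under session state key "last_summary"
  outputKey: "last_summary",
  // Long-term memory backed by an in-memory storage provider
  memoryService: new MemoryService({
    storage: new InMemoryStorageProvider()
  })
});
```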
Advanced Configuration
config.generateContentConfig
Additional Google Gemini content-generation configuration. Note: some fields, such as tools, must be configured via the tools parameter instead.
Plugins for extending agent behavior with lifecycle hooks.
Callbacks
config.beforeAgentCallback
Callback(s) invoked before agent execution.
config.afterAgentCallback
Callback(s) invoked after agent execution.
config.beforeModelCallback
Callback(s) invoked before calling the LLM.
config.afterModelCallback
Callback(s) invoked after receiving LLM response.
config.beforeToolCallback
Callback(s) invoked before executing a tool.
config.afterToolCallback
Callback(s) invoked after executing a tool.
Properties
model
string | BaseLlm | LanguageModel
The LLM model instance or identifier.
instruction
string | InstructionProvider
System instruction for the agent.
globalInstruction
string | InstructionProvider
Global instruction for the entire agent tree.
tools
Tools available to the agent.
codeExecutor
BaseCodeExecutor | undefined
Code executor instance.
planner
Planner instance for step-by-step execution.
outputSchema
Output validation schema.
outputKey
Session state key for storing output.
plugins
Active plugins on this agent.
Methods
canonicalModel (getter)
Resolves the model to a BaseLlm instance. If the agent doesn’t have a model set, it searches up the parent chain.
Returns: BaseLlm
Throws: Error if no model is found in the agent hierarchy.
const llm = agent.canonicalModel;
// Returns a BaseLlm instance
Location: packages/adk/src/agents/llm-agent.ts:434
canonicalInstruction()
Resolves the instruction to a string, executing the function if it’s an InstructionProvider.
Returns: Promise<[string, boolean]>
Returns a tuple of [instruction, isDynamic] where isDynamic indicates if the instruction was computed from a function.
const [instruction, isDynamic] = await agent.canonicalInstruction(ctx);
Location: packages/adk/src/agents/llm-agent.ts:466
canonicalGlobalInstruction()
Resolves the global instruction to a string.
Returns: Promise<[string, boolean]>
const [globalInst, isDynamic] = await agent.canonicalGlobalInstruction(ctx);
Location: packages/adk/src/agents/llm-agent.ts:479
canonicalTools()
Resolves tools to an array of BaseTool instances, converting functions to FunctionTool.
Parameters: ctx (optional ReadonlyContext)
Returns: Promise<BaseTool[]>
const tools = await agent.canonicalTools(ctx);
Location: packages/adk/src/agents/llm-agent.ts:494
canonicalBeforeModelCallbacks (getter)
Gets the before model callbacks as an array.
Returns: SingleBeforeModelCallback[]
Location: packages/adk/src/agents/llm-agent.ts:514
canonicalAfterModelCallbacks (getter)
Gets the after model callbacks as an array.
Returns: SingleAfterModelCallback[]
Location: packages/adk/src/agents/llm-agent.ts:527
canonicalBeforeToolCallbacks (getter)
Gets the before tool callbacks as an array.
Returns: SingleBeforeToolCallback[]
Location: packages/adk/src/agents/llm-agent.ts:540
canonicalAfterToolCallbacks (getter)
Gets the after tool callbacks as an array.
Returns: SingleAfterToolCallback[]
Location: packages/adk/src/agents/llm-agent.ts:553
Callback Types
BeforeModelCallback
Invoked before calling the LLM.
type SingleBeforeModelCallback = (args: {
  callbackContext: CallbackContext;
  llmRequest: LlmRequest;
}) => LlmResponse | null | undefined | Promise<LlmResponse | null | undefined>;

type BeforeModelCallback = SingleBeforeModelCallback | SingleBeforeModelCallback[];
Returns:
LlmResponse - Override the LLM call with this response
null or undefined - Continue with LLM call
const agent = new LlmAgent({
  name: "cached_agent",
  model: "gpt-4",
  beforeModelCallback: async ({ llmRequest, callbackContext }) => {
    // Check cache before calling LLM
    const cached = await cache.get(llmRequest);
    if (cached) {
      return cached; // Skip LLM call
    }
    return null; // Proceed with LLM call
  }
});
AfterModelCallback
Invoked after receiving LLM response.
type SingleAfterModelCallback = (args: {
  callbackContext: CallbackContext;
  llmResponse: LlmResponse;
}) => LlmResponse | null | undefined | Promise<LlmResponse | null | undefined>;

type AfterModelCallback = SingleAfterModelCallback | SingleAfterModelCallback[];
Returns:
LlmResponse - Replace the LLM response with this
null or undefined - Use original response
const agent = new LlmAgent({
  name: "filtered_agent",
  model: "gpt-4",
  afterModelCallback: async ({ llmResponse }) => {
    // Filter or modify the response
    const filtered = filterContent(llmResponse);
    return filtered;
  }
});
BeforeToolCallback
Invoked before executing a tool.
type SingleBeforeToolCallback = (
  tool: BaseTool,
  args: Record<string, any>,
  toolContext: ToolContext
) => Record<string, any> | null | undefined | Promise<Record<string, any> | null | undefined>;

type BeforeToolCallback = SingleBeforeToolCallback | SingleBeforeToolCallback[];
Returns:
Record<string, any> - Modified tool arguments
null or undefined - Use original arguments
const agent = new LlmAgent({
  name: "validated_agent",
  model: "gpt-4",
  tools: [apiTool],
  beforeToolCallback: async (tool, args, ctx) => {
    // Validate or modify tool arguments
    console.log(`Calling ${tool.name} with:`, args);
    if (tool.name === "api_call") {
      // Add authentication
      return { ...args, apiKey: process.env.API_KEY };
    }
    return null;
  }
});
AfterToolCallback
Invoked after executing a tool.
type SingleAfterToolCallback = (
  tool: BaseTool,
  args: Record<string, any>,
  toolContext: ToolContext,
  toolResponse: Record<string, any>
) => Record<string, any> | null | undefined | Promise<Record<string, any> | null | undefined>;

type AfterToolCallback = SingleAfterToolCallback | SingleAfterToolCallback[];
Returns:
Record<string, any> - Modified tool response
null or undefined - Use original response
const agent = new LlmAgent({
  name: "logging_agent",
  model: "gpt-4",
  tools: [searchTool],
  afterToolCallback: async (tool, args, ctx, response) => {
    // Log or modify the tool response
    console.log(`${tool.name} returned:`, response);
    // Transform the response if needed
    if (tool.name === "search") {
      return { ...response, timestamp: Date.now() };
    }
    return null;
  }
});
Dynamic Instructions
Instructions can be dynamic functions that generate content based on context:
import { LlmAgent, type ReadonlyContext } from "@iqai/adk";

const agent = new LlmAgent({
  name: "contextual_agent",
  model: "gpt-4",
  instruction: async (ctx: ReadonlyContext) => {
    const userPreferences = ctx.session.state.preferences;
    const timeOfDay = new Date().getHours();
    return `You are a helpful assistant.
User preferences: ${JSON.stringify(userPreferences)}
Current time: ${timeOfDay < 12 ? "morning" : "afternoon"}
Adjust your responses accordingly.`;
  }
});
Tools
Tools can be BaseTool instances or plain functions:
BaseTool Instance
import { LlmAgent, createTool } from "@iqai/adk";

const weatherTool = createTool({
  name: "get_weather",
  description: "Get current weather for a location",
  fn: async (location: string) => {
    const response = await fetch(`/api/weather?loc=${location}`);
    return response.json();
  }
});

const agent = new LlmAgent({
  name: "weather_agent",
  model: "gpt-4",
  tools: [weatherTool]
});
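Because plain functions are converted to FunctionTool automatically, the same tool can also be passed without createTool (a minimal sketch, assuming the ADK derives tool metadata from the function itself):

```typescript
import { LlmAgent } from "@iqai/adk";

// A plain async function; the ADK wraps it in a FunctionTool
// automatically (tool metadata is assumed to come from the
// function name and signature).
async function getWeather(location: string) {
  const response = await fetch(`/api/weather?loc=${location}`);
  return response.json();
}

const agent = new LlmAgent({
  name: "weather_agent",
  model: "gpt-4",
  tools: [getWeather]
});
```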
Output Schema Validation
Use Zod schemas for structured, type-safe outputs:
import { LlmAgent } from "@iqai/adk";
import { z } from "zod";

const outputSchema = z.object({
  capital: z.string().describe("The capital city"),
  country: z.string().describe("The country name"),
  population: z.number().optional().describe("Population"),
  funFact: z.string().describe("An interesting fact")
});

const agent = new LlmAgent({
  name: "geo_agent",
  model: "gemini-2.5-flash",
  outputSchema
});

// Output will be validated and parsed
const result = await runner.ask("Tell me about Paris");
// result type: { capital: string; country: string; population?: number; funFact: string }
Location (validation logic): packages/adk/src/agents/llm-agent.ts:619
Multi-Agent Workflows
import { LlmAgent } from "@iqai/adk";

const researcher = new LlmAgent({
  name: "researcher",
  description: "Researches topics and gathers information",
  model: "gpt-4",
  tools: [searchTool, scrapeTool]
});

const analyst = new LlmAgent({
  name: "analyst",
  description: "Analyzes data and draws insights",
  model: "gpt-4"
});

const writer = new LlmAgent({
  name: "writer",
  description: "Writes polished reports",
  model: "gpt-4"
});

const coordinator = new LlmAgent({
  name: "coordinator",
  description: "Coordinates the research workflow",
  model: "gpt-4",
  subAgents: [researcher, analyst, writer],
  instruction: "Coordinate the research team to produce comprehensive reports."
});
Complete Example
import {
  LlmAgent,
  createTool,
  MemoryService,
  InMemoryStorageProvider
} from "@iqai/adk";
import { AgentBuilder } from "@iqai/adk";
import { z } from "zod";

// Define tools
const searchTool = createTool({
  name: "search",
  description: "Search for information online",
  fn: async (query: string) => {
    // Search implementation
    return { results: [] };
  }
});

// Define output schema
const outputSchema = z.object({
  answer: z.string(),
  sources: z.array(z.string()),
  confidence: z.number().min(0).max(1)
});

// Create memory service
const memoryService = new MemoryService({
  storage: new InMemoryStorageProvider()
});

// Create agent
const agent = new LlmAgent({
  name: "research_assistant",
  description: "A research assistant that searches and analyzes information",
  model: "gemini-2.5-flash",
  instruction: `You are a thorough research assistant.
- Search for relevant information
- Cite your sources
- Provide confidence scores`,
  tools: [searchTool],
  outputSchema,
  memoryService,
  outputKey: "research_result",
  beforeModelCallback: async ({ llmRequest }) => {
    console.log("Calling LLM with", llmRequest.contents.length, "messages");
    return null;
  },
  afterToolCallback: async (tool, args, ctx, response) => {
    console.log(`${tool.name} completed:`, response);
    return null;
  }
});

// Use with AgentBuilder for easier execution
const { runner, session } = await AgentBuilder
  .withAgent(agent)
  .build();

const result = await runner.ask("What are the latest AI trends?");
console.log(result);
// Type-safe result: { answer: string; sources: string[]; confidence: number }
Type Definitions
type InstructionProvider = (
  ctx: ReadonlyContext
) => string | Promise<string>;

type ToolUnion = BaseTool | ((...args: any[]) => any);

interface LlmAgentConfig<T extends BaseLlm = BaseLlm> {
  name: string;
  description: string;
  model?: string | T | LanguageModel;
  instruction?: string | InstructionProvider;
  globalInstruction?: string | InstructionProvider;
  tools?: ToolUnion[];
  codeExecutor?: BaseCodeExecutor;
  planner?: BasePlanner;
  subAgents?: BaseAgent[];
  disallowTransferToParent?: boolean;
  disallowTransferToPeers?: boolean;
  includeContents?: "default" | "none";
  outputKey?: string;
  inputSchema?: ZodSchema;
  outputSchema?: ZodSchema;
  memoryService?: MemoryService;
  sessionService?: BaseSessionService;
  artifactService?: BaseArtifactService;
  plugins?: BasePlugin[];
  beforeAgentCallback?: BeforeAgentCallback;
  afterAgentCallback?: AfterAgentCallback;
  beforeModelCallback?: BeforeModelCallback;
  afterModelCallback?: AfterModelCallback;
  beforeToolCallback?: BeforeToolCallback;
  afterToolCallback?: AfterToolCallback;
  // ... other fields
}
See Also