## Overview

`LlmAgent` is the core agent type that interacts with a single language model. It supports tools, sub-agents, callbacks, code execution, planning, and structured output.

## Creating an LlmAgent

### Using AgentBuilder (Recommended)

```typescript
import { AgentBuilder } from '@iqai/adk';

const { runner } = await AgentBuilder
  .create('assistant')
  .withModel('gemini-2.5-flash')
  .withDescription('A helpful assistant')
  .withInstruction('You are a helpful AI assistant')
  .build(); // Creates an LlmAgent by default
```
### Direct Construction

`apps/examples/src/03-multi-agent-systems/agents/customer-analyzer/agent.ts`

```typescript
import { LlmAgent } from '@iqai/adk';
import dedent from 'dedent';

export function getCustomerAnalyzerAgent() {
  return new LlmAgent({
    name: 'customer_analyzer',
    description: 'Analyzes customer restaurant orders',
    instruction: dedent`
      Extract order items, dietary restrictions, special preferences, and budget constraints.
      Return the extracted information in a clear, structured format.
    `,
    outputKey: 'customer_preferences',
    model: process.env.LLM_MODEL || 'gemini-3-flash-preview',
  });
}
```
## Configuration

### LlmAgentConfig

```typescript
interface LlmAgentConfig {
  // Required
  name: string;
  description: string;

  // Model
  model?: string | BaseLlm | LanguageModel;

  // Instructions
  instruction?: string | InstructionProvider;
  globalInstruction?: string | InstructionProvider;

  // Tools and execution
  tools?: ToolUnion[];
  codeExecutor?: BaseCodeExecutor;
  planner?: BasePlanner;

  // Sub-agents and hierarchy
  subAgents?: BaseAgent[];
  disallowTransferToParent?: boolean;
  disallowTransferToPeers?: boolean;

  // State management
  outputKey?: string;
  inputSchema?: z.ZodSchema;
  outputSchema?: z.ZodSchema;

  // Services
  memoryService?: MemoryService;
  sessionService?: BaseSessionService;
  artifactService?: BaseArtifactService;

  // Callbacks
  beforeAgentCallback?: BeforeAgentCallback;
  afterAgentCallback?: AfterAgentCallback;
  beforeModelCallback?: BeforeModelCallback;
  afterModelCallback?: AfterModelCallback;
  beforeToolCallback?: BeforeToolCallback;
  afterToolCallback?: AfterToolCallback;

  // Plugins
  plugins?: BasePlugin[];

  // Advanced
  includeContents?: 'default' | 'none';
  generateContentConfig?: GenerateContentConfig;
}
```
## Model Configuration

### Model Inheritance

If no model is specified, the agent inherits its model from its parent:

```typescript
const parent = new LlmAgent({
  name: 'parent',
  description: 'Parent agent',
  model: 'gemini-2.5-flash'
});

const child = new LlmAgent({
  name: 'child',
  description: 'Child agent',
  // No model specified - inherits from parent
});

parent.subAgents = [child];
```
### Multiple Model Types

```typescript
import { AnthropicLlm } from '@iqai/adk';
import { openai } from '@ai-sdk/openai';

// String identifier
const agent1 = new LlmAgent({
  name: 'agent1',
  description: 'Agent 1',
  model: 'gpt-4'
});

// BaseLlm instance
const agent2 = new LlmAgent({
  name: 'agent2',
  description: 'Agent 2',
  model: new AnthropicLlm()
});

// AI SDK LanguageModel
const agent3 = new LlmAgent({
  name: 'agent3',
  description: 'Agent 3',
  model: openai('gpt-4-turbo')
});
```
## Instructions

### Static Instructions

```typescript
const agent = new LlmAgent({
  name: 'writer',
  description: 'Content writer',
  instruction: `
    You are a professional content writer.
    Write clear, engaging, and well-structured content.
    Follow the user's style preferences and tone.
  `,
  model: 'gemini-2.5-flash'
});
```

### Dynamic Instructions

Instructions can be functions that access context:

```typescript
import { ReadonlyContext } from '@iqai/adk';

const agent = new LlmAgent({
  name: 'personalized',
  description: 'Personalized assistant',
  instruction: (ctx: ReadonlyContext) => {
    const userName = ctx.state.get('userName', 'User');
    const preferences = ctx.state.get('preferences', {});
    return `
      You are assisting ${userName}.
      User preferences: ${JSON.stringify(preferences)}
      Adapt your responses accordingly.
    `;
  },
  model: 'gemini-2.5-flash'
});
```
### Global Instructions

Global instructions apply to the entire agent tree:

```typescript
const root = new LlmAgent({
  name: 'root',
  description: 'Root agent',
  globalInstruction: `
    Always be respectful and professional.
    If unsure, ask for clarification.
    Never share sensitive information.
  `,
  model: 'gemini-2.5-flash',
  subAgents: [child1, child2] // Global instruction applies to all
});
```

Only the `globalInstruction` on the root agent takes effect; global instructions set on sub-agents are ignored.
## Tools

Create tools with `createTool` and pass them to the agent:

```typescript
import { createTool } from '@iqai/adk';
import { z } from 'zod';

const weatherTool = createTool({
  name: 'get_weather',
  description: 'Get current weather for a location',
  schema: z.object({
    location: z.string().describe('City name')
  }),
  fn: async ({ location }) => {
    // Fetch weather data
    return { temp: 72, condition: 'sunny' };
  }
});

const agent = new LlmAgent({
  name: 'weather-agent',
  description: 'Weather assistant',
  tools: [weatherTool],
  model: 'gemini-2.5-flash'
});
```
### Function Tools

Regular functions are automatically converted to tools:

```typescript
function getCurrentTime(): string {
  return new Date().toISOString();
}

function add(a: number, b: number): number {
  return a + b;
}

const agent = new LlmAgent({
  name: 'calculator',
  description: 'Calculator agent',
  tools: [getCurrentTime, add],
  model: 'gemini-2.5-flash'
});
```
### Tool Context

Tools receive a `ToolContext` with access to state and services:

`apps/examples/src/02-tools-and-state/agents/tools.ts`

```typescript
import { createTool } from '@iqai/adk';
import * as z from 'zod';

export const addItemTool = createTool({
  name: 'add_item',
  description: 'Add an item to the shopping cart',
  schema: z.object({
    item: z.string().describe('Item name'),
    quantity: z.number().default(1).describe('Quantity to add'),
    price: z.number().describe('Price per item'),
  }),
  fn: ({ item, quantity, price }, context) => {
    const cart: { item: string; quantity: number; price: number }[] =
      context.state.get('cart', []);
    const existingItemIndex = cart.findIndex(
      (cartItem) => cartItem.item === item,
    );
    let updatedCart: { item: string; quantity: number; price: number }[];
    if (existingItemIndex > -1) {
      updatedCart = cart.map((cartItem, index) => {
        if (index === existingItemIndex) {
          return { ...cartItem, quantity: cartItem.quantity + quantity };
        }
        return cartItem;
      });
    } else {
      updatedCart = [...cart, { item, quantity, price }];
    }
    context.state.set('cart', updatedCart);
    context.state.set('cartCount', updatedCart.length);
    const total = updatedCart.reduce(
      (sum, cartItem) => sum + cartItem.quantity * cartItem.price,
      0,
    );
    return {
      success: true,
      item,
      quantity,
      cartTotal: total,
      message: `Added ${quantity} x ${item} to cart`,
    };
  },
});
```
## Sub-Agents and Delegation

### Adding Sub-Agents

`apps/examples/src/03-multi-agent-systems/agents/agent.ts`

```typescript
import { AgentBuilder } from '@iqai/adk';
import { getCustomerAnalyzerAgent } from './customer-analyzer/agent';
import { getMenuValidatorAgent } from './menu-validator/agent';
import { getOrderFinalizerAgent } from './order-finalizer/agent';

export function getRootAgent() {
  const customerAnalyzer = getCustomerAnalyzerAgent();
  const menuValidator = getMenuValidatorAgent();
  const orderFinalizer = getOrderFinalizerAgent();

  const initialState = {
    customer_preferences: '',
    menu_validation: '',
  };

  return AgentBuilder.create('restaurant_order_system')
    .withModel(process.env.LLM_MODEL || 'gemini-3-flash-preview')
    .withSubAgents([customerAnalyzer, menuValidator, orderFinalizer])
    .withQuickSession({ state: initialState })
    .build();
}
```
### Transfer Control

By default, the LLM can transfer control between agents:

```typescript
const specialist = new LlmAgent({
  name: 'specialist',
  description: 'Handles complex technical questions',
  model: 'gpt-4'
});

const general = new LlmAgent({
  name: 'general',
  description: 'General assistant',
  model: 'gpt-3.5-turbo',
  subAgents: [specialist],
  // LLM can transfer to specialist automatically
});
```
### Disabling Transfers

```typescript
const agent = new LlmAgent({
  name: 'isolated',
  description: 'Isolated agent',
  model: 'gemini-2.5-flash',
  disallowTransferToParent: true, // Cannot transfer to parent
  disallowTransferToPeers: true,  // Cannot transfer to siblings
});
```
## Schemas

### Output Schema

Enforce structured output:

```typescript
import { z } from 'zod';

const agent = new LlmAgent({
  name: 'analyzer',
  description: 'Data analyzer',
  model: 'gemini-2.5-flash',
  outputSchema: z.object({
    sentiment: z.enum(['positive', 'negative', 'neutral']),
    score: z.number().min(0).max(1),
    topics: z.array(z.string()),
    summary: z.string()
  })
});
```

When `outputSchema` is set, the agent focuses on producing structured output. Tools and transfers may still be used, but the final response is validated against the schema.
### Input Schema

Use `inputSchema` to validate input when the agent is used as a tool:

```typescript
const agent = new LlmAgent({
  name: 'processor',
  description: 'Processes user data',
  model: 'gemini-2.5-flash',
  inputSchema: z.object({
    userId: z.string(),
    action: z.string()
  })
});

// When used as a sub-agent, inputs are validated
```
## State Management

### Output Key

Automatically save agent output to session state:

```typescript
const agent = new LlmAgent({
  name: 'extractor',
  description: 'Extracts information',
  model: 'gemini-2.5-flash',
  outputKey: 'extracted_data'
});

// After the agent runs, session.state.extracted_data contains the output
```
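Conceptually, `outputKey` is just a write into session state keyed by that name once the agent produces its final response. A minimal sketch of the idea (assumed behavior for illustration, not the library's source):

```typescript
// Sketch: persist an agent's final response text under its outputKey.
type SessionState = Record<string, unknown>;

function persistOutput(
  state: SessionState,
  outputKey: string | undefined,
  finalText: string,
): SessionState {
  if (!outputKey) return state; // no key configured: state is untouched
  return { ...state, [outputKey]: finalText };
}
```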
## Callbacks

### Agent Lifecycle Callbacks

```typescript
const agent = new LlmAgent({
  name: 'tracked',
  description: 'Tracked agent',
  model: 'gemini-2.5-flash',
  beforeAgentCallback: async (context) => {
    console.log('Agent starting:', context.agent.name);
    console.log('User message:', context.userContent);
    // Return Content to skip agent execution
    // Return undefined to continue
  },
  afterAgentCallback: async (context) => {
    console.log('Agent finished');
    // Return Content to replace the response
    // Return undefined to keep the original
  }
});
```
### Model Callbacks

```typescript
const agent = new LlmAgent({
  name: 'logged',
  description: 'Logged agent',
  model: 'gemini-2.5-flash',
  beforeModelCallback: async ({ llmRequest, callbackContext }) => {
    console.log('Calling model with:', llmRequest);
    // Return an LlmResponse to skip the model call
    // Return null/undefined to proceed
  },
  afterModelCallback: async ({ llmResponse, callbackContext }) => {
    console.log('Model responded:', llmResponse);
    // Return an LlmResponse to replace the response
    // Return null/undefined to keep the original
  }
});
```
### Tool Callbacks

```typescript
const agent = new LlmAgent({
  name: 'monitored',
  description: 'Monitored tools',
  model: 'gemini-2.5-flash',
  tools: [searchTool],
  beforeToolCallback: async (tool, args, context) => {
    console.log(`Calling tool ${tool.name}:`, args);
    // Return modified args or null
  },
  afterToolCallback: async (tool, args, context, result) => {
    console.log(`Tool ${tool.name} returned:`, result);
    // Return a modified result or null
  }
});
```
### Callback Arrays

Callbacks can be arrays; they are executed in order until one returns a value:

```typescript
const agent = new LlmAgent({
  name: 'multi-callback',
  description: 'Multiple callbacks',
  model: 'gemini-2.5-flash',
  beforeModelCallback: [
    async ({ llmRequest }) => {
      // First callback
      console.log('First callback');
      return null; // Continue to the next
    },
    async ({ llmRequest }) => {
      // Second callback
      console.log('Second callback');
      return null; // Continue to the next
    },
    async ({ llmRequest }) => {
      // Third callback
      console.log('Third callback');
      // Return a response to stop the chain
    }
  ]
});
```
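The chain semantics above can be sketched as a small resolver: run each callback in order and stop at the first one that returns a concrete value. This is a minimal illustration of the pattern, not the library's implementation:

```typescript
// Illustrative sketch of "first non-null result wins" callback resolution.
type Callback<TArgs, TResult> = (args: TArgs) => Promise<TResult | null | undefined>;

async function resolveCallbackChain<TArgs, TResult>(
  callbacks: Callback<TArgs, TResult>[],
  args: TArgs,
): Promise<TResult | undefined> {
  for (const cb of callbacks) {
    const result = await cb(args);
    if (result !== null && result !== undefined) {
      return result; // first concrete value stops the chain
    }
  }
  return undefined; // no callback intervened; execution proceeds normally
}
```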
## Plugins

### Adding Plugins

```typescript
import { BasePlugin } from '@iqai/adk';

class LoggingPlugin extends BasePlugin {
  name = 'logging';

  async onBeforeModelCall(request) {
    console.log('Model call:', request);
  }

  async onAfterModelCall(response) {
    console.log('Model response:', response);
  }
}

const agent = new LlmAgent({
  name: 'plugged',
  description: 'Agent with plugins',
  model: 'gemini-2.5-flash',
  plugins: [new LoggingPlugin()]
});
```
## Advanced Features

### Code Execution

```typescript
import { LocalCodeExecutor } from '@iqai/adk';

const agent = new LlmAgent({
  name: 'code-agent',
  description: 'Can execute code',
  model: 'gemini-2.5-flash',
  codeExecutor: new LocalCodeExecutor()
});
```
### Planning

Provide a planner (any `BasePlanner` implementation):

```typescript
import type { BasePlanner } from '@iqai/adk';

declare const myPlanner: BasePlanner; // a concrete BasePlanner implementation

const agent = new LlmAgent({
  name: 'planner',
  description: 'Plans before executing',
  model: 'gemini-2.5-flash',
  planner: myPlanner
});
```
### Content Inclusion

```typescript
const agent = new LlmAgent({
  name: 'minimal',
  description: 'Minimal context',
  model: 'gemini-2.5-flash',
  includeContents: 'none' // Don't include previous contents
});
```
### Generate Content Config

```typescript
const agent = new LlmAgent({
  name: 'configured',
  description: 'Custom configuration',
  model: 'gemini-2.5-flash',
  generateContentConfig: {
    temperature: 0.7,
    topK: 40,
    topP: 0.95,
    maxOutputTokens: 1024
  }
});
```
## Type Safety

### Canonical Model

Access the resolved model:

```typescript
const agent = new LlmAgent({
  name: 'agent',
  description: 'Agent',
  model: 'gemini-2.5-flash'
});

const model = agent.canonicalModel; // Returns a BaseLlm instance
```

### Canonical Tools

Resolve tools with context:

```typescript
const tools = await agent.canonicalTools(context);
// Returns BaseTool[] with functions converted to FunctionTool
```

### Canonical Instructions

Resolve dynamic instructions:

```typescript
const [instruction, isDynamic] = await agent.canonicalInstruction(context);
console.log(instruction); // Resolved instruction string
console.log(isDynamic);   // true if the instruction was a function
```
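The `[instruction, isDynamic]` pair follows from how the two instruction forms resolve: a plain string is used as-is, while a provider function is invoked with the current context. A self-contained sketch of that resolution (illustrative, mirroring the documented signature rather than the library's code):

```typescript
// Sketch: resolve a static string or a provider function into final text.
type InstructionProvider = (ctx: unknown) => string | Promise<string>;

async function resolveInstruction(
  instruction: string | InstructionProvider,
  ctx: unknown,
): Promise<[string, boolean]> {
  if (typeof instruction === 'function') {
    return [await instruction(ctx), true]; // dynamic: invoked with context
  }
  return [instruction, false]; // static: used as-is
}
```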
## Execution

### Via Runner

```typescript
const { runner } = await AgentBuilder
  .withAgent(agent)
  .build();

const response = await runner.ask('Hello');
```

### Via Context

Direct execution (advanced):

```typescript
import { InvocationContext } from '@iqai/adk';

const context = new InvocationContext({
  agent,
  session,
  invocationId: 'inv-123',
  userContent: { parts: [{ text: 'Hello' }] }
});

for await (const event of agent.runAsync(context)) {
  console.log('Event:', event);
}
```
## Best Practices

### Use AgentBuilder for Construction

AgentBuilder provides better defaults and session management:

```typescript
// Good
const { runner } = await AgentBuilder
  .create('agent')
  .withModel('gemini-2.5-flash')
  .build();

// Less convenient
const agent = new LlmAgent({
  name: 'agent',
  description: '',
  model: 'gemini-2.5-flash'
});
```
### Provide Clear Descriptions

Descriptions help with agent transfers:

```typescript
// Good
description: 'Analyzes customer sentiment from reviews'

// Less helpful
description: 'Analyzer'
```
### Use Output Keys for State

Combine `outputKey` with sub-agents for data flow:

```typescript
const extractor = new LlmAgent({
  name: 'extractor',
  description: 'Extracts data',
  outputKey: 'extracted',
  model: 'gemini-2.5-flash'
});

const processor = new LlmAgent({
  name: 'processor',
  description: 'Processes data',
  instruction: 'Process this data: {extracted}',
  model: 'gemini-2.5-flash'
});
```
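The `{extracted}` placeholder is filled from session state before the instruction reaches the model. The substitution amounts to something like the following (a simplified sketch of the templating idea, not the library's exact code; unknown keys are left in place here):

```typescript
// Sketch: replace {key} placeholders in an instruction with state values.
function interpolateInstruction(
  template: string,
  state: Record<string, string>,
): string {
  return template.replace(/\{(\w+)\}/g, (match, key) => state[key] ?? match);
}
```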
### Understand Output Schema Behavior

When using `outputSchema`, note that tools can still be called:

```typescript
const agent = new LlmAgent({
  name: 'agent',
  description: 'Structured agent with tools',
  model: 'gemini-2.5-flash',
  tools: [searchTool],                           // Can still use tools
  outputSchema: z.object({ result: z.string() }) // Final response validated
});
```
## Next Steps

- **Workflow Agents**: Build multi-agent workflows
- **LoopAgent**: Iterative agent execution
- **Tools**: Add capabilities with tools
- **Callbacks**: Intercept the agent lifecycle