Overview
AgentBuilder provides a chainable, fluent interface for building different types of agents with tools, instructions, callbacks, and multi-agent workflows. It handles session management automatically using in-memory storage by default.
Basic Usage
Quick Start
The simplest way to create and use an agent:
```typescript
import { AgentBuilder } from '@iqai/adk';

// Simple one-liner
const response = await AgentBuilder
  .withModel('gemini-2.5-flash')
  .ask('What is the capital of France?');
```
Building with Configuration
```typescript
const { runner, agent, session } = await AgentBuilder
  .create('research-agent')
  .withModel('gemini-2.5-flash')
  .withDescription('A research assistant agent')
  .withInstruction('You are a helpful research assistant')
  .build();

const response = await runner.ask('Tell me about quantum computing');
```
Core Methods
Agent Creation
```typescript
// Start with create()
const builder = AgentBuilder.create('my-agent');

// Or start with a model
const builder = AgentBuilder.withModel('gpt-4');

// Or wrap an existing agent
const builder = AgentBuilder.withAgent(existingAgent);
```
Model Configuration
```typescript
// String model identifier
.withModel('gemini-2.5-flash')
.withModel('gpt-4')
.withModel('claude-3-5-sonnet-20241022')

// BaseLlm instance
import { AnthropicLlm } from '@iqai/adk';
.withModel(new AnthropicLlm())

// AI SDK LanguageModel
import { openai } from '@ai-sdk/openai';
.withModel(openai('gpt-4-turbo'))
```
Instructions and Description
```typescript
AgentBuilder.create('agent')
  .withModel('gemini-2.5-flash')
  .withDescription('Analyzes customer feedback')
  .withInstruction(`
    You are a customer feedback analyzer.
    Extract sentiment, key topics, and actionable insights.
    Be concise and professional.
  `)
  .build();
```
Tools
Add tools so the agent can call your own functions:
```typescript
import { createTool } from '@iqai/adk';
import { z } from 'zod';

const searchTool = createTool({
  name: 'search',
  description: 'Search for information',
  schema: z.object({
    query: z.string().describe('Search query')
  }),
  fn: async ({ query }) => {
    // Search implementation
    return { results: [] };
  }
});

const { runner } = await AgentBuilder
  .create('search-agent')
  .withModel('gemini-2.5-flash')
  .withTools(searchTool)
  .build();
```
Schema Support
Output Schema
Enforce structured output from your agent:
apps/examples/src/01-getting-started/agents/agent.ts
```typescript
import { AgentBuilder } from '@iqai/adk';
import z from 'zod/v4'; // Ensure it's v4!

export function getRootAgent() {
  const outputSchema = z.object({
    capital: z.string().describe('The capital city name'),
    country: z.string().describe('The country name'),
    population: z
      .number()
      .optional()
      .describe('Population of the capital city'),
    funFact: z.string().describe('An interesting fact about the city'),
  });

  return AgentBuilder
    .withModel(process.env.LLM_MODEL || 'gemini-3-flash-preview')
    .withOutputSchema(outputSchema)
    .build();
}
```
Output schemas are automatically validated and parsed. The `runner.ask()` method will return the typed object instead of a string.
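Conceptually, the validate-and-parse step looks like the sketch below, with hand-rolled checks standing in for the Zod schema from the example above (`parseCapitalInfo` is an illustrative name, not part of the library):

```typescript
// Minimal sketch: parse the model's JSON text and check it against the
// expected shape before returning it as a typed object.
type CapitalInfo = {
  capital: string;
  country: string;
  population?: number;
  funFact: string;
};

function parseCapitalInfo(raw: string): CapitalInfo {
  const data = JSON.parse(raw);
  // Required string fields must be present with the right type
  for (const field of ['capital', 'country', 'funFact']) {
    if (typeof data[field] !== 'string') {
      throw new Error(`Invalid or missing field: ${field}`);
    }
  }
  // Optional field: validate only when present
  if (data.population !== undefined && typeof data.population !== 'number') {
    throw new Error('population must be a number');
  }
  return data as CapitalInfo;
}

const parsed = parseCapitalInfo(
  '{"capital":"Paris","country":"France","funFact":"Hosts the Louvre."}'
);
console.log(parsed.capital); // "Paris"
```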
Input Schema
Validate input when using agents as tools:
```typescript
const inputSchema = z.object({
  userId: z.string(),
  action: z.enum(['create', 'update', 'delete'])
});

const { agent } = await AgentBuilder
  .create('action-agent')
  .withModel('gemini-2.5-flash')
  .withInputSchema(inputSchema)
  .build();
```
Session Management
Automatic Sessions
By default, sessions are created automatically with in-memory storage:
```typescript
const { runner, session } = await AgentBuilder
  .withModel('gemini-2.5-flash')
  .build();

console.log(session.id);     // auto-generated
console.log(session.userId); // auto-generated
```
Custom Sessions
```typescript
const { runner } = await AgentBuilder
  .create('assistant')
  .withModel('gemini-2.5-flash')
  .withQuickSession({
    userId: 'user-123',
    appName: 'my-app',
    state: { theme: 'dark', language: 'en' }
  })
  .build();
```
Persistent Sessions
```typescript
import { DatabaseSessionService } from '@iqai/adk';

const sessionService = new DatabaseSessionService({
  connectionString: process.env.DATABASE_URL
});

const { runner } = await AgentBuilder
  .create('assistant')
  .withModel('gemini-2.5-flash')
  .withSessionService(sessionService, {
    userId: 'user-123',
    appName: 'my-app'
  })
  .build();
```
Reusing Existing Sessions
```typescript
import { InMemorySessionService } from '@iqai/adk';

const sessionService = new InMemorySessionService();

// Create initial session
const session = await sessionService.createSession(
  'my-app',
  'user-123',
  { count: 0 }
);

// Use existing session
const { runner } = await AgentBuilder
  .create('counter')
  .withModel('gemini-2.5-flash')
  .withSessionService(sessionService)
  .withSession(session)
  .build();
```
You must call `withSessionService()` before `withSession()`, or use `withQuickSession()` for in-memory sessions.
State Management
Initial State
apps/examples/src/02-tools-and-state/agents/agent.ts
```typescript
import { AgentBuilder } from '@iqai/adk';
import dedent from 'dedent';
import { addItemTool, viewCartTool } from './tools';

export function getRootAgent() {
  const initialState = {
    cart: [],
    cartCount: 0,
  };

  return AgentBuilder
    .create('shopping_cart_agent')
    .withModel(process.env.LLM_MODEL || 'gemini-3-flash-preview')
    .withInstruction(
      dedent`
        You are a shopping cart assistant. Help users manage their cart.

        Current cart state:
        - Items in cart: {cartCount}
        - Cart contents: {cart}

        You can add items and view the cart. Always be helpful with pricing and quantities.
      `,
    )
    .withTools(addItemTool, viewCartTool)
    .withQuickSession({ state: initialState })
    .build();
}
```
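The `{cartCount}` and `{cart}` placeholders in the instruction above are filled in from session state before the prompt reaches the model. A minimal sketch of that interpolation, assuming simple `{key}` substitution (`renderInstruction` is illustrative, not the library's actual implementation):

```typescript
// Replace {key} placeholders in an instruction template with the
// corresponding values from the session state object.
function renderInstruction(
  template: string,
  state: Record<string, unknown>
): string {
  return template.replace(/\{(\w+)\}/g, (match, key) =>
    key in state ? JSON.stringify(state[key]) : match
  );
}

const state = { cart: [{ item: 'apple', qty: 2 }], cartCount: 1 };
const rendered = renderInstruction(
  'Items in cart: {cartCount}, contents: {cart}',
  state
);
console.log(rendered);
// Items in cart: 1, contents: [{"item":"apple","qty":2}]
```

Unknown placeholders are left untouched, so stray braces in an instruction don't break rendering.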
Output Keys
Store agent output in session state:
```typescript
import { LlmAgent } from '@iqai/adk';

const analyzer = new LlmAgent({
  name: 'customer_analyzer',
  description: 'Analyzes customer orders',
  instruction: 'Extract order items and preferences',
  outputKey: 'customer_preferences', // Saves output to state
  model: 'gemini-3-flash-preview'
});
```
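Conceptually, after each turn the agent's final text is written into session state under the configured key, so later agents or instruction templates can reference it. A rough sketch of the idea (not the library's internals):

```typescript
// After an agent turn, store its final response text in session state
// under the configured outputKey, leaving other state untouched.
type SessionState = Record<string, unknown>;

function applyOutputKey(
  state: SessionState,
  outputKey: string | undefined,
  responseText: string
): SessionState {
  if (!outputKey) return state; // no key configured: state unchanged
  return { ...state, [outputKey]: responseText };
}

const updated = applyOutputKey(
  { cartCount: 0 },
  'customer_preferences',
  'Prefers espresso.'
);
console.log(updated.customer_preferences); // "Prefers espresso."
```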
Plugins
Adding Plugins
```typescript
const { runner } = await AgentBuilder
  .create('monitored-agent')
  .withModel('gemini-2.5-flash')
  .withPlugins(
    new LoggingPlugin(),
    new MetricsPlugin(),
    new RateLimitPlugin()
  )
  .build();
```
Fallback Models
Automatically fall back to alternative models on rate limits:
```typescript
const response = await AgentBuilder
  .withModel('gpt-4')
  .withFallbackModels('gpt-3.5-turbo', 'gemini-2.0-flash')
  .ask('Hello');
```
Fallback models are automatically wrapped in a `ModelFallbackPlugin` and tried in order when the primary model returns a 429 error.
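The retry behavior is roughly: call each model in order and advance only when a rate-limit error comes back. A self-contained sketch with stubbed model calls (`callWithFallbacks` and the stubs are illustrative; the real plugin inspects the provider's 429 response):

```typescript
// Try the primary model first, then each fallback in order,
// advancing only on a rate-limit (429) error.
type ModelFn = (prompt: string) => Promise<string>;

class RateLimitError extends Error {
  status = 429;
}

async function callWithFallbacks(
  models: ModelFn[],
  prompt: string
): Promise<string> {
  let lastError: unknown;
  for (const model of models) {
    try {
      return await model(prompt);
    } catch (err) {
      lastError = err;
      // Non-rate-limit errors are not retried against fallbacks
      if (!(err instanceof RateLimitError)) throw err;
    }
  }
  throw lastError;
}

// Stubbed models: the primary is rate-limited, the fallback succeeds.
const primary: ModelFn = async () => {
  throw new RateLimitError('rate limited');
};
const fallback: ModelFn = async () => 'Hello from fallback';

const reply = await callWithFallbacks([primary, fallback], 'Hello');
console.log(reply); // "Hello from fallback"
```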
Callbacks
Agent Lifecycle Callbacks
```typescript
const { runner } = await AgentBuilder
  .create('tracked-agent')
  .withModel('gemini-2.5-flash')
  .withBeforeAgentCallback(async (context) => {
    console.log('Agent starting:', context.agent.name);
    // Return Content to skip agent execution
    // Return undefined to continue
  })
  .withAfterAgentCallback(async (context) => {
    console.log('Agent finished:', context.agent.name);
    // Return Content to replace agent response
    // Return undefined to use original response
  })
  .build();
```
Model Callbacks
```typescript
const { runner } = await AgentBuilder
  .create('logged-agent')
  .withModel('gemini-2.5-flash')
  .withBeforeModelCallback(async ({ llmRequest }) => {
    console.log('Model request:', llmRequest);
    // Return LlmResponse to skip model call
    // Return null/undefined to continue
  })
  .withAfterModelCallback(async ({ llmResponse }) => {
    console.log('Model response:', llmResponse);
    // Return LlmResponse to replace response
    // Return null/undefined to use original
  })
  .build();
```
Tool Callbacks
```typescript
const { runner } = await AgentBuilder
  .create('monitored-tools')
  .withModel('gemini-2.5-flash')
  .withTools(searchTool)
  .withBeforeToolCallback(async (tool, args, context) => {
    console.log('Tool call:', tool.name, args);
    // Return args to modify
    // Return null/undefined to use original
  })
  .withAfterToolCallback(async (tool, args, context, result) => {
    console.log('Tool result:', result);
    // Return result to modify
    // Return null/undefined to use original
  })
  .build();
```
Advanced Configuration
Code Execution
```typescript
import { LocalCodeExecutor } from '@iqai/adk';

const { runner } = await AgentBuilder
  .create('code-agent')
  .withModel('gemini-2.5-flash')
  .withCodeExecutor(new LocalCodeExecutor())
  .build();
```
Planning
```typescript
import { BasePlanner } from '@iqai/adk';

const { runner } = await AgentBuilder
  .create('planning-agent')
  .withModel('gemini-2.5-flash')
  .withPlanner(new BasePlanner())
  .build();
```
Memory Service
```typescript
import { MemoryService } from '@iqai/adk';

const memoryService = new MemoryService({
  // Memory configuration
});

const { runner } = await AgentBuilder
  .create('memory-agent')
  .withModel('gemini-2.5-flash')
  .withMemory(memoryService)
  .build();
```
Events Compaction
Automatically compact event history:
```typescript
const { runner } = await AgentBuilder
  .create('assistant')
  .withModel('gemini-2.5-flash')
  .withEventsCompaction({
    compactionInterval: 10, // Compact every 10 invocations
    overlapSize: 2,         // Include 2 prior invocations
  })
  .build();
```
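With the settings above, compaction fires every 10 invocations, and each window re-includes the last 2 invocations before it so context carries over between summaries. A small sketch of that windowing arithmetic, under the stated assumptions about how interval and overlap combine (not the library's internals):

```typescript
// Given a compaction interval and overlap, compute which invocation
// indices [start, end) fall into the window being summarized, or null
// if no compaction is due yet.
function compactionWindow(
  invocationCount: number,
  compactionInterval: number,
  overlapSize: number
): { start: number; end: number } | null {
  // Only compact on multiples of the interval
  if (invocationCount === 0 || invocationCount % compactionInterval !== 0) {
    return null;
  }
  const end = invocationCount;
  // Reach back one interval, plus the overlap into the prior window
  const start = Math.max(0, end - compactionInterval - overlapSize);
  return { start, end };
}

console.log(compactionWindow(5, 10, 2));  // null (interval not reached)
console.log(compactionWindow(10, 10, 2)); // { start: 0, end: 10 }
console.log(compactionWindow(20, 10, 2)); // { start: 8, end: 20 }
```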
Context Caching
```typescript
import { ContextCacheConfig } from '@iqai/adk';

const { runner } = await AgentBuilder
  .create('cached-agent')
  .withModel('gemini-2.5-flash')
  .withContextCacheConfig(new ContextCacheConfig({
    ttl: 3600 // Cache for 1 hour
  }))
  .build();
```
Multi-Agent Workflows
See Workflow Agents for details on:

- Sequential execution with `.asSequential()`
- Parallel execution with `.asParallel()`
- Loop execution with `.asLoop()`
- Graph-based workflows with `.asLangGraph()`
Type Safety
Typed Output
```typescript
type CapitalInfo = {
  capital: string;
  country: string;
  population?: number;
  funFact: string;
};

const outputSchema = z.object({
  capital: z.string(),
  country: z.string(),
  population: z.number().optional(),
  funFact: z.string()
});

const { runner } = await AgentBuilder
  .withModel('gemini-2.5-flash')
  .withOutputSchema(outputSchema)
  .buildWithSchema<CapitalInfo>();

// Type-safe response
const result: CapitalInfo = await runner.ask('Tell me about Paris');
console.log(result.capital); // TypeScript knows this is a string
```
EnhancedRunner API
The built agent includes an EnhancedRunner with simplified methods:
ask()
Send a message and get a response:
```typescript
// String message
const response = await runner.ask('Hello');

// Full message with parts
const response = await runner.ask({
  parts: [
    { text: 'Analyze this image' },
    { image: 'base64-encoded-image' }
  ]
});
```
runAsync()
Stream events from agent execution:
```typescript
for await (const event of runner.runAsync({
  userId: 'user-123',
  sessionId: session.id,
  newMessage: { parts: [{ text: 'Hello' }] }
})) {
  console.log('Event:', event);
}
```
Session Methods
```typescript
// Get current session
const session = runner.getSession();

// Update session
const newSession = await sessionService.createSession(
  'app',
  'user',
  { count: 5 }
);
runner.setSession(newSession);
```
Method Reference
| Method | Description | Returns |
| --- | --- | --- |
| `create(name)` | Create builder with agent name | `AgentBuilder` |
| `withModel(model)` | Set the LLM model | `this` |
| `withDescription(desc)` | Set agent description | `this` |
| `withInstruction(inst)` | Set system instruction | `this` |
| `withTools(...tools)` | Add tools to agent | `this` |
| `withInputSchema(schema)` | Set input validation schema | `this` |
| `withOutputSchema(schema)` | Set output schema | `AgentBuilderWithSchema` |
| `withOutputKey(key)` | Set state output key | `this` |
| `withPlanner(planner)` | Set planning strategy | `this` |
| `withCodeExecutor(executor)` | Set code execution environment | `this` |
| `withPlugins(...plugins)` | Add plugins | `this` |
| `withFallbackModels(...models)` | Configure fallback models | `this` |
| `withSubAgents(agents)` | Add sub-agents | `this` |
| `withSessionService(service, opts)` | Configure session management | `this` |
| `withSession(session)` | Use existing session | `this` |
| `withQuickSession(opts)` | Use in-memory session | `this` |
| `withMemory(service)` | Configure memory service | `this` |
| `withArtifactService(service)` | Configure artifact storage | `this` |
| `withRunConfig(config)` | Set runtime configuration | `this` |
| `withEventsCompaction(config)` | Configure event compaction | `this` |
| `withContextCacheConfig(config)` | Configure context caching | `this` |
| `withBeforeAgentCallback(cb)` | Set before-agent callback | `this` |
| `withAfterAgentCallback(cb)` | Set after-agent callback | `this` |
| `withBeforeModelCallback(cb)` | Set before-model callback | `this` |
| `withAfterModelCallback(cb)` | Set after-model callback | `this` |
| `withBeforeToolCallback(cb)` | Set before-tool callback | `this` |
| `withAfterToolCallback(cb)` | Set after-tool callback | `this` |
| `asSequential(agents)` | Build as sequential agent | `AgentBuilder<TOut, true>` |
| `asParallel(agents)` | Build as parallel agent | `AgentBuilder<TOut, true>` |
| `asLoop(agents, max)` | Build as loop agent | `this` |
| `asLangGraph(nodes, root)` | Build as graph agent | `this` |
| `build()` | Build agent and runner | `Promise<BuiltAgent>` |
| `buildWithSchema<T>()` | Build with type inference | `Promise<BuiltAgent<T>>` |
| `ask(message)` | Build and execute immediately | `Promise<response>` |
Next Steps
- LlmAgent: learn about single LLM-based agents
- Workflow Agents: build multi-agent systems
- Tools: add capabilities with tools
- Sessions: manage conversation state