## Overview

Workflow agents coordinate multiple sub-agents to solve complex tasks. ADK provides three main workflow patterns:

- **SequentialAgent**: execute agents one after another
- **ParallelAgent**: execute agents concurrently in isolation
- **LangGraphAgent**: graph-based workflows with conditional branching

See also LoopAgent for iterative execution.
## SequentialAgent

Execute sub-agents in sequence, passing state between them.

### Basic Usage

```ts
import { AgentBuilder, LlmAgent } from '@iqai/adk';

const extractor = new LlmAgent({
  name: 'extractor',
  description: 'Extracts key information',
  instruction: 'Extract the main points from the user message',
  outputKey: 'extracted_points',
  model: 'gemini-2.5-flash'
});

const summarizer = new LlmAgent({
  name: 'summarizer',
  description: 'Creates summaries',
  instruction: 'Summarize these points: {extracted_points}',
  model: 'gemini-2.5-flash'
});

const { runner } = await AgentBuilder
  .create('sequential-workflow')
  .asSequential([extractor, summarizer])
  .build();

const response = await runner.ask('Analyze this long article...');
```
### Using AgentBuilder

```ts
const { runner } = await AgentBuilder
  .create('workflow')
  .asSequential([
    agent1,
    agent2,
    agent3
  ])
  .build();
```
### Direct Construction

```ts
import { SequentialAgent } from '@iqai/adk';

const workflow = new SequentialAgent({
  name: 'processing-pipeline',
  description: 'Multi-stage processing pipeline',
  subAgents: [
    validateAgent,
    processAgent,
    finalizeAgent
  ]
});
```
### State Flow

Sequential agents pass state between sub-agents:

```ts
const step1 = new LlmAgent({
  name: 'step1',
  description: 'First step',
  instruction: 'Extract customer info',
  outputKey: 'customer_info', // Saved to state
  model: 'gemini-2.5-flash'
});

const step2 = new LlmAgent({
  name: 'step2',
  description: 'Second step',
  instruction: 'Process order for: {customer_info}', // Reads from state
  outputKey: 'order_result',
  model: 'gemini-2.5-flash'
});

const step3 = new LlmAgent({
  name: 'step3',
  description: 'Third step',
  instruction: 'Confirm order: {order_result}',
  model: 'gemini-2.5-flash'
});

const { runner } = await AgentBuilder
  .create('order-workflow')
  .asSequential([step1, step2, step3])
  .withQuickSession({ state: {} })
  .build();
```
### Live Mode

In live (audio/video) mode, sequential agents add a `task_completed()` function to each sub-agent:

```ts
const { runner } = await AgentBuilder
  .create('live-sequential')
  .asSequential([agent1, agent2, agent3])
  .build();

// In live mode, each agent can call task_completed() to move to the next agent
// The model automatically signals completion when the task is done
```

The `task_completed()` function is automatically added to sub-agents in live mode. The LLM calls this function to signal task completion and move to the next agent.
## ParallelAgent

Execute sub-agents concurrently in isolated branches.

### Basic Usage

```ts
import { AgentBuilder, LlmAgent } from '@iqai/adk';

const technicalReview = new LlmAgent({
  name: 'technical_reviewer',
  description: 'Reviews technical accuracy',
  instruction: 'Review the technical accuracy of the content',
  model: 'gpt-4'
});

const styleReview = new LlmAgent({
  name: 'style_reviewer',
  description: 'Reviews writing style',
  instruction: 'Review the writing style and clarity',
  model: 'gpt-4'
});

const grammarReview = new LlmAgent({
  name: 'grammar_reviewer',
  description: 'Reviews grammar',
  instruction: 'Review grammar and punctuation',
  model: 'gemini-2.5-flash'
});

const { runner } = await AgentBuilder
  .create('parallel-review')
  .asParallel([technicalReview, styleReview, grammarReview])
  .build();

// All three agents run concurrently
const results = await runner.ask('Review this article: ...');
```
### Using AgentBuilder

```ts
const { runner } = await AgentBuilder
  .create('parallel-workflow')
  .asParallel([
    agent1,
    agent2,
    agent3
  ])
  .build();
```
### Direct Construction

```ts
import { ParallelAgent } from '@iqai/adk';

const workflow = new ParallelAgent({
  name: 'multi-perspective',
  description: 'Multiple parallel perspectives',
  subAgents: [
    optimisticAgent,
    pessimisticAgent,
    neutralAgent
  ]
});
```
### Isolated Execution

Each sub-agent runs in an isolated branch:

```ts
import { ParallelAgent } from '@iqai/adk';

const parallel = new ParallelAgent({
  name: 'isolated-agents',
  description: 'Run agents in isolation',
  subAgents: [
    agent1, // Branch: isolated-agents.agent1
    agent2, // Branch: isolated-agents.agent2
    agent3  // Branch: isolated-agents.agent3
  ]
});

// Each agent sees the same initial state
// Changes to state are isolated per branch
```
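This isolation can be pictured as each branch receiving its own copy of the initial state, so one branch's writes never leak into another. The sketch below is a simplified standalone model of that semantics, not ADK internals; the `runBranch` helper and state shape are hypothetical.

```ts
// Simplified model of per-branch state isolation (illustration only, not ADK internals).
type State = Record<string, unknown>;

// Each branch starts from a copy of the initial state, so writes don't leak.
function runBranch(initial: State, write: (s: State) => void): State {
  const branchState: State = { ...initial }; // copy-on-branch
  write(branchState);
  return branchState;
}

const initial: State = { topic: 'pricing' };
const a = runBranch(initial, s => { s.verdict = 'optimistic'; });
const b = runBranch(initial, s => { s.verdict = 'pessimistic'; });

// Both branches saw the same input, but neither sees the other's writes,
// and the initial state is untouched.
console.log(a.verdict, b.verdict, 'verdict' in initial);
// → optimistic pessimistic false
```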
### Multi-Agent Response

Parallel agents return structured responses:

```ts
type MultiAgentResponse = Array<{
  agent: string;
  response: string;
}>;

const { runner } = await AgentBuilder
  .create('parallel')
  .asParallel([agent1, agent2, agent3])
  .build();

const results = await runner.ask('Analyze this');
// [
//   { agent: 'agent1', response: '...' },
//   { agent: 'agent2', response: '...' },
//   { agent: 'agent3', response: '...' }
// ]
```
### Use Cases

- **Multiple Perspectives**: get different viewpoints on the same input
- **Algorithm Comparison**: run different algorithms simultaneously
- **Ensemble Methods**: generate multiple responses for evaluation
- **A/B Testing**: compare different approaches in parallel
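Ensemble-style use cases typically end with an aggregation step over the `Array<{ agent, response }>` result shape that parallel agents return. A minimal sketch in plain TypeScript, operating on an already-fetched result; the `combineReviews` helper is hypothetical:

```ts
// Hypothetical aggregation over the multi-agent response shape.
type MultiAgentResponse = Array<{ agent: string; response: string }>;

// Combine per-agent responses into one labeled report.
function combineReviews(results: MultiAgentResponse): string {
  return results
    .map(({ agent, response }) => `## ${agent}\n${response}`)
    .join('\n\n');
}

const results: MultiAgentResponse = [
  { agent: 'technical_reviewer', response: 'Accurate.' },
  { agent: 'style_reviewer', response: 'Clear.' }
];
const report = combineReviews(results);
console.log(report);
```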
## LangGraphAgent

Graph-based workflows with conditional branching and loops.

### Basic Usage

```ts
import { AgentBuilder, LlmAgent, type LangGraphNode } from '@iqai/adk';

const classifier = new LlmAgent({
  name: 'classifier',
  description: 'Classifies input',
  instruction: 'Classify this as technical or general',
  outputKey: 'classification',
  model: 'gemini-2.5-flash'
});

const technicalHandler = new LlmAgent({
  name: 'technical',
  description: 'Handles technical queries',
  instruction: 'Provide technical response',
  model: 'gpt-4'
});

const generalHandler = new LlmAgent({
  name: 'general',
  description: 'Handles general queries',
  instruction: 'Provide general response',
  model: 'gemini-2.5-flash'
});

const nodes: LangGraphNode[] = [
  {
    name: 'classify',
    agent: classifier,
    targets: ['technical', 'general']
  },
  {
    name: 'technical',
    agent: technicalHandler,
    condition: async (lastEvent, context) => {
      const classification = context.state.get('classification', '');
      return classification.includes('technical');
    }
  },
  {
    name: 'general',
    agent: generalHandler,
    condition: async (lastEvent, context) => {
      const classification = context.state.get('classification', '');
      return classification.includes('general');
    }
  }
];

const { runner } = await AgentBuilder
  .create('conditional-workflow')
  .asLangGraph(nodes, 'classify')
  .build();
```
### Node Configuration

```ts
interface LangGraphNode {
  // Required
  name: string;       // Unique node identifier
  agent: BaseAgent;   // Agent to execute

  // Optional
  targets?: string[]; // Next nodes to consider
  condition?: (       // Condition to execute this node
    lastEvent: Event,
    context: InvocationContext
  ) => boolean | Promise<boolean>;
}
```
### Using AgentBuilder

```ts
const nodes: LangGraphNode[] = [
  { name: 'node1', agent: agent1, targets: ['node2', 'node3'] },
  { name: 'node2', agent: agent2, condition: async (event, ctx) => { ... } },
  { name: 'node3', agent: agent3 }
];

const { runner } = await AgentBuilder
  .create('graph-workflow')
  .asLangGraph(nodes, 'node1') // Start at node1
  .build();
```
### Direct Construction

```ts
import { LangGraphAgent, type LangGraphNode } from '@iqai/adk';

const nodes: LangGraphNode[] = [
  // Define nodes...
];

const workflow = new LangGraphAgent({
  name: 'graph-workflow',
  description: 'Graph-based workflow',
  nodes: nodes,
  rootNode: 'start',
  maxSteps: 50 // Prevent infinite loops
});
```
### Conditional Branching

```ts
const nodes: LangGraphNode[] = [
  {
    name: 'validate',
    agent: validatorAgent,
    targets: ['process', 'error']
  },
  {
    name: 'process',
    agent: processorAgent,
    condition: async (lastEvent, context) => {
      const isValid = context.state.get('isValid', false);
      return isValid === true;
    },
    targets: ['finalize']
  },
  {
    name: 'error',
    agent: errorAgent,
    condition: async (lastEvent, context) => {
      const isValid = context.state.get('isValid', false);
      return isValid === false;
    }
  },
  {
    name: 'finalize',
    agent: finalizerAgent
  }
];
```
### Loops and Iteration

```ts
const nodes: LangGraphNode[] = [
  {
    name: 'process',
    agent: processorAgent,
    targets: ['check']
  },
  {
    name: 'check',
    agent: checkerAgent,
    targets: ['process', 'complete'] // Can loop back
  },
  {
    name: 'complete',
    agent: completerAgent,
    condition: async (lastEvent, context) => {
      const iterations = context.state.get('iterations', 0);
      return iterations >= 3; // Exit after 3 iterations
    }
  }
];

const workflow = new LangGraphAgent({
  name: 'iterative-workflow',
  description: 'Workflow with loops',
  nodes,
  rootNode: 'process',
  maxSteps: 100 // Prevent infinite loops
});
```
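One way to reason about such a loop is as a walker that runs the current node, then moves to the first eligible target (one with no condition, or whose condition passes), stopping when no target is eligible or the step cap is reached. The sketch below is a simplified standalone model of that idea, not the LangGraphAgent implementation; the node `run` functions and the first-eligible-target rule are assumptions for illustration.

```ts
// Standalone model of a conditional graph with a loop and a step cap.
// Illustration only; not the LangGraphAgent implementation.
interface Node {
  name: string;
  run: (state: Map<string, number>) => void;
  targets?: string[];
  condition?: (state: Map<string, number>) => boolean;
}

function walk(nodes: Node[], root: string, maxSteps: number): string[] {
  const byName = new Map(nodes.map(n => [n.name, n] as const));
  const state = new Map<string, number>();
  const visited: string[] = [];
  let current = byName.get(root);
  for (let step = 0; current && step < maxSteps; step++) {
    visited.push(current.name);
    current.run(state);
    // Move to the first eligible target: no condition, or condition passes.
    current = (current.targets ?? [])
      .map(t => byName.get(t)!)
      .find(n => !n.condition || n.condition(state));
  }
  return visited;
}

// 'complete' is listed first so it is preferred as soon as its condition holds.
const visited = walk(
  [
    {
      name: 'process',
      run: s => s.set('iterations', (s.get('iterations') ?? 0) + 1),
      targets: ['check']
    },
    { name: 'check', run: () => {}, targets: ['complete', 'process'] },
    {
      name: 'complete',
      run: () => {},
      condition: s => (s.get('iterations') ?? 0) >= 3
    }
  ],
  'process',
  100
);
// Loops process -> check three times, then exits through 'complete'.
console.log(visited);
```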
### Max Steps Protection

```ts
const workflow = new LangGraphAgent({
  name: 'workflow',
  description: 'Workflow',
  nodes,
  rootNode: 'start',
  maxSteps: 50 // Stop after 50 steps
});

// Update max steps
workflow.setMaxSteps(100);
const currentMax = workflow.getMaxSteps();
```
### Execution Results

```ts
import { LangGraphAgent } from '@iqai/adk';

const { agent } = await AgentBuilder
  .create('graph')
  .asLangGraph(nodes, 'start')
  .build();

// Cast to LangGraphAgent for additional methods
const graphAgent = agent as LangGraphAgent;

// Get execution results
const results = graphAgent.getExecutionResults();
// [
//   { node: 'node1', events: [...] },
//   { node: 'node2', events: [...] }
// ]

// Clear history
graphAgent.clearExecutionHistory();

// Get all nodes
const allNodes = graphAgent.getNodes();

// Get specific node
const node = graphAgent.getNode('node1');

// Get root node
const root = graphAgent.getRootNodeName();
```
## Multi-Agent Response Type

When using `asSequential()` or `asParallel()`, the return type changes:

```ts
import { MultiAgentResponse } from '@iqai/adk';

const { runner } = await AgentBuilder
  .create('multi')
  .asParallel([agent1, agent2, agent3])
  .build();

const results: MultiAgentResponse = await runner.ask('Query');
// Type: Array<{ agent: string; response: string }>

for (const { agent, response } of results) {
  console.log(`${agent}: ${response}`);
}
```
## Schema Restrictions

Output schemas cannot be applied to SequentialAgent or ParallelAgent. Define schemas on individual sub-agents instead:

```ts
// This will throw an error
const { runner } = await AgentBuilder
  .create('workflow')
  .asSequential([agent1, agent2])
  .withOutputSchema(schema) // ERROR!
  .build();
```

```ts
// Do this instead
const agent1 = new LlmAgent({
  name: 'agent1',
  description: 'Agent 1',
  outputSchema: schema1,
  model: 'gemini-2.5-flash'
});

const agent2 = new LlmAgent({
  name: 'agent2',
  description: 'Agent 2',
  outputSchema: schema2,
  model: 'gemini-2.5-flash'
});

const { runner } = await AgentBuilder
  .create('workflow')
  .asSequential([agent1, agent2])
  .build();
```
## Comparison

| Feature | Sequential | Parallel | LangGraph |
|---|---|---|---|
| Execution | One after another | Concurrent | Conditional graph |
| State Sharing | Passes between agents | Isolated branches | Shared state |
| Branching | Linear | No branching | Conditional |
| Loops | No | No | Yes |
| Use Case | Multi-step pipelines | Multiple perspectives | Complex workflows |
| Response Type | Multi-agent array | Multi-agent array | String or schema |
## Best Practices

- **Sequential**: multi-step processes where each step depends on the previous
- **Parallel**: independent tasks that can run concurrently
- **LangGraph**: complex workflows with conditional logic
- **Loop**: iterative refinement (see LoopAgent)
### Use Output Keys for State Flow

In sequential workflows, use `outputKey` to pass data:

```ts
const step1 = new LlmAgent({
  name: 'step1',
  description: 'First step',
  outputKey: 'step1_result',
  model: 'gemini-2.5-flash'
});

const step2 = new LlmAgent({
  name: 'step2',
  description: 'Second step',
  instruction: 'Use this data: {step1_result}',
  model: 'gemini-2.5-flash'
});
```
### Use Descriptive Node Names

In LangGraph, use descriptive node names:

```ts
// Good
const nodes = [
  { name: 'validate_input', agent: validator },
  { name: 'process_data', agent: processor },
  { name: 'handle_error', agent: errorHandler }
];
```

```ts
// Less clear
const nodes = [
  { name: 'step1', agent: validator },
  { name: 'step2', agent: processor },
  { name: 'step3', agent: errorHandler }
];
```
### Set maxSteps to Prevent Infinite Loops

Always set `maxSteps` for LangGraph to prevent infinite loops:

```ts
const workflow = new LangGraphAgent({
  name: 'workflow',
  description: 'Workflow',
  nodes,
  rootNode: 'start',
  maxSteps: 50 // Required!
});
```
### Test Conditions Thoroughly

Ensure conditions are robust:

```ts
{
  name: 'next_step',
  agent: nextAgent,
  condition: async (lastEvent, context) => {
    // Good: check for existence and type
    const value = context.state.get('key');
    if (value === undefined || value === null) {
      return false;
    }
    return typeof value === 'string' && value.length > 0;
  }
}
```
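Checks like this tend to recur across nodes, so a small factory can keep them consistent. The helper below is a hypothetical sketch in plain TypeScript; the `StateLike` interface assumes only the `context.state.get(key)` behavior used in the examples above, and the stub state exists purely for illustration.

```ts
// Hypothetical condition factory; StateLike mirrors the context.state.get(key)
// usage seen in the examples above.
interface StateLike {
  get(key: string, defaultValue?: unknown): unknown;
}
type Condition = (
  lastEvent: unknown,
  context: { state: StateLike }
) => Promise<boolean>;

// Builds a condition that passes only when the key holds a non-empty string.
function nonEmptyString(key: string): Condition {
  return async (_lastEvent, context) => {
    const value = context.state.get(key);
    return typeof value === 'string' && value.length > 0;
  };
}

// Stub state for illustration only.
const state: StateLike = {
  get: (k) => (k === 'classification' ? 'technical' : undefined)
};
const cond = nonEmptyString('classification');
```

Reusing one factory per check keeps the existence-and-type logic in a single place instead of copy-pasted across node definitions.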
## Example: Complete Workflow

```ts
import { AgentBuilder, LlmAgent, type LangGraphNode } from '@iqai/adk';
import { z } from 'zod';

// Define agents
const inputValidator = new LlmAgent({
  name: 'validator',
  description: 'Validates input data',
  instruction: 'Check if the input is valid',
  outputKey: 'validation_result',
  outputSchema: z.object({
    isValid: z.boolean(),
    errors: z.array(z.string())
  }),
  model: 'gemini-2.5-flash'
});

const dataProcessor = new LlmAgent({
  name: 'processor',
  description: 'Processes validated data',
  instruction: 'Process this data: {validation_result}',
  outputKey: 'processed_data',
  model: 'gemini-2.5-flash'
});

const errorHandler = new LlmAgent({
  name: 'error_handler',
  description: 'Handles validation errors',
  instruction: 'Explain these errors: {validation_result}',
  model: 'gemini-2.5-flash'
});

const finalizer = new LlmAgent({
  name: 'finalizer',
  description: 'Finalizes processing',
  instruction: 'Create final output from: {processed_data}',
  model: 'gemini-2.5-flash'
});

// Define graph
const nodes: LangGraphNode[] = [
  {
    name: 'validate',
    agent: inputValidator,
    targets: ['process', 'handle_error']
  },
  {
    name: 'process',
    agent: dataProcessor,
    condition: async (lastEvent, context) => {
      const result = context.state.get('validation_result');
      return result?.isValid === true;
    },
    targets: ['finalize']
  },
  {
    name: 'handle_error',
    agent: errorHandler,
    condition: async (lastEvent, context) => {
      const result = context.state.get('validation_result');
      return result?.isValid === false;
    }
  },
  {
    name: 'finalize',
    agent: finalizer
  }
];

// Build workflow
const { runner } = await AgentBuilder
  .create('data-workflow')
  .asLangGraph(nodes, 'validate')
  .withQuickSession({ state: {} })
  .build();

// Execute
const result = await runner.ask('Process this data: ...');
```
## Next Steps

- **LoopAgent**: iterative agent execution
- **LangGraphAgent**: detailed LangGraph documentation
- **State Management**: managing state across agents
- **Examples**: multi-agent system examples