Execution functions run agents and return their responses. Choose the right function based on your needs: streaming vs. non-streaming, single vs. multi-agent.
execute()
Stream agent responses in real-time. Best for interactive applications.
Signature
```typescript
export async function execute<O, CIn, COut = CIn>(
  agent: Agent<O, CIn, COut>,
  messages: UIMessage[] | string,
  contextVariables: CIn,
  config?: {
    abortSignal?: AbortSignal;
    providerOptions?: Parameters<typeof streamText>[0]['providerOptions'];
    transform?: StreamTextTransform<ToolSet> | StreamTextTransform<ToolSet>[];
  },
): Promise<StreamTextResult<ToolSet, any>>
```
Parameters
messages
UIMessage[] | string
required
User message(s). Can be:
- Simple string: `'Hello!'`
- Single message: `[user('Hello!')]`
- Conversation: `[user('Hi'), assistant('Hello!'), user('Help me')]`

contextVariables
CIn
required
Context to pass to the agent. Use `{}` if no context is needed.

config.abortSignal
AbortSignal
Signal to cancel execution.
```typescript
const controller = new AbortController();
execute(agent, 'Hi', {}, { abortSignal: controller.signal });
```

config.providerOptions
Provider-specific options.
```typescript
providerOptions: {
  openai: { reasoningEffort: 'medium' }
}
```

config.transform
StreamTextTransform | StreamTextTransform[]
Stream transformations to apply.
```typescript
import { smoothStream } from 'ai';

transform: smoothStream()
```
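A common use of `config.abortSignal` is cancelling a run after a timeout. The sketch below wires an `AbortController` to a timer; `slowTask` is a hypothetical stand-in for an `execute()` call (not part of the library), kept here so the example is self-contained.

```typescript
// Sketch: cancelling a long-running call after a timeout.
// `slowTask` is a placeholder for execute(); the same wiring applies to the real call.
function slowTask(signal: AbortSignal): Promise<string> {
  return new Promise((resolve, reject) => {
    const fail = () => reject(new DOMException('Aborted', 'AbortError'));
    if (signal.aborted) return fail();
    const timer = setTimeout(() => resolve('done'), 5_000);
    signal.addEventListener('abort', () => {
      clearTimeout(timer);
      fail();
    });
  });
}

const controller = new AbortController();
setTimeout(() => controller.abort(), 100); // cancel after 100 ms

slowTask(controller.signal)
  .then((text) => console.log(text))
  .catch((err) => {
    if ((err as Error).name === 'AbortError') console.log('cancelled');
  });
```

With a real `execute()` call, pass the same signal via `{ abortSignal: controller.signal }` and catch the `AbortError` around the stream consumption.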
Return Value
Returns a Promise<StreamTextResult> with:

textStream
Stream of text chunks.
```typescript
for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}
```

fullStream
Stream of all events (text, tool calls, tool results).
```typescript
for await (const event of stream.fullStream) {
  if (event.type === 'text-delta') {
    console.log(event.textDelta);
  }
}
```

toUIMessageStream()
Convert to a UI-compatible message stream.
```typescript
for await (const chunk of stream.toUIMessageStream()) {
  // Process UI chunks
}
```

text
Complete text response (await to get the full text).
```typescript
const text = await stream.text;
```

output
Structured output (if the agent has an output schema).
```typescript
const output = await stream.output;
```

partialOutputStream
AsyncIterable<Partial<Output>>
Stream of partial structured output.
```typescript
for await (const partial of stream.partialOutputStream) {
  console.log('Partial:', partial);
}
```

totalUsage
Token usage information.
```typescript
const usage = await stream.totalUsage;
// { promptTokens: 100, completionTokens: 50, totalTokens: 150 }
```

sources
Sources cited (if applicable).
```typescript
const sources = await stream.sources;
```
Example
```typescript
import { openai } from '@ai-sdk/openai';
import { agent, execute } from '@deepagents/agent';

const assistant = agent({
  name: 'assistant',
  model: openai('gpt-4o'),
  prompt: 'You are a helpful assistant.',
});

const stream = await execute(assistant, 'Tell me a joke', {});

// Stream text
for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}

// Or get the full text
const text = await stream.text;
console.log(text);

// Check usage
const usage = await stream.totalUsage;
console.log('Tokens:', usage.totalTokens);
```
stream()
Alias for execute().
```typescript
export const stream = execute;
```
generate()
Non-streaming execution. Returns complete response in one call. Best for batch processing.
Signature
```typescript
export async function generate<O, CIn, COut = CIn>(
  agent: Agent<O, CIn, COut>,
  messages: UIMessage[] | string,
  contextVariables: CIn,
  config?: {
    abortSignal?: AbortSignal;
    providerOptions?: Parameters<typeof generateText>[0]['providerOptions'];
  },
): Promise<GenerateTextResult<ToolSet, any>>
```
Parameters
Same as execute(), except no transform option (since it’s not streaming).
Return Value
Returns a Promise<GenerateTextResult> with:
text
Complete text response.
```typescript
const result = await generate(agent, 'Hello', {});
console.log(result.text);
```

output
Structured output (if the agent has an output schema).
```typescript
const result = await generate(analyzer, 'Great product!', {});
console.log(result.output.sentiment); // 'positive'
```

usage
Token usage information.
```typescript
console.log(result.usage.totalTokens);
```

steps
Execution steps, including tool calls.
```typescript
result.steps.forEach(step => {
  console.log('Step:', step.toolCalls);
});
```
Example
```typescript
import { openai } from '@ai-sdk/openai';
import { agent, generate } from '@deepagents/agent';
import { z } from 'zod';

const analyzer = agent({
  name: 'analyzer',
  model: openai('gpt-4o'),
  prompt: 'Analyze sentiment.',
  output: z.object({
    sentiment: z.enum(['positive', 'negative', 'neutral']),
    confidence: z.number(),
  }),
});

const result = await generate(analyzer, 'I love this!', {});
console.log(result.text);
console.log(result.output); // { sentiment: 'positive', confidence: 0.95 }
console.log('Tokens:', result.usage.totalTokens);
```
swarm()
High-level streaming execution with automatic handoff support. Best for multi-agent systems.
Signature
```typescript
export function swarm<CIn>(
  agent: Agent<unknown, CIn, any>,
  messages: UIMessage[] | string,
  contextVariables: CIn,
  abortSignal?: AbortSignal,
)
```
Parameters
agent
required
The root/coordinator agent.

messages
UIMessage[] | string
required
User message(s).

contextVariables
CIn
required
Context to pass through the agent chain.

abortSignal
AbortSignal
Signal to cancel execution.
Return Value
Returns a ReadableStream of UI message chunks. Use with a UI message stream consumer.
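The consumption pattern is plain `for await` iteration over chunks. The sketch below shows it with mock data: `mockSwarmStream` is an async generator standing in for the `swarm()` return value (not part of the library), and the `'text-delta'` chunk shape with a `delta` field mirrors the example that follows.

```typescript
// Sketch: consuming a stream of UI message chunks.
// `mockSwarmStream` is mock data standing in for swarm() output.
type UIChunk = { type: string; delta?: string };

async function* mockSwarmStream(): AsyncGenerator<UIChunk> {
  yield { type: 'text-delta', delta: 'Hello' };
  yield { type: 'text-delta', delta: ', world' };
  yield { type: 'finish' };
}

let text = '';
for await (const chunk of mockSwarmStream()) {
  if (chunk.type === 'text-delta') text += chunk.delta ?? '';
}
console.log(text); // 'Hello, world'
```

A real `swarm()` stream is consumed the same way, or forwarded to a UI message stream consumer on the client.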
Example
```typescript
import { openai } from '@ai-sdk/openai';
import { agent, instructions, swarm } from '@deepagents/agent';

const researcher = agent({
  name: 'researcher',
  model: openai('gpt-4o'),
  prompt: 'Research topics thoroughly.',
  handoffDescription: 'Handles research tasks',
});

const writer = agent({
  name: 'writer',
  model: openai('gpt-4o'),
  prompt: 'Write engaging content.',
  handoffDescription: 'Handles writing tasks',
});

const coordinator = agent({
  name: 'coordinator',
  model: openai('gpt-4o'),
  prompt: instructions.swarm({
    purpose: ['Coordinate research and writing'],
    routine: [
      'Use transfer_to_researcher for facts',
      'Use transfer_to_writer for content',
    ],
  }),
  handoffs: [researcher, writer],
});

const stream = swarm(coordinator, 'Write a blog post about AI', {});

for await (const chunk of stream) {
  if (chunk.type === 'text-delta') {
    process.stdout.write(chunk.delta);
  }
}
```
Comparison
| Feature | execute() | generate() | swarm() |
| --- | --- | --- | --- |
| Streaming | ✅ Yes | ❌ No | ✅ Yes |
| Real-time output | ✅ Yes | ❌ No | ✅ Yes |
| Multi-agent handoffs | ⚠️ Partial | ❌ No | ✅ Full support |
| Structured output | ✅ Yes | ✅ Yes | ✅ Yes |
| Token usage | ✅ Yes | ✅ Yes | ✅ Yes |
| Best for | Interactive apps | Batch processing | Multi-agent workflows |
When to Use Which
Use execute() when:
- Building interactive chat interfaces
- You need real-time streaming
- You want to show progress to users
- You have a single agent or simple workflows

Use generate() when:
- Processing in batches
- You don't need streaming
- You want simpler code (a single await)
- Running background processing

Use swarm() when:
- Building multi-agent systems
- Agents need to hand off to each other
- Coordinating multiple specialists
- You need full handoff tracking
Complete Example
```typescript
import { openai } from '@ai-sdk/openai';
import { agent, execute, generate, swarm, instructions } from '@deepagents/agent';
import { z } from 'zod';

// Simple agent
const simple = agent({
  name: 'simple',
  model: openai('gpt-4o'),
  prompt: 'You are helpful.',
});

// Streaming
const stream1 = await execute(simple, 'Tell me a joke', {});
for await (const chunk of stream1.textStream) {
  process.stdout.write(chunk);
}

// Non-streaming
const result = await generate(simple, 'What is 2+2?', {});
console.log(result.text);

// Structured output
const analyzer = agent({
  name: 'analyzer',
  model: openai('gpt-4o'),
  prompt: 'Analyze sentiment.',
  output: z.object({
    sentiment: z.enum(['positive', 'negative', 'neutral']),
  }),
});

const analysis = await generate(analyzer, 'I love this!', {});
console.log(analysis.output.sentiment); // 'positive'

// Multi-agent
const specialist1 = agent({
  name: 'specialist_1',
  model: openai('gpt-4o'),
  prompt: 'You specialize in task A.',
  handoffDescription: 'Handles task A',
});

const specialist2 = agent({
  name: 'specialist_2',
  model: openai('gpt-4o'),
  prompt: 'You specialize in task B.',
  handoffDescription: 'Handles task B',
});

const coordinator = agent({
  name: 'coordinator',
  model: openai('gpt-4o'),
  prompt: instructions.swarm({
    purpose: ['Coordinate specialists'],
    routine: ['Delegate to appropriate specialist'],
  }),
  handoffs: [specialist1, specialist2],
});

const stream2 = swarm(coordinator, 'Complete complex task', {});
for await (const chunk of stream2) {
  if (chunk.type === 'text-delta') {
    process.stdout.write(chunk.delta);
  }
}
```
Error Handling
```typescript
try {
  const stream = await execute(agent, 'Help me', {});
  const text = await stream.text;
  console.log(text);
} catch (error) {
  if (error.name === 'AbortError') {
    console.log('Execution was cancelled');
  } else if (error.message.includes('rate limit')) {
    console.log('Rate limit exceeded');
  } else {
    console.error('Error:', error.message);
  }
}
```
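Rate-limit errors are often worth retrying rather than surfacing immediately. Below is a minimal sketch of a retry helper with exponential backoff; `withRetry` and its message-based rate-limit check are illustrative assumptions, not part of the library, and the `error.message` matching mirrors the pattern in the handler above.

```typescript
// Sketch: retry a rate-limited call with exponential backoff.
// `fn` would typically wrap a generate() or execute() call.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 1_000,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      const message = error instanceof Error ? error.message : String(error);
      // Only retry rate-limit errors, and only while attempts remain.
      if (attempt >= maxAttempts || !message.includes('rate limit')) throw error;
      // Backoff: baseDelayMs, then 2x, 4x, ...
      await new Promise((resolve) =>
        setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1)),
      );
    }
  }
}
```

Usage would look like `await withRetry(() => generate(agent, 'Help me', {}))`, with the delays tuned to your provider's rate-limit window.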
See Also
- Utilities: helper functions
- Streaming Guide: learn about streaming