Execution functions run agents and return their responses. Choose the right function based on your needs: streaming vs. non-streaming, single vs. multi-agent.

execute()

Stream agent responses in real time. Best for interactive applications.

Signature

swarm.ts:119:170
export async function execute<O, CIn, COut = CIn>(
  agent: Agent<O, CIn, COut>,
  messages: UIMessage[] | string,
  contextVariables: CIn,
  config?: {
    abortSignal?: AbortSignal;
    providerOptions?: Parameters<typeof streamText>[0]['providerOptions'];
    transform?: StreamTextTransform<ToolSet> | StreamTextTransform<ToolSet>[];
  },
): Promise<StreamTextResult<ToolSet, any>>

Parameters

agent
Agent
required
The agent to execute.
messages
UIMessage[] | string
required
User message(s). Can be:
  • Simple string: 'Hello!'
  • Single message: [user('Hello!')]
  • Conversation: [user('Hi'), assistant('Hello!'), user('Help me')]
contextVariables
CIn
required
Context to pass to the agent. Use {} if no context needed.
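The context parameter is generic. A minimal sketch of how a typed context object threads through, using a stand-in run() function rather than the real execute() (which also takes an agent and an optional config):

```typescript
// Stand-in for execute(), showing only how the CIn generic threads through.
interface UserContext {
  userId: string;
  locale: string;
}

function run<CIn>(message: string, contextVariables: CIn): { message: string; context: CIn } {
  // The agent's instructions and tools see contextVariables unchanged.
  return { message, context: contextVariables };
}

const ctx: UserContext = { userId: 'u_123', locale: 'en' };
const withContext = run('Hello!', ctx);
console.log(withContext.context.userId); // 'u_123'

// No context needed: pass an empty object.
const noContext = run('Hi', {});
console.log(Object.keys(noContext.context).length); // 0
```

Because CIn is inferred from the argument, tools and instructions downstream get full type checking on the context they receive.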
config.abortSignal
AbortSignal
Signal to cancel execution.
const controller = new AbortController();
execute(agent, 'Hi', {}, { abortSignal: controller.signal });
config.providerOptions
object
Provider-specific options.
providerOptions: {
  openai: { reasoningEffort: 'medium' }
}
config.transform
StreamTextTransform | StreamTextTransform[]
Stream transformations to apply.
import { smoothStream } from 'ai';

transform: smoothStream()

Return Value

Returns a Promise<StreamTextResult> with:
textStream
AsyncIterable<string>
Stream of text chunks.
for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}
fullStream
AsyncIterable<Event>
Stream of all events (text, tool calls, tool results).
for await (const event of stream.fullStream) {
  if (event.type === 'text-delta') {
    console.log(event.textDelta);
  }
}
toUIMessageStream()
() => ReadableStream
Convert to UI-compatible message stream.
for await (const chunk of stream.toUIMessageStream()) {
  // Process UI chunks
}
text
Promise<string>
Complete text response (await to get full text).
const text = await stream.text;
output
Promise<Output>
Structured output (if agent has output schema).
const output = await stream.output;
partialOutputStream
AsyncIterable<Partial<Output>>
Stream of partial structured output.
for await (const partial of stream.partialOutputStream) {
  console.log('Partial:', partial);
}
totalUsage
Promise<TokenUsage>
Token usage information.
const usage = await stream.totalUsage;
// { promptTokens: 100, completionTokens: 50, totalTokens: 150 }
sources
Promise<Source[]>
Sources cited (if applicable).
const sources = await stream.sources;

Example

import { openai } from '@ai-sdk/openai';
import { agent, execute } from '@deepagents/agent';

const assistant = agent({
  name: 'assistant',
  model: openai('gpt-4o'),
  prompt: 'You are a helpful assistant.',
});

const stream = await execute(assistant, 'Tell me a joke', {});

// Stream text
for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}

// Or get full text
const text = await stream.text;
console.log(text);

// Check usage
const usage = await stream.totalUsage;
console.log('Tokens:', usage.totalTokens);

stream()

Alias for execute().
swarm.ts:172
export const stream = execute;

generate()

Non-streaming execution. Returns the complete response in a single call. Best for batch processing.

Signature

swarm.ts:79:117
export async function generate<O, CIn, COut = CIn>(
  agent: Agent<O, CIn, COut>,
  messages: UIMessage[] | string,
  contextVariables: CIn,
  config?: {
    abortSignal?: AbortSignal;
    providerOptions?: Parameters<typeof generateText>[0]['providerOptions'];
  },
): Promise<GenerateTextResult<ToolSet, any>>

Parameters

Same as execute(), except no transform option (since it’s not streaming).

Return Value

Returns a Promise<GenerateTextResult> with:
text
string
Complete text response.
const result = await generate(agent, 'Hello', {});
console.log(result.text);
output
Output
Structured output (if agent has output schema).
const result = await generate(analyzer, 'Great product!', {});
console.log(result.output.sentiment); // 'positive'
usage
TokenUsage
Token usage information.
console.log(result.usage.totalTokens);
steps
StepResult[]
Execution steps including tool calls.
result.steps.forEach(step => {
  console.log('Step:', step.toolCalls);
});

Example

import { openai } from '@ai-sdk/openai';
import { agent, generate } from '@deepagents/agent';
import { z } from 'zod';

const analyzer = agent({
  name: 'analyzer',
  model: openai('gpt-4o'),
  prompt: 'Analyze sentiment.',
  output: z.object({
    sentiment: z.enum(['positive', 'negative', 'neutral']),
    confidence: z.number(),
  }),
});

const result = await generate(analyzer, 'I love this!', {});

console.log(result.text);
console.log(result.output); // { sentiment: 'positive', confidence: 0.95 }
console.log('Tokens:', result.usage.totalTokens);

swarm()

High-level streaming execution with automatic handoff support. Best for multi-agent systems.

Signature

swarm.ts:206:278
export function swarm<CIn>(
  agent: Agent<unknown, CIn, any>,
  messages: UIMessage[] | string,
  contextVariables: CIn,
  abortSignal?: AbortSignal,
)

Parameters

agent
Agent
required
The root/coordinator agent.
messages
UIMessage[] | string
required
User message(s).
contextVariables
CIn
required
Context to pass through the agent chain.
abortSignal
AbortSignal
Signal to cancel execution.

Return Value

Returns a ReadableStream of UI message chunks. Use with a UI message stream consumer.
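If your runtime's ReadableStream is not async-iterable, the stream can be drained with a reader instead. The chunk shape here ({ type, delta }) is an assumption for illustration, and a mock stream stands in for swarm()'s return value:

```typescript
// Mock stream standing in for swarm(...)'s return value.
// The chunk shape ({ type, delta }) is an assumption for illustration.
type UIChunk =
  | { type: 'text-delta'; delta: string }
  | { type: 'tool-call' };

const mock = new ReadableStream<UIChunk>({
  start(controller) {
    controller.enqueue({ type: 'text-delta', delta: 'Hel' });
    controller.enqueue({ type: 'tool-call' });
    controller.enqueue({ type: 'text-delta', delta: 'lo' });
    controller.close();
  },
});

// Drain with a reader; works even where for-await over streams is unavailable.
async function collectText(stream: ReadableStream<UIChunk>): Promise<string> {
  const reader = stream.getReader();
  let text = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    if (value.type === 'text-delta') text += value.delta;
  }
  return text;
}

const textPromise = collectText(mock);
textPromise.then((text) => console.log(text)); // 'Hello'
```

Accumulating only text-delta chunks recovers the plain text; other chunk types (tool calls, handoffs) can be routed to their own handlers in the same loop.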

Example

import { openai } from '@ai-sdk/openai';
import { agent, instructions, swarm } from '@deepagents/agent';

const researcher = agent({
  name: 'researcher',
  model: openai('gpt-4o'),
  prompt: 'Research topics thoroughly.',
  handoffDescription: 'Handles research tasks',
});

const writer = agent({
  name: 'writer',
  model: openai('gpt-4o'),
  prompt: 'Write engaging content.',
  handoffDescription: 'Handles writing tasks',
});

const coordinator = agent({
  name: 'coordinator',
  model: openai('gpt-4o'),
  prompt: instructions.swarm({
    purpose: ['Coordinate research and writing'],
    routine: [
      'Use transfer_to_researcher for facts',
      'Use transfer_to_writer for content',
    ],
  }),
  handoffs: [researcher, writer],
});

const stream = swarm(coordinator, 'Write a blog post about AI', {});

for await (const chunk of stream) {
  if (chunk.type === 'text-delta') {
    process.stdout.write(chunk.delta);
  }
}

Comparison

| Feature | execute() | generate() | swarm() |
| --- | --- | --- | --- |
| Streaming | ✅ Yes | ❌ No | ✅ Yes |
| Real-time output | ✅ Yes | ❌ No | ✅ Yes |
| Multi-agent handoffs | ⚠️ Partial | ❌ No | ✅ Full support |
| Structured output | ✅ Yes | ✅ Yes | ✅ Yes |
| Token usage | ✅ Yes | ✅ Yes | ✅ Yes |
| Best for | Interactive apps | Batch processing | Multi-agent workflows |

When to Use Which

Use execute() when:

  • Building interactive chat interfaces
  • Need real-time streaming
  • Want to show progress to users
  • Single agent or simple workflows

Use generate() when:

  • Processing in batches
  • Don’t need streaming
  • Want simpler code (single await)
  • Background processing

Use swarm() when:

  • Building multi-agent systems
  • Agents need to hand off to each other
  • Coordinating multiple specialists
  • Need full handoff tracking

Complete Example

import { openai } from '@ai-sdk/openai';
import { agent, execute, generate, swarm, instructions } from '@deepagents/agent';
import { tool } from 'ai';
import { z } from 'zod';

// Simple agent
const simple = agent({
  name: 'simple',
  model: openai('gpt-4o'),
  prompt: 'You are helpful.',
});

// Streaming
const stream1 = await execute(simple, 'Tell me a joke', {});
for await (const chunk of stream1.textStream) {
  process.stdout.write(chunk);
}

// Non-streaming
const result = await generate(simple, 'What is 2+2?', {});
console.log(result.text);

// Structured output
const analyzer = agent({
  name: 'analyzer',
  model: openai('gpt-4o'),
  prompt: 'Analyze sentiment.',
  output: z.object({
    sentiment: z.enum(['positive', 'negative', 'neutral']),
  }),
});

const analysis = await generate(analyzer, 'I love this!', {});
console.log(analysis.output.sentiment); // 'positive'

// Multi-agent
const specialist1 = agent({
  name: 'specialist_1',
  model: openai('gpt-4o'),
  prompt: 'You specialize in task A.',
  handoffDescription: 'Handles task A',
});

const specialist2 = agent({
  name: 'specialist_2',
  model: openai('gpt-4o'),
  prompt: 'You specialize in task B.',
  handoffDescription: 'Handles task B',
});

const coordinator = agent({
  name: 'coordinator',
  model: openai('gpt-4o'),
  prompt: instructions.swarm({
    purpose: ['Coordinate specialists'],
    routine: ['Delegate to appropriate specialist'],
  }),
  handoffs: [specialist1, specialist2],
});

const stream2 = swarm(coordinator, 'Complete complex task', {});
for await (const chunk of stream2) {
  if (chunk.type === 'text-delta') {
    process.stdout.write(chunk.delta);
  }
}

Error Handling

try {
  const stream = await execute(agent, 'Help me', {});
  const text = await stream.text;
  console.log(text);
} catch (error) {
  if (error.name === 'AbortError') {
    console.log('Execution was cancelled');
  } else if (error.message.includes('rate limit')) {
    console.log('Rate limit exceeded');
  } else {
    console.error('Error:', error.message);
  }
}
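A common companion pattern is cancelling on a deadline. A minimal sketch, assuming execute() and generate() reject with an AbortError once the signal passed as config.abortSignal fires:

```typescript
// Abort after a deadline. controller.signal is what you would pass as
// config.abortSignal to execute() or generate().
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(new Error('deadline exceeded')), 30_000);

// This listener covers the abort path; if the run completes first,
// call clearTimeout(timer) yourself after awaiting the result.
controller.signal.addEventListener('abort', () => clearTimeout(timer));

console.log(controller.signal.aborted); // false (nothing has fired yet)

// Node 17.3+ offers a one-liner with the same effect:
const signal = AbortSignal.timeout(30_000);
console.log(signal.aborted); // false
```

The abort reason surfaces on the rejected error, so the AbortError branch of the catch block above can report why the run was cancelled.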

See Also

  • agent(): Create agents
  • Utilities: Helper functions
  • Streaming Guide: Learn about streaming
