Overview
DurableAgent is a class for building AI agents that maintain state across workflow executions. It wraps AI model providers with durable execution capabilities, ensuring that your AI agents can survive interruptions, handle long-running operations, and automatically recover from failures.
Constructor
```typescript
const agent = new DurableAgent(options);
```
options: DurableAgentOptions (required)
Configuration for the durable agent.

model: string | (() => Promise&lt;LanguageModel&gt;) (required)
The AI model to use. Can be:
- A string for AI Gateway (e.g., 'anthropic/claude-opus')
- A function returning a model instance from a provider
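As a rough sketch of the two accepted shapes (the gateway id is illustrative, and `LanguageModel` is stood in by a minimal local stub rather than the real SDK type):

```typescript
// Sketch of the two accepted `model` shapes. The gateway id and the stub
// LanguageModel type are illustrative placeholders, not verified values.
type LanguageModel = { modelId: string };
type ModelOption = string | (() => Promise<LanguageModel>);

// 1. A plain string routed through AI Gateway:
const viaGateway: ModelOption = 'anthropic/claude-opus';

// 2. A function that lazily resolves a provider model instance:
const viaFactory: ModelOption = async () => ({
  modelId: 'claude-3-5-sonnet-20241022',
});

console.log(typeof viaGateway, typeof viaFactory); // string function
```

The function form is useful when model construction itself should happen inside the workflow rather than at module load time.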
tools: ToolSet
Tools available to the agent. Each tool should have:
- description: Human-readable description
- inputSchema: Zod schema for validation
- execute: Function to run (can be a workflow step)
system: string
System prompt to guide the agent's behavior.

toolChoice: 'auto' | 'required' | 'none' | { type: 'tool', toolName: string }
Strategy for tool selection. Default: 'auto'.

maxOutputTokens: number
Maximum tokens to generate in responses.

temperature: number
Sampling temperature (0-1+). Higher values increase randomness. Recommended: set either temperature or topP, not both.

topP: number
Nucleus sampling probability (0-1). Only tokens within the top P probability mass are considered. Recommended: set either temperature or topP, not both.

topK: number
Only sample from the top K options. Advanced use only.

presencePenalty: number
Penalty for repeating information already in the prompt (-1 to 1). 0 means no penalty.

frequencyPenalty: number
Penalty for repeatedly using the same words or phrases (-1 to 1). 0 means no penalty.

stopSequences: string[]
Stop generation when any of these sequences is encountered.

seed: number
Random seed for deterministic generation (if supported by the model).

maxRetries: number
Maximum retry attempts for transient failures.
experimental_telemetry: TelemetrySettings
Observability configuration for tracing and metrics.
Methods
stream()
Streams AI responses with tool execution and state management.
```typescript
const result = await agent.stream(options);
```
options: DurableAgentStreamOptions (required)

messages: ModelMessage[]
Conversation history in AI SDK format. Each message has:
- role: 'user', 'assistant', or 'system'
- content: String or array of content parts

writable: WritableStream&lt;UIMessageChunk&gt; (required)
Stream to write response chunks to. Use getWritable() from the workflow package.
system: string
Override the system prompt for this request.
keepStreamOpen: boolean
Keep the stream open after completion (useful for multiple writes).

sendStart: boolean
Send a 'start' chunk at the beginning of the stream.

sendFinish: boolean
Send a 'finish' chunk at the end of the stream.
maxSteps: number
Maximum number of sequential LLM calls. Prevents infinite loops.
stopWhen: StopCondition | StopCondition[]
Conditions to stop generation early (e.g., when specific tools are called).
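The exact StopCondition shape is not spelled out here; as a hedged sketch modeled on the AI SDK's step-based conditions, a predicate that halts once a hypothetical `submitAnswer` tool is called might look like:

```typescript
// Hypothetical sketch: a stop condition that halts generation once a
// `submitAnswer` tool call appears in any completed step. The ({ steps })
// argument shape is an assumption modeled on the AI SDK's StopCondition;
// verify against the actual API before relying on it.
type StepLike = { toolCalls: { toolName: string }[] };

const stopOnSubmit = ({ steps }: { steps: StepLike[] }) =>
  steps.some((step) =>
    step.toolCalls.some((call) => call.toolName === 'submitAnswer'),
  );

// Would then be passed as: stopWhen: stopOnSubmit
console.log(
  stopOnSubmit({ steps: [{ toolCalls: [{ toolName: 'submitAnswer' }] }] }),
); // true
```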
toolChoice: ToolChoice
Override the tool selection strategy for this request.

activeTools: string[]
Limit the available tools to this subset.

experimental_output: Output
Parse structured output from the response. Use Output.object({ schema }) or Output.text().

collectUIMessages: boolean
Accumulate UIMessage[] during streaming. The result will include a uiMessages property.

includeRawChunks: boolean
Include raw provider chunks in the stream for advanced use cases.
prepareStep
Callback before each LLM call. Use for context management or dynamic configuration.

```typescript
prepareStep: async ({ messages, stepNumber }) => {
  // Inject messages, change model, etc.
  return { messages: [...messages, ...injectedMessages] };
},
```
onStepFinish: StreamTextOnStepFinishCallback
Called after each LLM step completes.

onFinish: StreamTextOnFinishCallback
Called when all steps complete successfully.

onError: StreamTextOnErrorCallback
Called when an error occurs during streaming.

onAbort: StreamTextOnAbortCallback
Called when the operation is aborted.
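For instance, a pair of logging callbacks could be defined as below. The payload shapes here are assumptions (a trimmed subset modeled on the AI SDK's streamText callback arguments), not the full callback types:

```typescript
// Sketch of logging callbacks to pass to stream(). The payload shapes are
// assumptions (subset of the AI SDK's callback arguments), not the full types.
const onStepFinish = (step: { usage: { totalTokens: number } }) => {
  console.log(`step finished, ${step.usage.totalTokens} tokens used`);
};

const onError = ({ error }: { error: unknown }) => {
  console.error('stream error:', error);
};

// These would be passed alongside messages/writable in the stream() options.
onStepFinish({ usage: { totalTokens: 128 } });
```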
experimental_context: unknown
Context passed to tool execution functions.

experimental_repairToolCall
Function to repair failed tool call parsing.

experimental_transform: StreamTextTransform | StreamTextTransform[]
Custom stream transformations.
experimental_download
Custom URL download handler.
Returns: Promise&lt;DurableAgentStreamResult&gt;

messages: ModelMessage[]
Final conversation messages, including all tool calls and results.

steps: StepResult[]
Details for each LLM step executed during the stream.

experimental_output
Parsed structured output (only when experimental_output is specified).

uiMessages: UIMessage[]
Accumulated UI messages (only when collectUIMessages: true).
Examples
Basic Usage
```typescript
import { DurableAgent } from '@workflow/ai';
import { anthropic } from '@workflow/ai/providers/anthropic';
import { getWritable } from 'workflow';

export async function chat() {
  'use workflow';

  const agent = new DurableAgent({
    model: anthropic({ apiKey: process.env.ANTHROPIC_API_KEY })('claude-3-5-sonnet-20241022'),
    system: 'You are a helpful coding assistant.',
    temperature: 0.7,
  });

  const result = await agent.stream({
    messages: [
      { role: 'user', content: 'Explain how async/await works in JavaScript' },
    ],
    writable: getWritable(),
  });

  console.log('Generated', result.steps.length, 'steps');
}
```
Using Tools

```typescript
import { DurableAgent } from '@workflow/ai';
import { openai } from '@workflow/ai/providers/openai';
import { getWritable } from 'workflow';
import { z } from 'zod';

async function searchDatabase(query: string) {
  'use step';
  // This runs as a durable step with automatic retries
  const results = await db.search(query);
  return results;
}

export async function assistantWorkflow() {
  'use workflow';

  const agent = new DurableAgent({
    model: openai({ apiKey: process.env.OPENAI_API_KEY })('gpt-4o'),
    tools: {
      searchDatabase: {
        description: 'Search the customer database',
        inputSchema: z.object({
          query: z.string().describe('Search query'),
        }),
        execute: async ({ query }) => searchDatabase(query),
      },
    },
  });

  await agent.stream({
    messages: [
      { role: 'user', content: 'Find customers in San Francisco' },
    ],
    writable: getWritable(),
    maxSteps: 5,
  });
}
```
Structured Output
```typescript
import { DurableAgent, Output } from '@workflow/ai';
import { google } from '@workflow/ai/providers/google';
import { getWritable } from 'workflow';
import { z } from 'zod';

export async function analyzeSentiment() {
  'use workflow';

  const agent = new DurableAgent({
    model: google({ apiKey: process.env.GOOGLE_API_KEY })('gemini-2.0-flash-exp'),
  });

  const result = await agent.stream({
    messages: [
      { role: 'user', content: 'This product is amazing! I love it.' },
    ],
    writable: getWritable(),
    experimental_output: Output.object({
      schema: z.object({
        sentiment: z.enum(['positive', 'negative', 'neutral']),
        confidence: z.number().min(0).max(1),
        reasoning: z.string(),
      }),
    }),
  });

  console.log(result.experimental_output);
  // { sentiment: 'positive', confidence: 0.95, reasoning: '...' }
}
```
Dynamic Context Management
```typescript
import { DurableAgent } from '@workflow/ai';
import { anthropic } from '@workflow/ai/providers/anthropic';
import { getWritable } from 'workflow';

export async function contextualChat() {
  'use workflow';

  const agent = new DurableAgent({
    model: anthropic({ apiKey: process.env.ANTHROPIC_API_KEY })('claude-3-5-sonnet-20241022'),
  });

  await agent.stream({
    messages: [
      { role: 'user', content: 'Help me with my code' },
    ],
    writable: getWritable(),
    prepareStep: async ({ messages, stepNumber }) => {
      // Inject context from external sources before each LLM call
      if (stepNumber === 0) {
        const context = await loadUserContext();
        return {
          messages: [
            { role: 'system', content: `User context: ${context}` },
            ...messages,
          ],
        };
      }
      return {};
    },
  });
}
```
Type Definitions
ToolSet

```typescript
type ToolSet = Record<string, {
  description: string;
  inputSchema: ZodSchema;
  execute?: (input: any, context: {
    toolCallId: string;
    messages: ModelMessage[];
    experimental_context?: unknown;
  }) => Promise<any> | any;
}>;
```
ModelMessage
```typescript
type ModelMessage = {
  role: 'user' | 'assistant' | 'system';
  content: string | Array<{
    type: 'text' | 'image';
    text?: string;
    image?: string | Uint8Array | URL;
  }>;
};
```
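For example, a conversation mixing a plain-text system message with a text-plus-image user message conforms to this type as follows (the image URL is a placeholder, and the type is copied locally so the sketch is self-contained):

```typescript
// Building messages that conform to ModelMessage. The image URL is a
// placeholder; the type below mirrors the definition above.
type ModelMessage = {
  role: 'user' | 'assistant' | 'system';
  content: string | Array<{
    type: 'text' | 'image';
    text?: string;
    image?: string | Uint8Array | URL;
  }>;
};

const history: ModelMessage[] = [
  { role: 'system', content: 'You are a concise assistant.' },
  {
    role: 'user',
    content: [
      { type: 'text', text: 'What is in this picture?' },
      { type: 'image', image: new URL('https://example.com/photo.png') },
    ],
  },
];

console.log(history.length); // 2
```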
StepResult
```typescript
type StepResult = {
  text: string;
  toolCalls: ToolCall[];
  toolResults: ToolResult[];
  finishReason: 'stop' | 'length' | 'content-filter' | 'tool-calls' | 'error' | 'other';
  usage: {
    promptTokens: number;
    completionTokens: number;
    totalTokens: number;
  };
  response: {
    id: string;
    model: string;
    timestamp: Date;
  };
};
```
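Since result.steps is an array of these, per-step usage can be accumulated into a run total. A small sketch, with made-up usage numbers and the type trimmed to the usage field actually used:

```typescript
// Summing token usage across steps. The usage numbers are made up, and the
// Usage type is trimmed to the fields used here.
type Usage = { promptTokens: number; completionTokens: number; totalTokens: number };

const totalUsage = (steps: { usage: Usage }[]): Usage =>
  steps.reduce(
    (acc, { usage }) => ({
      promptTokens: acc.promptTokens + usage.promptTokens,
      completionTokens: acc.completionTokens + usage.completionTokens,
      totalTokens: acc.totalTokens + usage.totalTokens,
    }),
    { promptTokens: 0, completionTokens: 0, totalTokens: 0 },
  );

const steps = [
  { usage: { promptTokens: 120, completionTokens: 40, totalTokens: 160 } },
  { usage: { promptTokens: 180, completionTokens: 60, totalTokens: 240 } },
];

console.log(totalUsage(steps));
// { promptTokens: 300, completionTokens: 100, totalTokens: 400 }
```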
Best Practices
- Use workflow steps for tools: Mark tool execute functions with 'use step' for automatic retries and durability
- Set maxSteps: Always set a reasonable maxSteps limit to prevent infinite loops
- Handle errors gracefully: Use the onError callback to log and handle errors appropriately
- Manage context size: Use prepareStep to inject or remove messages dynamically and manage the context window
- Stream to the client: Always use getWritable() to stream responses for better UX
- Choose the right model: Use prepareStep to switch models based on task complexity