Overview

DurableAgent is a class for building AI agents that maintain state across workflow executions. It wraps AI model providers with durable execution capabilities, ensuring that your AI agents can survive interruptions, handle long-running operations, and automatically recover from failures.

Constructor

const agent = new DurableAgent(options)
- `options` (`DurableAgentOptions`, required): Configuration for the durable agent.

Methods

stream()

Streams AI responses with tool execution and state management.
const result = await agent.stream(options)
- `options` (`DurableAgentStreamOptions`, required)

Returns: `Promise<DurableAgentStreamResult>` with the following fields:

- `messages` (`ModelMessage[]`): Final conversation messages, including all tool calls and results.
- `steps` (`StepResult[]`): Details for each LLM step executed during the stream.
- `experimental_output` (`OUTPUT`): Parsed structured output (only when `experimental_output` is specified).
- `uiMessages` (`UIMessage[]`): Accumulated UI messages (only when `collectUIMessages: true`).

Examples

Basic Usage

import { DurableAgent } from '@workflow/ai';
import { anthropic } from '@workflow/ai/providers/anthropic';
import { getWritable } from 'workflow';

export async function chat() {
  'use workflow';

  const agent = new DurableAgent({
    model: anthropic({ apiKey: process.env.ANTHROPIC_API_KEY })('claude-3-5-sonnet-20241022'),
    system: 'You are a helpful coding assistant.',
    temperature: 0.7,
  });

  const result = await agent.stream({
    messages: [
      { role: 'user', content: 'Explain how async/await works in JavaScript' },
    ],
    writable: getWritable(),
  });

  console.log('Generated', result.steps.length, 'steps');
}

With Tools

import { DurableAgent } from '@workflow/ai';
import { openai } from '@workflow/ai/providers/openai';
import { getWritable } from 'workflow';
import { z } from 'zod';

async function searchDatabase(query: string) {
  'use step';
  // This runs as a durable step with automatic retries
  const results = await db.search(query);
  return results;
}

export async function assistantWorkflow() {
  'use workflow';

  const agent = new DurableAgent({
    model: openai({ apiKey: process.env.OPENAI_API_KEY })('gpt-4o'),
    tools: {
      searchDatabase: {
        description: 'Search the customer database',
        inputSchema: z.object({
          query: z.string().describe('Search query'),
        }),
        execute: async ({ query }) => searchDatabase(query),
      },
    },
  });

  await agent.stream({
    messages: [
      { role: 'user', content: 'Find customers in San Francisco' },
    ],
    writable: getWritable(),
    maxSteps: 5,
  });
}

Structured Output

import { DurableAgent, Output } from '@workflow/ai';
import { google } from '@workflow/ai/providers/google';
import { getWritable } from 'workflow';
import { z } from 'zod';

export async function analyzeSentiment() {
  'use workflow';

  const agent = new DurableAgent({
    model: google({ apiKey: process.env.GOOGLE_API_KEY })('gemini-2.0-flash-exp'),
  });

  const result = await agent.stream({
    messages: [
      { role: 'user', content: 'This product is amazing! I love it.' },
    ],
    writable: getWritable(),
    experimental_output: Output.object({
      schema: z.object({
        sentiment: z.enum(['positive', 'negative', 'neutral']),
        confidence: z.number().min(0).max(1),
        reasoning: z.string(),
      }),
    }),
  });

  console.log(result.experimental_output);
  // { sentiment: 'positive', confidence: 0.95, reasoning: '...' }
}

Dynamic Context Management

import { DurableAgent } from '@workflow/ai';
import { anthropic } from '@workflow/ai/providers/anthropic';
import { getWritable } from 'workflow';

export async function contextualChat() {
  'use workflow';

  const agent = new DurableAgent({
    model: anthropic({ apiKey: process.env.ANTHROPIC_API_KEY })('claude-3-5-sonnet-20241022'),
  });

  await agent.stream({
    messages: [
      { role: 'user', content: 'Help me with my code' },
    ],
    writable: getWritable(),
    prepareStep: async ({ messages, stepNumber }) => {
      // Inject context from external sources before each LLM call
      if (stepNumber === 0) {
        const context = await loadUserContext();
        return {
          messages: [
            { role: 'system', content: `User context: ${context}` },
            ...messages,
          ],
        };
      }
      return {};
    },
  });
}

Type Definitions

ToolSet

type ToolSet = Record<string, {
  description: string;
  inputSchema: ZodSchema;
  execute?: (input: any, context: {
    toolCallId: string;
    messages: ModelMessage[];
    experimental_context?: unknown;
  }) => Promise<any> | any;
}>;
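To illustrate how a `ToolSet` entry is consumed, here is a minimal, self-contained sketch of dispatching a model's tool call against a `ToolSet`-shaped record. The zod schema is stubbed with a plain `parse` function so the example has no dependencies; `MiniToolSet`, `StubSchema`, and `dispatch` are illustrative names, not part of the library API.

```typescript
// Stub standing in for a zod schema: anything with a parse() method.
type StubSchema<T> = { parse: (input: unknown) => T };

type MiniToolSet = Record<string, {
  description: string;
  inputSchema: StubSchema<any>;
  execute?: (input: any, context: { toolCallId: string }) => Promise<any> | any;
}>;

const tools: MiniToolSet = {
  add: {
    description: 'Add two numbers',
    inputSchema: {
      parse: (input: unknown) => {
        const { a, b } = input as { a: number; b: number };
        if (typeof a !== 'number' || typeof b !== 'number') {
          throw new Error('invalid input');
        }
        return { a, b };
      },
    },
    execute: ({ a, b }) => a + b,
  },
};

// Dispatch one tool call: validate the raw input, then run execute.
async function dispatch(toolSet: MiniToolSet, name: string, rawInput: unknown) {
  const tool = toolSet[name];
  if (!tool?.execute) throw new Error(`Unknown or non-executable tool: ${name}`);
  const input = tool.inputSchema.parse(rawInput);
  return tool.execute(input, { toolCallId: 'call_1' });
}
```

The agent performs this validate-then-execute loop for you on every tool call; the sketch only shows the shape of the contract.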

ModelMessage

type ModelMessage = {
  role: 'user' | 'assistant' | 'system';
  content: string | Array<{
    type: 'text' | 'image';
    text?: string;
    image?: string | Uint8Array | URL;
  }>;
};
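For reference, here are the two content shapes `ModelMessage` allows: a plain string, or an array of typed parts for multimodal input. The `textOf` helper is an illustrative utility, not a library export.

```typescript
type ModelMessage = {
  role: 'user' | 'assistant' | 'system';
  content: string | Array<{
    type: 'text' | 'image';
    text?: string;
    image?: string | Uint8Array | URL;
  }>;
};

// Plain-string content.
const textOnly: ModelMessage = {
  role: 'user',
  content: 'Summarize this diagram',
};

// Multimodal content: text part plus an image part.
const multimodal: ModelMessage = {
  role: 'user',
  content: [
    { type: 'text', text: 'Summarize this diagram' },
    { type: 'image', image: new URL('https://example.com/diagram.png') },
  ],
};

// Pull just the text out of either shape.
function textOf(message: ModelMessage): string {
  if (typeof message.content === 'string') return message.content;
  return message.content
    .filter((part) => part.type === 'text')
    .map((part) => part.text ?? '')
    .join(' ');
}
```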

StepResult

type StepResult = {
  text: string;
  toolCalls: ToolCall[];
  toolResults: ToolResult[];
  finishReason: 'stop' | 'length' | 'content-filter' | 'tool-calls' | 'error' | 'other';
  usage: {
    promptTokens: number;
    completionTokens: number;
    totalTokens: number;
  };
  response: {
    id: string;
    model: string;
    timestamp: Date;
  };
};
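A common use of `StepResult` is totaling token usage across all steps returned by `stream()`. A self-contained sketch (only the fields needed for the calculation are typed here):

```typescript
type Usage = { promptTokens: number; completionTokens: number; totalTokens: number };
type StepLike = { usage: Usage; finishReason: string };

// Sum usage over every LLM step in the stream.
function totalUsage(steps: StepLike[]): Usage {
  return steps.reduce(
    (acc, step) => ({
      promptTokens: acc.promptTokens + step.usage.promptTokens,
      completionTokens: acc.completionTokens + step.usage.completionTokens,
      totalTokens: acc.totalTokens + step.usage.totalTokens,
    }),
    { promptTokens: 0, completionTokens: 0, totalTokens: 0 },
  );
}

const steps: StepLike[] = [
  { usage: { promptTokens: 120, completionTokens: 40, totalTokens: 160 }, finishReason: 'tool-calls' },
  { usage: { promptTokens: 180, completionTokens: 60, totalTokens: 240 }, finishReason: 'stop' },
];

// totalUsage(steps)
// → { promptTokens: 300, completionTokens: 100, totalTokens: 400 }
```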

Best Practices

  1. Use workflow steps for tools: Mark tool execute functions with 'use step' for automatic retries and durability
  2. Set maxSteps: Always set a reasonable maxSteps limit to prevent infinite loops
  3. Handle errors gracefully: Use onError callback to log and handle errors appropriately
  4. Manage context size: Use prepareStep to inject/remove messages dynamically and manage context window
  5. Stream to the client: Always use getWritable() to stream responses for better UX
  6. Choose the right model: Use prepareStep to switch models based on task complexity
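The context-management practice above (point 4) can be sketched as a plain trimming function you might call from `prepareStep`: keep the system prompt, drop older turns, retain only the most recent N messages. `trimContext` is an illustrative helper under those assumptions, not a library export.

```typescript
type Msg = { role: 'user' | 'assistant' | 'system'; content: string };

// Keep all system messages, plus only the last `keepLast` non-system turns.
function trimContext(messages: Msg[], keepLast: number): Msg[] {
  const system = messages.filter((m) => m.role === 'system');
  const rest = messages.filter((m) => m.role !== 'system');
  return [...system, ...rest.slice(-keepLast)];
}

const history: Msg[] = [
  { role: 'system', content: 'You are a helpful coding assistant.' },
  { role: 'user', content: 'turn 1' },
  { role: 'assistant', content: 'reply 1' },
  { role: 'user', content: 'turn 2' },
  { role: 'assistant', content: 'reply 2' },
  { role: 'user', content: 'turn 3' },
];

// trimContext(history, 3) keeps the system prompt plus the 3 most recent messages.
```

Returning `{ messages: trimContext(messages, 20) }` from `prepareStep` would bound the context window while preserving the system prompt.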
