The Workflow AI SDK provides provider functions that wrap AI SDK providers with durable execution capabilities. Each provider function creates a step function that returns a language model instance, ensuring proper state management in workflow contexts.

Overview

All provider functions follow the same pattern:
import { provider } from '@workflow/ai/providers/{provider}';

const modelFunction = provider(options);
const agent = new DurableAgent({
  model: modelFunction(modelId),
});
The provider wrappers automatically mark the model instantiation as a workflow step, ensuring durability.
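The curried shape of this pattern can be seen in a simplified, self-contained sketch. The `makeProvider` helper and `LanguageModel` type below are illustrative stand-ins, not part of the SDK:

```typescript
// Simplified sketch of the wrapper pattern: provider(options) returns a
// function that maps a model ID to a model instance.
type LanguageModel = { provider: string; modelId: string };

function makeProvider(name: string) {
  return (options?: { apiKey?: string }) =>
    async (modelId: string): Promise<LanguageModel> => {
      // In the real SDK, this instantiation is recorded as a workflow step.
      return { provider: name, modelId };
    };
}

const demo = makeProvider('anthropic')({ apiKey: 'example-key' });
demo('claude-3-5-sonnet-20241022').then((m) => {
  console.log(m.modelId); // claude-3-5-sonnet-20241022
});
```

Configuring the provider once and reusing the returned factory for several model IDs keeps credentials in one place.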

Anthropic

Integrate with Claude models from Anthropic.

Import

import { anthropic } from '@workflow/ai/providers/anthropic';

Usage

import { DurableAgent } from '@workflow/ai';
import { anthropic } from '@workflow/ai/providers/anthropic';

const agent = new DurableAgent({
  model: anthropic({
    apiKey: process.env.ANTHROPIC_API_KEY,
  })('claude-3-5-sonnet-20241022'),
});

Signature

function anthropic(options?: {
  apiKey?: string;
  baseURL?: string;
  headers?: Record<string, string>;
}): (modelId: string) => Promise<LanguageModel>
Parameters: options (object, optional) — provider configuration.
Returns: a function that accepts a model ID and returns a Promise resolving to a LanguageModel.

Available Models

  • claude-3-5-sonnet-20241022 - Latest Sonnet (most balanced)
  • claude-3-5-haiku-20241022 - Fast and efficient
  • claude-3-opus-20240229 - Most capable (legacy)
  • claude-3-sonnet-20240229 - Balanced (legacy)
  • claude-3-haiku-20240307 - Fast (legacy)
See Anthropic’s model documentation for the latest models.

Example

import { DurableAgent } from '@workflow/ai';
import { anthropic } from '@workflow/ai/providers/anthropic';
import { getWritable } from 'workflow';

export async function claudeWorkflow() {
  'use workflow';

  const agent = new DurableAgent({
    model: anthropic({
      apiKey: process.env.ANTHROPIC_API_KEY,
    })('claude-3-5-sonnet-20241022'),
    system: 'You are a helpful assistant.',
    temperature: 0.7,
  });

  await agent.stream({
    messages: [{ role: 'user', content: 'Hello!' }],
    writable: getWritable(),
  });
}

OpenAI

Integrate with GPT models from OpenAI.

Import

import { openai } from '@workflow/ai/providers/openai';

Usage

import { DurableAgent } from '@workflow/ai';
import { openai } from '@workflow/ai/providers/openai';

const agent = new DurableAgent({
  model: openai({
    apiKey: process.env.OPENAI_API_KEY,
  })('gpt-4o'),
});

Signature

function openai(options?: {
  apiKey?: string;
  organization?: string;
  project?: string;
  baseURL?: string;
  headers?: Record<string, string>;
}): (modelId: string) => Promise<LanguageModel>
Parameters: options (object, optional) — provider configuration.
Returns: a function that accepts a model ID and returns a Promise resolving to a LanguageModel.

Available Models

  • gpt-4o - Latest and most capable GPT-4 model
  • gpt-4o-mini - Affordable and intelligent small model
  • gpt-4-turbo - Previous flagship model
  • gpt-3.5-turbo - Fast and cost-effective
  • o1-preview - Advanced reasoning (limited features)
  • o1-mini - Fast reasoning model
See OpenAI’s model documentation for the latest models.

Example

import { DurableAgent } from '@workflow/ai';
import { openai } from '@workflow/ai/providers/openai';
import { getWritable } from 'workflow';
import { z } from 'zod';

export async function gptWorkflow() {
  'use workflow';

  const agent = new DurableAgent({
    model: openai({
      apiKey: process.env.OPENAI_API_KEY,
    })('gpt-4o'),
    tools: {
      calculate: {
        description: 'Perform a calculation',
        inputSchema: z.object({
          expression: z.string(),
        }),
        execute: async ({ expression }) => {
          'use step';
          // Note: eval is used here for brevity only. Never eval untrusted
          // input in production; use a math expression parser instead.
          return eval(expression);
        },
      },
    },
  });

  await agent.stream({
    messages: [{ role: 'user', content: 'What is 12 * 34?' }],
    writable: getWritable(),
  });
}

Google AI

Integrate with Gemini models from Google.

Import

import { google } from '@workflow/ai/providers/google';

Usage

import { DurableAgent } from '@workflow/ai';
import { google } from '@workflow/ai/providers/google';

const agent = new DurableAgent({
  model: google({
    apiKey: process.env.GOOGLE_API_KEY,
  })('gemini-2.0-flash-exp'),
});

Signature

function google(options?: {
  apiKey?: string;
  baseURL?: string;
  headers?: Record<string, string>;
}): (modelId: string) => Promise<LanguageModel>
Parameters: options (object, optional) — provider configuration.
Returns: a function that accepts a model ID and returns a Promise resolving to a LanguageModel.

Available Models

  • gemini-2.0-flash-exp - Latest experimental Flash model
  • gemini-1.5-pro - Most capable production model
  • gemini-1.5-flash - Fast and efficient
  • gemini-1.0-pro - Legacy production model
See Google’s model documentation for the latest models.

Example

import { DurableAgent } from '@workflow/ai';
import { google } from '@workflow/ai/providers/google';
import { getWritable } from 'workflow';

export async function geminiWorkflow() {
  'use workflow';

  const agent = new DurableAgent({
    model: google({
      apiKey: process.env.GOOGLE_API_KEY,
    })('gemini-2.0-flash-exp'),
    system: 'You are a creative writing assistant.',
  });

  await agent.stream({
    messages: [
      { role: 'user', content: 'Write a short story about a robot.' },
    ],
    writable: getWritable(),
  });
}

xAI

Integrate with Grok models from xAI.

Import

import { xai } from '@workflow/ai/providers/xai';

Usage

import { DurableAgent } from '@workflow/ai';
import { xai } from '@workflow/ai/providers/xai';

const agent = new DurableAgent({
  model: xai({
    apiKey: process.env.XAI_API_KEY,
  })('grok-2-latest'),
});

Signature

function xai(options?: {
  apiKey?: string;
  baseURL?: string;
  headers?: Record<string, string>;
}): (modelId: string) => Promise<LanguageModel>
Parameters: options (object, optional) — provider configuration.
Returns: a function that accepts a model ID and returns a Promise resolving to a LanguageModel.

Available Models

  • grok-2-latest - Latest Grok 2 model
  • grok-2-1212 - Grok 2 from December 2024
  • grok-beta - Beta version with latest features
See xAI’s documentation for the latest models.

Example

import { DurableAgent } from '@workflow/ai';
import { xai } from '@workflow/ai/providers/xai';
import { getWritable } from 'workflow';

export async function grokWorkflow() {
  'use workflow';

  const agent = new DurableAgent({
    model: xai({
      apiKey: process.env.XAI_API_KEY,
    })('grok-2-latest'),
    temperature: 0.8,
  });

  await agent.stream({
    messages: [
      { role: 'user', content: 'Explain quantum computing in simple terms' },
    ],
    writable: getWritable(),
  });
}

Using AI Gateway

Instead of provider functions, you can use Vercel AI Gateway with a string model identifier:
import { DurableAgent } from '@workflow/ai';

const agent = new DurableAgent({
  model: 'anthropic/claude-3-5-sonnet-20241022',
  // AI Gateway handles authentication and routing
});
This approach:
  • Centralizes API key management
  • Provides built-in caching and rate limiting
  • Enables easy provider switching
  • Requires AI Gateway configuration
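Gateway model identifiers encode the routing information in the string itself, using the form 'provider/model-id'. The tiny parser below is illustrative, not part of the SDK, but shows why switching providers is a one-string change:

```typescript
// Sketch: split a gateway-style identifier into its provider and model parts.
function parseGatewayModel(id: string): { provider: string; model: string } {
  const slash = id.indexOf('/');
  if (slash === -1) throw new Error(`Expected 'provider/model-id', got: ${id}`);
  return { provider: id.slice(0, slash), model: id.slice(slash + 1) };
}

console.log(parseGatewayModel('anthropic/claude-3-5-sonnet-20241022'));
// { provider: 'anthropic', model: 'claude-3-5-sonnet-20241022' }
```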

Provider Comparison

Provider   | Strengths                       | Best For
-----------|---------------------------------|--------------------------------------
Anthropic  | Long context, safety, reasoning | Complex analysis, content moderation
OpenAI     | General capabilities, ecosystem | Wide range of tasks, familiar API
Google     | Multimodal, speed               | Image/video analysis, fast responses
xAI        | Reasoning, real-time knowledge  | Problem-solving, current events

Best Practices

  1. Use environment variables: Store API keys in environment variables, never hardcode them
  2. Choose the right model: Use smaller/faster models for simple tasks, larger models for complex reasoning
  3. Set appropriate limits: Configure maxOutputTokens based on your use case
  4. Handle rate limits: Workflow steps automatically retry on rate limits
  5. Monitor costs: Track token usage through step results and telemetry
  6. Test locally: Use local development mode to test without consuming API credits
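Practice 2 can be made concrete with a small routing helper. The model IDs come from the lists above; the helper itself is hypothetical, not part of the SDK:

```typescript
// Illustrative helper: route simple tasks to smaller, cheaper models and
// complex tasks to stronger ones.
type Complexity = 'simple' | 'complex';

const MODEL_TIERS = {
  openai: { simple: 'gpt-4o-mini', complex: 'gpt-4o' },
  anthropic: {
    simple: 'claude-3-5-haiku-20241022',
    complex: 'claude-3-5-sonnet-20241022',
  },
} as const;

function pickModel(provider: keyof typeof MODEL_TIERS, complexity: Complexity): string {
  return MODEL_TIERS[provider][complexity];
}

console.log(pickModel('anthropic', 'simple')); // claude-3-5-haiku-20241022
```

The same lookup could also fold in `maxOutputTokens` defaults per tier, keeping model choice and output limits in one place.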
