Overview
All provider functions follow the same pattern.
Anthropic
Integrate with Claude models from Anthropic.
Import
Usage
Signature
Available Models
- `claude-3-5-sonnet-20241022` - Latest Sonnet (most balanced)
- `claude-3-5-haiku-20241022` - Fast and efficient
- `claude-3-opus-20240229` - Most capable (legacy)
- `claude-3-sonnet-20240229` - Balanced (legacy)
- `claude-3-haiku-20240307` - Fast (legacy)
Example
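A minimal sketch of calling a Claude model, assuming the Vercel AI SDK provider package `@ai-sdk/anthropic` and the `generateText` helper from the `ai` package (adjust the import paths if this library re-exports the provider under a different name):

```typescript
// Sketch assuming the @ai-sdk/anthropic provider and the `ai` package.
import { anthropic } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

// The provider reads ANTHROPIC_API_KEY from the environment by default.
const { text } = await generateText({
  model: anthropic('claude-3-5-sonnet-20241022'),
  prompt: 'Summarize the key points of this document.',
  maxOutputTokens: 1024,
});
console.log(text);
```

The model identifier string is one of the values listed under Available Models above.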
OpenAI
Integrate with GPT models from OpenAI.
Import
Usage
Signature
Available Models
- `gpt-4o` - Latest and most capable GPT-4 model
- `gpt-4o-mini` - Affordable and intelligent small model
- `gpt-4-turbo` - Previous flagship model
- `gpt-3.5-turbo` - Fast and cost-effective
- `o1-preview` - Advanced reasoning (limited features)
- `o1-mini` - Fast reasoning model
Example
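A sketch of the same pattern for OpenAI, assuming the `@ai-sdk/openai` provider package:

```typescript
// Sketch assuming the @ai-sdk/openai provider and the `ai` package.
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

// The provider reads OPENAI_API_KEY from the environment by default.
const { text } = await generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'Classify the sentiment of this review: "Great product!"',
});
console.log(text);
```

Swapping providers only changes the import and the model factory call; the surrounding code stays the same.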
Google AI
Integrate with Gemini models from Google.
Import
Usage
Signature
Available Models
- `gemini-2.0-flash-exp` - Latest experimental Flash model
- `gemini-1.5-pro` - Most capable production model
- `gemini-1.5-flash` - Fast and efficient
- `gemini-1.0-pro` - Legacy production model
Example
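A sketch for Google AI, assuming the `@ai-sdk/google` provider package:

```typescript
// Sketch assuming the @ai-sdk/google provider and the `ai` package.
import { google } from '@ai-sdk/google';
import { generateText } from 'ai';

// The provider reads GOOGLE_GENERATIVE_AI_API_KEY from the environment by default.
const { text } = await generateText({
  model: google('gemini-1.5-flash'),
  prompt: 'Describe the main subject of the attached image.',
});
console.log(text);
```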
xAI
Integrate with Grok models from xAI.
Import
Usage
Signature
Available Models
- `grok-2-latest` - Latest Grok 2 model
- `grok-2-1212` - Grok 2 from December 2024
- `grok-beta` - Beta version with latest features
Example
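A sketch for xAI, assuming the `@ai-sdk/xai` provider package:

```typescript
// Sketch assuming the @ai-sdk/xai provider and the `ai` package.
import { xai } from '@ai-sdk/xai';
import { generateText } from 'ai';

// The provider reads XAI_API_KEY from the environment by default.
const { text } = await generateText({
  model: xai('grok-2-latest'),
  prompt: 'What are the trade-offs between these two approaches?',
});
console.log(text);
```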
Using AI Gateway
Instead of provider functions, you can use Vercel AI Gateway with a string model identifier:
- Centralizes API key management
- Provides built-in caching and rate limiting
- Enables easy provider switching
- Requires AI Gateway configuration
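With the gateway, the model is named by a plain `provider/model` string rather than a provider function. A minimal sketch (the exact identifier format depends on your AI Gateway configuration):

```typescript
// Sketch: with AI Gateway configured, a string selects provider and model.
import { generateText } from 'ai';

const { text } = await generateText({
  // 'anthropic/claude-3-5-sonnet' is an assumed identifier; check your
  // gateway's model catalog for the exact strings it accepts.
  model: 'anthropic/claude-3-5-sonnet',
  prompt: 'Hello!',
});
console.log(text);
```

Because only the string changes, switching providers requires no import changes.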
Provider Comparison
| Provider | Strengths | Best For |
|---|---|---|
| Anthropic | Long context, safety, reasoning | Complex analysis, content moderation |
| OpenAI | General capabilities, ecosystem | Wide range of tasks, familiar API |
| Google AI | Multimodal, speed | Image/video analysis, fast responses |
| xAI | Reasoning, real-time knowledge | Problem-solving, current events |
Best Practices
- Use environment variables: Store API keys in environment variables, never hardcode them
- Choose the right model: Use smaller/faster models for simple tasks, larger models for complex reasoning
- Set appropriate limits: Configure `maxOutputTokens` based on your use case
- Handle rate limits: Workflow steps automatically retry on rate limits
- Monitor costs: Track token usage through step results and telemetry
- Test locally: Use local development mode to test without consuming API credits
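To illustrate the first practice, a small helper (hypothetical, not part of this library) that reads a key from the environment and fails fast when it is missing:

```typescript
// Hypothetical helper: load an API key from the environment, never hardcode it.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage: const apiKey = requireEnv('ANTHROPIC_API_KEY');
```

Failing at startup with a clear message is easier to debug than an opaque authentication error from the provider mid-workflow.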