Quick Start
Set a default model when initializing Stagehand:
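For example (a sketch assuming the @browserbasehq/stagehand package; constructor options other than modelName are illustrative):

```typescript
import { Stagehand } from "@browserbasehq/stagehand";

// The default model is used for every act/extract/observe call.
const stagehand = new Stagehand({
  env: "LOCAL",
  modelName: "gpt-4o",
  // assumed option for passing provider credentials to the model client
  modelClientOptions: { apiKey: process.env.OPENAI_API_KEY },
});

await stagehand.init();
```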
Model Configuration Options

You can configure models in two ways:

1. String Model Name

Use a simple string for supported models:
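For example (sketch):

```typescript
import { Stagehand } from "@browserbasehq/stagehand";

// The model name alone is enough when the provider can be inferred from it.
const stagehand = new Stagehand({ modelName: "claude-3-5-sonnet-latest" });
```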
2. Model Configuration Object

For advanced configuration, use an object:
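A sketch of the object form (field names beyond modelName are assumptions; check the current Stagehand typings):

```typescript
import { Stagehand } from "@browserbasehq/stagehand";

const stagehand = new Stagehand({
  modelName: "gpt-4.1",
  // assumed pass-through options for the underlying provider client
  modelClientOptions: {
    apiKey: process.env.OPENAI_API_KEY,
    baseURL: "https://api.openai.com/v1",
    organization: process.env.OPENAI_ORG_ID,
  },
});
```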
Supported Model Providers

OpenAI

Stagehand supports all OpenAI models:

- gpt-4.1 - Latest GPT-4.1 model
- gpt-4.1-mini - Fast, cost-efficient GPT-4.1
- gpt-4.1-nano - Ultra-fast, minimal cost
- gpt-4o - GPT-4 Optimized
- gpt-4o-mini - GPT-4 Optimized Mini
- gpt-4o-2024-08-06 - Specific version
- gpt-4.5-preview - Latest preview
- o1 - OpenAI O1 model
- o1-mini - OpenAI O1 Mini
- o1-preview - OpenAI O1 Preview
- o3 - OpenAI O3 model
- o3-mini - OpenAI O3 Mini
- o4-mini - OpenAI O4 Mini

Anthropic

Use Claude models from Anthropic:

- claude-3-7-sonnet-latest - Latest Claude 3.7 Sonnet
- claude-3-7-sonnet-20250219 - Claude 3.7 Sonnet (Feb 2025)
- claude-3-5-sonnet-latest - Latest Claude 3.5 Sonnet
- claude-3-5-sonnet-20241022 - Claude 3.5 Sonnet (Oct 2024)
- claude-3-5-sonnet-20240620 - Claude 3.5 Sonnet (June 2024)

Claude models support extended thinking via the thinkingBudget parameter for complex reasoning tasks.

Google Gemini

Use Google’s Gemini models:

- gemini-2.5-flash-preview-04-17 - Latest Gemini 2.5 Flash Preview
- gemini-2.5-pro-preview-03-25 - Latest Gemini 2.5 Pro Preview
- gemini-2.0-flash - Gemini 2.0 Flash
- gemini-2.0-flash-lite - Lightweight Gemini 2.0
- gemini-1.5-pro - Gemini 1.5 Pro
- gemini-1.5-flash - Gemini 1.5 Flash
- gemini-1.5-flash-8b - Compact Gemini 1.5
Google Vertex AI
For Google Vertex AI with service account credentials:
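A sketch, assuming Vertex AI is configured through client options (the field names below are hypothetical; verify against the Stagehand typings):

```typescript
import { Stagehand } from "@browserbasehq/stagehand";

const stagehand = new Stagehand({
  modelName: "gemini-1.5-pro",
  modelClientOptions: {
    // hypothetical fields: Vertex AI authenticates with a GCP project
    // and service account credentials rather than a bare API key
    projectId: "my-gcp-project",
    location: "us-central1",
    credentialsFile: process.env.GOOGLE_APPLICATION_CREDENTIALS,
  },
});
```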
Cerebras

Use Cerebras for ultra-fast inference:

- cerebras-llama-3.3-70b - Llama 3.3 70B
- cerebras-llama-3.1-8b - Llama 3.1 8B

Groq

Use Groq for fast inference:

- groq-llama-3.3-70b-versatile - Llama 3.3 70B Versatile
- groq-llama-3.3-70b-specdec - Llama 3.3 70B SpecDec
Advanced Configuration
Custom Base URL
Use OpenAI-compatible APIs:
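For example (sketch; the placement of baseURL inside the client options is an assumption):

```typescript
import { Stagehand } from "@browserbasehq/stagehand";

// Any OpenAI-compatible server works by overriding the base URL.
const stagehand = new Stagehand({
  modelName: "gpt-4o-mini",
  modelClientOptions: {
    baseURL: "http://localhost:11434/v1", // e.g. a local Ollama endpoint
    apiKey: "ollama", // many local servers accept any non-empty key
  },
});
```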
Temperature Control

Adjust model creativity (0.0 = deterministic, 2.0 = creative):
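For example (sketch; where temperature is set is an assumption — it may also be a per-request option):

```typescript
import { Stagehand } from "@browserbasehq/stagehand";

const stagehand = new Stagehand({
  modelName: "gpt-4o",
  modelClientOptions: {
    apiKey: process.env.OPENAI_API_KEY,
    temperature: 0, // deterministic; values toward 2.0 increase variety
  },
});
```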
Organization ID (OpenAI)

Specify OpenAI organization:
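For example (sketch; the organization field mirrors the OpenAI client option of the same name):

```typescript
import { Stagehand } from "@browserbasehq/stagehand";

const stagehand = new Stagehand({
  modelName: "gpt-4o",
  modelClientOptions: {
    apiKey: process.env.OPENAI_API_KEY,
    organization: process.env.OPENAI_ORG_ID, // your OpenAI organization ID
  },
});
```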
Extended Thinking (Anthropic)

Enable extended thinking for complex reasoning:
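For example (sketch; thinkingBudget is the parameter named above, and its placement here is an assumption):

```typescript
import { Stagehand } from "@browserbasehq/stagehand";

const stagehand = new Stagehand({
  modelName: "claude-3-7-sonnet-latest",
  modelClientOptions: {
    apiKey: process.env.ANTHROPIC_API_KEY,
    thinkingBudget: 8192, // max tokens Claude may spend on internal reasoning
  },
});
```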
Per-Operation Model Override

Override the model for specific operations:
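For example, a cheap default model for routine steps with a stronger model for one demanding extraction (sketch; the per-call modelName option is an assumption):

```typescript
import { Stagehand } from "@browserbasehq/stagehand";
import { z } from "zod";

// Cheap default for routine navigation and clicks...
const stagehand = new Stagehand({ modelName: "gpt-4o-mini" });
await stagehand.init();

// ...and a stronger model for a single demanding extraction.
const pricing = await stagehand.page.extract({
  instruction: "extract the plan name and monthly price",
  schema: z.object({ plan: z.string(), price: z.string() }),
  modelName: "claude-3-7-sonnet-latest",
});
```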
Custom LLM Client

Provide your own LLM client implementation:
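Sketch only — the exact interface a custom client must implement (its name and method signatures) is an assumption; consult the Stagehand source for the real contract:

```typescript
import { Stagehand } from "@browserbasehq/stagehand";

class ProxyLLMClient {
  // hypothetical method shape: receive a chat request, return a completion
  async createChatCompletion(request: unknown): Promise<unknown> {
    // e.g. forward to an internal gateway, add caching or audit logging
    throw new Error("not implemented");
  }
}

const stagehand = new Stagehand({
  // assumed constructor option accepting the custom client
  llmClient: new ProxyLLMClient() as any,
});
```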
Environment Variables

Set API keys via environment variables:
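For example, using the conventional provider variable names (which names Stagehand actually reads is worth verifying):

```shell
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="..."
export GROQ_API_KEY="..."
export CEREBRAS_API_KEY="..."
```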
Model Selection Best Practices
Fast Actions
Use lightweight models for simple clicks and navigation:
- gpt-4o-mini
- gpt-4.1-nano
- gemini-2.0-flash-lite
Complex Extraction
Use powerful models for data extraction:
- claude-3-7-sonnet-latest
- gpt-4.1
- gemini-2.5-pro-preview-03-25
Cost Optimization
Balance performance and cost:
- Use mini/nano models for repetitive tasks
- Cache agent actions to reduce LLM calls
- Use temperature: 0 for deterministic results
Reasoning Tasks
Use reasoning-focused models:
- o1 / o3 series for complex logic
- claude-3-7-sonnet-latest with thinkingBudget