## Overview
The `models` command displays all AI models available from your configured providers. Use it to:
- Discover available models
- Find exact model names for configuration
- Filter models by provider
- View model metadata and pricing
## Usage
Models are listed in the `provider/model` format, which is the format used in configuration files and with the `--model` flag.
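Putting the options below together, the basic invocation looks like this (synopsis inferred from the options documented on this page):

```shell
opencode models [provider] [--refresh] [--verbose]
```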
## Options
- `[provider]`: Optional provider ID to filter models. Only shows models from this provider.
- `--refresh`: Refresh the models cache from models.dev. Use this when new models are released.
- `--verbose`: Show detailed model information, including pricing, context windows, and capabilities.
## Examples
### List All Models

Show models from all configured providers:
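```shell
opencode models
```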
### Filter by Provider

Show only Anthropic models:
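Pass the provider ID as a positional argument:

```shell
opencode models anthropic
```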
### Show Detailed Information

View model metadata:
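```shell
opencode models --verbose
```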
### Refresh Model Cache

Update the cached model list from models.dev. Refresh when:

- New models are released
- Model pricing changes
- You add a new provider
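For example:

```shell
opencode models --refresh
```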
## Model Format
All models follow the `provider/model` format:
- `anthropic/claude-4.5-sonnet`
- `openai/gpt-4o`
- `google/gemini-2.0-flash-exp`
- `opencode/claude-max`
## Using Model Names
### In Configuration
Set the default model in `opencode.json`:
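A minimal sketch; the top-level `model` key is an assumption about the config schema, and the model name comes from the format examples above:

```json
{
  "model": "anthropic/claude-4.5-sonnet"
}
```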
### With CLI Flags
Specify a model when starting OpenCode:
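Using the `--model` flag mentioned above:

```shell
opencode --model anthropic/claude-4.5-sonnet
```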
### Model Switching

Switch models mid-session with the `/model` command:
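Inside the TUI, type the slash command (whether it accepts an inline model argument is an assumption, so only the bare form is shown):

```
/model
```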
## Provider Order
Models are displayed in a specific order:

1. OpenCode providers (e.g., `opencode/claude-max`) are shown first
2. Other providers follow, alphabetically by provider ID
## Model Information
With `--verbose`, you can see:
- Context window: Maximum input tokens
- Max tokens: Maximum output tokens
- Pricing: Input and output costs per 1K tokens
- Capabilities: Features like vision, function calling, streaming
- Modalities: Supported input/output types
## Understanding Provider IDs
Common provider IDs:

| Provider | ID | Authentication |
|---|---|---|
| OpenCode | opencode | OpenCode API key |
| Anthropic | anthropic | Claude API key |
| OpenAI | openai | OpenAI API key |
| Google | google | Google AI API key |
| GitHub Copilot | github-copilot | GitHub Copilot subscription |
| OpenRouter | openrouter | OpenRouter API key |
| Vercel | vercel | Vercel API key |
| Amazon Bedrock | amazon-bedrock | AWS credentials |
## Models.dev
OpenCode uses models.dev as the central registry for AI models. This provides:

- Unified model metadata
- Up-to-date pricing information
- Capability and feature information
- Support for 50+ providers
## Cache Management
Model information is cached locally for performance. The cache:

- Updates automatically on first run each day
- Can be manually refreshed with `--refresh`
- Is stored in `~/.local/share/opencode/models.json`
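To peek at the cached data (assuming the path above exists on your system):

```shell
# View the start of the local model cache
head ~/.local/share/opencode/models.json
```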
## Provider Configuration
Before using models from a provider, you must authenticate:
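```shell
opencode auth login
```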
## Filtering and Searching

While the command doesn't have built-in search, you can combine its output with standard Unix tools.

### Search by Name
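A case-insensitive search over the model list (the search term is just an example):

```shell
opencode models | grep -i claude
```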
### Count Models
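Count the total number of listed models:

```shell
opencode models | wc -l
```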
### Show Specific Range
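Print an arbitrary slice of the output (the line range is just an example):

```shell
# Show lines 10 through 20 of the model list
opencode models | sed -n '10,20p'
```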
## Troubleshooting
### No Models Shown
**Problem:** Command returns empty or shows no models

**Solutions:**

- Run `opencode models --refresh` to update the cache
- Check internet connectivity
- Verify you've authenticated with `opencode auth login`
- Check that the provider is enabled in your config
### Provider Not Found
**Problem:** `Provider not found: <provider>`

**Solutions:**

- Check the provider ID spelling
- Run `opencode models` without arguments to see available providers
- Ensure the provider is configured in `opencode.json`
- Authenticate with the provider: `opencode auth login`
### Models Out of Date
**Problem:** New models are missing from the list

**Solutions:**

- Run `opencode models --refresh`
- Check models.dev for the latest models
- Verify OpenCode is up to date: `opencode upgrade`
## Environment Variables
The `models` command respects these environment variables:

| Variable | Description |
|---|---|
| `OPENCODE_DISABLE_MODELS_FETCH` | Disable fetching models from remote sources |
| `OPENCODE_MODELS_URL` | Custom URL for fetching model configuration |
| `OPENCODE_ENABLE_EXPERIMENTAL_MODELS` | Show experimental/beta models |
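For example, a one-off run against a custom registry might look like this (the URL is a placeholder, not a real endpoint):

```shell
OPENCODE_MODELS_URL=https://models.example.internal/api.json opencode models --refresh
```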
## Related Commands
- **auth**: Authenticate with model providers
- **Configuration**: Configure default models
- **Providers**: Learn about available providers
- **TUI**: Using models in the interface