Supported providers
OpenCode supports the following AI providers (in order of automatic detection priority):

- GitHub Copilot - Free with GitHub Copilot subscription
- Anthropic - Claude models
- OpenAI - GPT models
- Google Gemini - Gemini models
- Groq - Fast inference for open-source models
- OpenRouter - Access to multiple models through one API
- AWS Bedrock - Claude models via AWS
- Azure OpenAI - OpenAI models via Azure
- Google Cloud VertexAI - Gemini models via Google Cloud
- xAI - Grok models
Anthropic
Provider for Claude models with industry-leading performance.

Authentication
Your Anthropic API key. Get one at console.anthropic.com
Configuration
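A minimal shell sketch of the API-key setup. The `ANTHROPIC_API_KEY` variable name is the conventional one for Anthropic clients; verify it against your OpenCode version.

```shell
# Assumption: OpenCode picks up the standard ANTHROPIC_API_KEY variable.
export ANTHROPIC_API_KEY="sk-ant-your-key-here"  # replace with your key from console.anthropic.com
```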
Supported models
| Model ID | Name | Context Window | Default Max Tokens | Cost (Input/Output per 1M tokens) | Features |
|---|---|---|---|---|---|
| claude-4-sonnet | Claude 4 Sonnet | 200,000 | 50,000 | 15.00 | Extended thinking, attachments |
| claude-4-opus | Claude 4 Opus | 200,000 | 4,096 | 75.00 | Attachments |
| claude-3.7-sonnet | Claude 3.7 Sonnet | 200,000 | 50,000 | 15.00 | Extended thinking, attachments |
| claude-3.5-sonnet | Claude 3.5 Sonnet | 200,000 | 5,000 | 15.00 | Attachments |
| claude-3.5-haiku | Claude 3.5 Haiku | 200,000 | 4,096 | 4.00 | Attachments |
| claude-3-opus | Claude 3 Opus | 200,000 | 4,096 | 75.00 | Attachments |
| claude-3-haiku | Claude 3 Haiku | 200,000 | 4,096 | 1.25 | Attachments |
Extended thinking models support the reasoningEffort parameter for deeper analysis.

OpenAI
Provider for GPT models including the reasoning-capable o-series.

Authentication
Your OpenAI API key. Get one at platform.openai.com
Configuration
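A minimal shell sketch, assuming OpenCode reads the standard `OPENAI_API_KEY` environment variable:

```shell
# Assumption: OpenCode reads the standard OPENAI_API_KEY variable.
export OPENAI_API_KEY="sk-your-key-here"  # replace with your key from platform.openai.com
```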
Supported models
| Model ID | Name | Context Window | Default Max Tokens | Cost (Input/Output per 1M tokens) | Features |
|---|---|---|---|---|---|
| gpt-4.1 | GPT 4.1 | 1,047,576 | 20,000 | 8.00 | Attachments, prompt caching |
| gpt-4.1-mini | GPT 4.1 Mini | 200,000 | 20,000 | 1.60 | Attachments, prompt caching |
| gpt-4.1-nano | GPT 4.1 Nano | 1,047,576 | 20,000 | 0.40 | Attachments, prompt caching |
| gpt-4.5-preview | GPT 4.5 Preview | 128,000 | 15,000 | 150.00 | Attachments, prompt caching |
| gpt-4o | GPT-4o | 128,000 | 4,096 | 10.00 | Attachments, prompt caching |
| gpt-4o-mini | GPT-4o Mini | 128,000 | - | 0.60 | Attachments, prompt caching |
| o1 | o1 | 200,000 | 50,000 | 60.00 | Reasoning, attachments, prompt caching |
| o1-pro | o1 Pro | 200,000 | 50,000 | 600.00 | Reasoning, attachments |
| o1-mini | o1 Mini | 128,000 | 50,000 | 4.40 | Reasoning, attachments, prompt caching |
| o3 | o3 | 200,000 | - | 40.00 | Reasoning, attachments, prompt caching |
| o3-mini | o3 Mini | 200,000 | 50,000 | 4.40 | Reasoning, prompt caching |
| o4-mini | o4 Mini | 128,000 | 50,000 | 4.40 | Reasoning, attachments, prompt caching |
Reasoning models (o-series) support the reasoningEffort parameter: low, medium, or high.

Google Gemini
Provider for Gemini models with large context windows.

Authentication
Your Google AI Studio API key. Get one at aistudio.google.com
Configuration
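A minimal shell sketch, assuming OpenCode reads the `GEMINI_API_KEY` environment variable for Google AI Studio keys (verify the variable name against your OpenCode version):

```shell
# Assumption: OpenCode reads GEMINI_API_KEY for Google AI Studio keys.
export GEMINI_API_KEY="your-aistudio-key"  # replace with your key from aistudio.google.com
```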
Supported models
| Model ID | Name | Context Window | Default Max Tokens | Cost (Input/Output per 1M tokens) | Features |
|---|---|---|---|---|---|
| gemini-2.5 | Gemini 2.5 Pro | 1,000,000 | 50,000 | 10.00 | Attachments |
| gemini-2.5-flash | Gemini 2.5 Flash | 1,000,000 | 50,000 | 0.60 | Attachments |
| gemini-2.0-flash | Gemini 2.0 Flash | 1,000,000 | 6,000 | 0.40 | Attachments |
| gemini-2.0-flash-lite | Gemini 2.0 Flash Lite | 1,000,000 | 6,000 | 0.30 | Attachments |
Groq
Provider for fast inference with open-source models.

Authentication
Your Groq API key. Get one at console.groq.com
Configuration
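A minimal shell sketch, assuming the conventional `GROQ_API_KEY` variable name:

```shell
# Assumption: OpenCode reads the conventional GROQ_API_KEY variable.
export GROQ_API_KEY="gsk_your_key_here"  # replace with your key from console.groq.com
```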
Supported models
| Model ID | Name | Context Window | Cost (Input/Output per 1M tokens) |
|---|---|---|---|
| qwen-qwq | Qwen QwQ 32B | 128,000 | 0.39 |
| llama-3.3-70b-versatile | Llama 3.3 70B Versatile | 128,000 | 0.79 |
| meta-llama/llama-4-scout-17b-16e-instruct | Llama 4 Scout | 128,000 | 0.34 |
| meta-llama/llama-4-maverick-17b-128e-instruct | Llama 4 Maverick | 128,000 | 0.20 |
| deepseek-r1-distill-llama-70b | DeepSeek R1 Distill | 128,000 | 0.99 |
OpenRouter
Provider for accessing multiple models through a single API.

Authentication
Your OpenRouter API key. Get one at openrouter.ai/keys
Configuration
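A minimal shell sketch, assuming the conventional `OPENROUTER_API_KEY` variable name:

```shell
# Assumption: OpenCode reads the conventional OPENROUTER_API_KEY variable.
export OPENROUTER_API_KEY="sk-or-your-key"  # replace with your key from openrouter.ai/keys
```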
Supported models
OpenRouter provides access to models from multiple providers. Prefix model IDs with openrouter.:

- openrouter.gpt-4.1, openrouter.gpt-4.1-mini, openrouter.gpt-4.1-nano
- openrouter.o1, openrouter.o1-mini, openrouter.o3, openrouter.o3-mini, openrouter.o4-mini
- openrouter.claude-3.7-sonnet, openrouter.claude-3.5-sonnet, openrouter.claude-3.5-haiku
- openrouter.gemini-2.5, openrouter.gemini-2.5-flash
- openrouter.deepseek-r1-free - Free access to DeepSeek R1
Pricing matches the underlying provider. See OpenRouter’s model pricing for details.
AWS Bedrock
Provider for Claude models via AWS infrastructure.

Authentication
Bedrock uses AWS credentials. OpenCode detects credentials in this order:

- AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables
- AWS_PROFILE or AWS_DEFAULT_PROFILE environment variables
- EC2 instance profiles or ECS task roles
AWS region for Bedrock (e.g., us-east-1, us-west-2)

Configuration
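A minimal shell sketch using static credentials. The access-key variable names are documented above; `AWS_REGION` is the standard AWS region variable, though verify it is the one OpenCode reads. An `AWS_PROFILE` works as an alternative.

```shell
# Static-credential setup; AWS_PROFILE is an alternative (see detection order above).
export AWS_ACCESS_KEY_ID="AKIA-your-key-id"       # placeholder
export AWS_SECRET_ACCESS_KEY="your-secret-key"    # placeholder
export AWS_REGION="us-east-1"                     # assumption: standard region variable
```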
Supported models
| Model ID | Name | Cost (Input/Output per 1M tokens) |
|---|---|---|
| bedrock.claude-3.7-sonnet | Claude 3.7 Sonnet | 15.00 |
Bedrock models automatically disable prompt caching because the Bedrock API does not support it.
Azure OpenAI
Provider for OpenAI models via Azure infrastructure.

Authentication
Azure OpenAI supports two authentication methods: API key and Microsoft Entra ID.

API key authentication requires:

- Your Azure OpenAI endpoint (e.g., https://your-resource.openai.azure.com)
- Your Azure OpenAI API key (optional when using Entra ID)
- The Azure OpenAI API version

If AZURE_OPENAI_API_KEY is not provided, OpenCode uses Azure's DefaultAzureCredential for authentication.
Configuration
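A minimal shell sketch, assuming the conventional Azure OpenAI variable names (`AZURE_OPENAI_ENDPOINT` and `AZURE_OPENAI_API_VERSION` are assumptions; `AZURE_OPENAI_API_KEY` is named above):

```shell
# Assumption: conventional Azure OpenAI variable names.
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com"
export AZURE_OPENAI_API_KEY="your-key"          # omit to fall back to DefaultAzureCredential
export AZURE_OPENAI_API_VERSION="2024-10-21"    # example version; use one your resource supports
```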
Supported models
Azure OpenAI supports the same models as OpenAI. Prefix model IDs with azure.:

- azure.gpt-4.1, azure.gpt-4.1-mini, azure.gpt-4.1-nano
- azure.gpt-4o, azure.gpt-4o-mini
- azure.o1, azure.o1-mini, azure.o3, azure.o3-mini, azure.o4-mini
Pricing is similar to OpenAI’s standard pricing but may vary by region.
Google Cloud VertexAI
Provider for Gemini models via Google Cloud infrastructure.

Authentication
VertexAI uses Google Cloud credentials. OpenCode detects credentials when these environment variables are set:

- GOOGLE_CLOUD_PROJECT - your Google Cloud project ID
- GOOGLE_CLOUD_REGION (or GOOGLE_CLOUD_LOCATION) - the Google Cloud region (e.g., us-central1)
Configuration
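A minimal shell sketch using the variable names documented above (ambient Google Cloud credentials, e.g. from gcloud, are still required):

```shell
# Variable names as documented above; GOOGLE_CLOUD_LOCATION also works for the region.
export GOOGLE_CLOUD_PROJECT="your-project-id"
export GOOGLE_CLOUD_REGION="us-central1"
```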
Supported models
| Model ID | Name | Context Window | Default Max Tokens |
|---|---|---|---|
| vertexai.gemini-2.5 | Gemini 2.5 Pro | 1,000,000 | 50,000 |
| vertexai.gemini-2.5-flash | Gemini 2.5 Flash | 1,000,000 | 50,000 |
GitHub Copilot
Provider for models via GitHub Copilot subscription. Includes access to multiple models at no additional cost.

Authentication
GitHub Copilot authentication is auto-detected from:

- GITHUB_TOKEN environment variable
- GitHub Copilot CLI configuration:
  - ~/.config/github-copilot/hosts.json (Linux/macOS)
  - ~/.config/github-copilot/apps.json (Linux/macOS)
  - %LOCALAPPDATA%\github-copilot\ (Windows)
Configuration
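A minimal shell sketch using the `GITHUB_TOKEN` detection path documented above (the Copilot CLI config files work without any environment variable):

```shell
# GITHUB_TOKEN is one of the documented detection paths; CLI config files also work.
export GITHUB_TOKEN="ghp_your_token_here"  # placeholder token
```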
Supported models
All models are included with a GitHub Copilot subscription at $0 cost:

| Model ID | Name | Context Window | Default Max Tokens | Features |
|---|---|---|---|---|
| copilot.gpt-4.1 | GPT 4.1 | 128,000 | 16,384 | Reasoning, attachments |
| copilot.gpt-4o | GPT-4o | 128,000 | 16,384 | Attachments |
| copilot.gpt-4o-mini | GPT-4o Mini | 128,000 | 4,096 | Attachments |
| copilot.claude-3.7-sonnet | Claude 3.7 Sonnet | 200,000 | 16,384 | Attachments |
| copilot.claude-sonnet-4 | Claude Sonnet 4 | 128,000 | 16,000 | Attachments |
| copilot.o1 | o1 | 200,000 | 100,000 | Reasoning |
| copilot.o3-mini | o3-mini | 200,000 | 100,000 | Reasoning |
| copilot.o4-mini | o4-mini | 128,000 | 16,384 | Reasoning, attachments |
| copilot.gemini-2.5-pro | Gemini 2.5 Pro | 128,000 | 64,000 | Attachments |
| copilot.gemini-2.0-flash | Gemini 2.0 Flash | 1,000,000 | 8,192 | Attachments |
GitHub Copilot provides the best value with access to multiple providers at no additional cost beyond the subscription.
xAI
Provider for Grok models.

Authentication
Your xAI API key
Configuration
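A minimal shell sketch. The `XAI_API_KEY` variable name is an assumption based on common convention; verify it against your OpenCode version's documentation.

```shell
# Assumption: OpenCode reads XAI_API_KEY; confirm the variable name for your version.
export XAI_API_KEY="your-xai-key"  # placeholder
```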
Supported models
| Model ID | Name | Context Window | Default Max Tokens | Cost (Input/Output per 1M tokens) |
|---|---|---|---|---|
| grok-3-beta | Grok 3 Beta | 131,072 | 20,000 | 15.00 |
| grok-3-mini-beta | Grok 3 Mini Beta | 131,072 | 20,000 | 0.50 |
| grok-3-fast-beta | Grok 3 Fast Beta | 131,072 | 20,000 | 25.00 |
| grok-3-mini-fast-beta | Grok 3 Mini Fast Beta | 131,072 | 20,000 | 4.00 |
Default model selection
OpenCode automatically selects default models based on available providers, in this priority order:

- GitHub Copilot → copilot.gpt-4o
- Anthropic → claude-4-sonnet
- OpenAI → gpt-4.1 (coder), gpt-4.1-mini (task/title)
- Gemini → gemini-2.5 (coder), gemini-2.5-flash (task/title)
- Groq → qwen-qwq
- OpenRouter → openrouter.claude-3.7-sonnet
- xAI → grok-3-beta
- AWS Bedrock → bedrock.claude-3.7-sonnet
- Azure OpenAI → azure.gpt-4.1
- VertexAI → vertexai.gemini-2.5

You can override defaults by explicitly configuring the agents section in your configuration file.
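As a hypothetical sketch (the exact schema may differ between OpenCode versions; the coder and task/title agent names come from the default-selection list above), an agents override might look like:

```json
{
  "agents": {
    "coder": { "model": "claude-4-sonnet" },
    "task": { "model": "claude-3.5-haiku" },
    "title": { "model": "claude-3.5-haiku" }
  }
}
```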