OpenCode supports multiple AI providers, each with their own authentication methods and model offerings. This page documents all supported providers and their configuration requirements.

Supported providers

OpenCode supports the following AI providers (in order of automatic detection priority):
  1. GitHub Copilot - Free with GitHub Copilot subscription
  2. Anthropic - Claude models
  3. OpenAI - GPT models
  4. Google Gemini - Gemini models
  5. Groq - Fast inference for open-source models
  6. OpenRouter - Access to multiple models through one API
  7. AWS Bedrock - Claude models via AWS
  8. Azure OpenAI - OpenAI models via Azure
  9. Google Cloud VertexAI - Gemini models via Google Cloud
  10. xAI - Grok models
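
Providers are configured under a top-level providers key in the OpenCode configuration file. As a sketch, using only the apiKey and disabled fields documented in the per-provider sections below, a configuration that enables Anthropic while explicitly disabling Copilot might look like:

```json
{
  "providers": {
    "anthropic": {
      "apiKey": "sk-ant-api03-...",
      "disabled": false
    },
    "copilot": {
      "disabled": true
    }
  }
}
```

Setting disabled to true removes a provider from automatic detection even when its credentials are present.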

Anthropic

Provider for Claude models with industry-leading performance.

Authentication

ANTHROPIC_API_KEY (string, required)
Your Anthropic API key. Get one at console.anthropic.com.

Configuration

{
  "providers": {
    "anthropic": {
      "apiKey": "sk-ant-api03-...",
      "disabled": false
    }
  }
}

Supported models

| Model ID | Name | Context Window | Default Max Tokens | Cost (Input/Output per 1M tokens) | Features |
|----------|------|----------------|--------------------|-----------------------------------|----------|
| claude-4-sonnet | Claude 4 Sonnet | 200,000 | 50,000 | $3.00 / $15.00 | Extended thinking, attachments |
| claude-4-opus | Claude 4 Opus | 200,000 | 4,096 | $15.00 / $75.00 | Attachments |
| claude-3.7-sonnet | Claude 3.7 Sonnet | 200,000 | 50,000 | $3.00 / $15.00 | Extended thinking, attachments |
| claude-3.5-sonnet | Claude 3.5 Sonnet | 200,000 | 5,000 | $3.00 / $15.00 | Attachments |
| claude-3.5-haiku | Claude 3.5 Haiku | 200,000 | 4,096 | $0.80 / $4.00 | Attachments |
| claude-3-opus | Claude 3 Opus | 200,000 | 4,096 | $15.00 / $75.00 | Attachments |
| claude-3-haiku | Claude 3 Haiku | 200,000 | 4,096 | $0.25 / $1.25 | Attachments |
Extended thinking models support the reasoningEffort parameter for deeper analysis.

OpenAI

Provider for GPT models including the reasoning-capable o-series.

Authentication

OPENAI_API_KEY (string, required)
Your OpenAI API key. Get one at platform.openai.com.

Configuration

{
  "providers": {
    "openai": {
      "apiKey": "sk-...",
      "disabled": false
    }
  }
}

Supported models

| Model ID | Name | Context Window | Default Max Tokens | Cost (Input/Output per 1M tokens) | Features |
|----------|------|----------------|--------------------|-----------------------------------|----------|
| gpt-4.1 | GPT 4.1 | 1,047,576 | 20,000 | $2.00 / $8.00 | Attachments, prompt caching |
| gpt-4.1-mini | GPT 4.1 Mini | 200,000 | 20,000 | $0.40 / $1.60 | Attachments, prompt caching |
| gpt-4.1-nano | GPT 4.1 Nano | 1,047,576 | 20,000 | $0.10 / $0.40 | Attachments, prompt caching |
| gpt-4.5-preview | GPT 4.5 Preview | 128,000 | 15,000 | $75.00 / $150.00 | Attachments, prompt caching |
| gpt-4o | GPT-4o | 128,000 | 4,096 | $2.50 / $10.00 | Attachments, prompt caching |
| gpt-4o-mini | GPT-4o Mini | 128,000 | - | $0.15 / $0.60 | Attachments, prompt caching |
| o1 | o1 | 200,000 | 50,000 | $15.00 / $60.00 | Reasoning, attachments, prompt caching |
| o1-pro | o1 Pro | 200,000 | 50,000 | $150.00 / $600.00 | Reasoning, attachments |
| o1-mini | o1 Mini | 128,000 | 50,000 | $1.10 / $4.40 | Reasoning, attachments, prompt caching |
| o3 | o3 | 200,000 | - | $10.00 / $40.00 | Reasoning, attachments, prompt caching |
| o3-mini | o3 Mini | 200,000 | 50,000 | $1.10 / $4.40 | Reasoning, prompt caching |
| o4-mini | o4 Mini | 128,000 | 50,000 | $1.10 / $4.40 | Reasoning, attachments, prompt caching |
Reasoning models (o-series) support the reasoningEffort parameter: low, medium, or high.
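
As an illustrative sketch, selecting a reasoning model with a higher effort level might look like the following. The placement of reasoningEffort inside the agents section is an assumption here; consult the configuration reference for the authoritative schema.

```json
{
  "agents": {
    "coder": {
      "model": "o3-mini",
      "reasoningEffort": "high"
    }
  }
}
```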

Google Gemini

Provider for Gemini models with large context windows.

Authentication

GEMINI_API_KEY (string, required)
Your Google AI Studio API key. Get one at aistudio.google.com.

Configuration

{
  "providers": {
    "gemini": {
      "apiKey": "AI...",
      "disabled": false
    }
  }
}

Supported models

| Model ID | Name | Context Window | Default Max Tokens | Cost (Input/Output per 1M tokens) | Features |
|----------|------|----------------|--------------------|-----------------------------------|----------|
| gemini-2.5 | Gemini 2.5 Pro | 1,000,000 | 50,000 | $1.25 / $10.00 | Attachments |
| gemini-2.5-flash | Gemini 2.5 Flash | 1,000,000 | 50,000 | $0.15 / $0.60 | Attachments |
| gemini-2.0-flash | Gemini 2.0 Flash | 1,000,000 | 6,000 | $0.10 / $0.40 | Attachments |
| gemini-2.0-flash-lite | Gemini 2.0 Flash Lite | 1,000,000 | 6,000 | $0.05 / $0.30 | Attachments |

Groq

Provider for fast inference with open-source models.

Authentication

GROQ_API_KEY (string, required)
Your Groq API key. Get one at console.groq.com.

Configuration

{
  "providers": {
    "groq": {
      "apiKey": "gsk_...",
      "disabled": false
    }
  }
}

Supported models

| Model ID | Name | Context Window | Cost (Input/Output per 1M tokens) |
|----------|------|----------------|-----------------------------------|
| qwen-qwq | Qwen QwQ 32B | 128,000 | $0.29 / $0.39 |
| llama-3.3-70b-versatile | Llama 3.3 70B Versatile | 128,000 | $0.59 / $0.79 |
| meta-llama/llama-4-scout-17b-16e-instruct | Llama 4 Scout | 128,000 | $0.11 / $0.34 |
| meta-llama/llama-4-maverick-17b-128e-instruct | Llama 4 Maverick | 128,000 | $0.20 / $0.20 |
| deepseek-r1-distill-llama-70b | DeepSeek R1 Distill | 128,000 | $0.75 / $0.99 |

OpenRouter

Provider for accessing multiple models through a single API.

Authentication

OPENROUTER_API_KEY (string, required)
Your OpenRouter API key. Get one at openrouter.ai/keys.

Configuration

{
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-...",
      "disabled": false
    }
  }
}

Supported models

OpenRouter provides access to models from multiple providers. Prefix model IDs with openrouter.:
  • openrouter.gpt-4.1, openrouter.gpt-4.1-mini, openrouter.gpt-4.1-nano
  • openrouter.o1, openrouter.o1-mini, openrouter.o3, openrouter.o3-mini, openrouter.o4-mini
  • openrouter.claude-3.7-sonnet, openrouter.claude-3.5-sonnet, openrouter.claude-3.5-haiku
  • openrouter.gemini-2.5, openrouter.gemini-2.5-flash
  • openrouter.deepseek-r1-free - Free access to DeepSeek R1
Pricing matches the underlying provider. See OpenRouter’s model pricing for details.
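
For example, to route the coder agent through OpenRouter, reference a prefixed model ID in the agents section. This is a hedged sketch; the agents field names are assumed from the default-model-selection notes rather than a full schema.

```json
{
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-..."
    }
  },
  "agents": {
    "coder": {
      "model": "openrouter.deepseek-r1-free"
    }
  }
}
```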

AWS Bedrock

Provider for Claude models via AWS infrastructure.

Authentication

Bedrock uses AWS credentials. OpenCode detects credentials in this order:
  1. AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables
  2. AWS_PROFILE or AWS_DEFAULT_PROFILE environment variables
  3. EC2 instance profiles or ECS task roles
AWS_REGION (string, default: "us-east-1")
AWS region for Bedrock (e.g., us-east-1, us-west-2).

Configuration

{
  "providers": {
    "bedrock": {
      "disabled": false
    }
  }
}

Supported models

| Model ID | Name | Cost (Input/Output per 1M tokens) |
|----------|------|-----------------------------------|
| bedrock.claude-3.7-sonnet | Claude 3.7 Sonnet | $3.00 / $15.00 |
Bedrock models automatically disable prompt caching as it’s not supported in the Bedrock API.

Azure OpenAI

Provider for OpenAI models via Azure infrastructure.

Authentication

Azure OpenAI supports two authentication methods. API key authentication:
AZURE_OPENAI_ENDPOINT (string, required)
Your Azure OpenAI endpoint (e.g., https://your-resource.openai.azure.com).
AZURE_OPENAI_API_KEY (string, optional)
Your Azure OpenAI API key. Optional when using Entra ID.
AZURE_OPENAI_API_VERSION (string, default: "2025-04-01-preview")
Azure OpenAI API version.
Entra ID authentication: If AZURE_OPENAI_API_KEY is not provided, OpenCode uses Azure’s DefaultAzureCredential for authentication.

Configuration

{
  "providers": {
    "azure": {
      "apiKey": "your-key",
      "disabled": false
    }
  }
}

Supported models

Azure OpenAI supports the same models as OpenAI. Prefix model IDs with azure.:
  • azure.gpt-4.1, azure.gpt-4.1-mini, azure.gpt-4.1-nano
  • azure.gpt-4o, azure.gpt-4o-mini
  • azure.o1, azure.o1-mini, azure.o3, azure.o3-mini, azure.o4-mini
Pricing is similar to OpenAI’s standard pricing but may vary by region.

Google Cloud VertexAI

Provider for Gemini models via Google Cloud infrastructure.

Authentication

VertexAI uses Google Cloud credentials. OpenCode detects credentials when these environment variables are set:
VERTEXAI_PROJECT (string, required)
Your Google Cloud project ID.
VERTEXAI_LOCATION (string, required)
Google Cloud region (e.g., us-central1).
Alternatively:
  • GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_REGION (or GOOGLE_CLOUD_LOCATION)

Configuration

{
  "providers": {
    "vertexai": {
      "disabled": false
    }
  }
}

Supported models

| Model ID | Name | Context Window | Default Max Tokens |
|----------|------|----------------|--------------------|
| vertexai.gemini-2.5 | Gemini 2.5 Pro | 1,000,000 | 50,000 |
| vertexai.gemini-2.5-flash | Gemini 2.5 Flash | 1,000,000 | 50,000 |

GitHub Copilot

Provider for models via GitHub Copilot subscription. Includes access to multiple models at no additional cost.

Authentication

GitHub Copilot authentication is auto-detected from:
  1. GITHUB_TOKEN environment variable
  2. GitHub Copilot CLI configuration:
    • ~/.config/github-copilot/hosts.json (Linux/macOS)
    • ~/.config/github-copilot/apps.json (Linux/macOS)
    • %LOCALAPPDATA%\github-copilot\ (Windows)

Configuration

{
  "providers": {
    "copilot": {
      "disabled": false
    }
  }
}

Supported models

All models are included with a GitHub Copilot subscription at no additional cost:
| Model ID | Name | Context Window | Default Max Tokens | Features |
|----------|------|----------------|--------------------|----------|
| copilot.gpt-4.1 | GPT 4.1 | 128,000 | 16,384 | Reasoning, attachments |
| copilot.gpt-4o | GPT-4o | 128,000 | 16,384 | Attachments |
| copilot.gpt-4o-mini | GPT-4o Mini | 128,000 | 4,096 | Attachments |
| copilot.claude-3.7-sonnet | Claude 3.7 Sonnet | 200,000 | 16,384 | Attachments |
| copilot.claude-sonnet-4 | Claude Sonnet 4 | 128,000 | 16,000 | Attachments |
| copilot.o1 | o1 | 200,000 | 100,000 | Reasoning |
| copilot.o3-mini | o3-mini | 200,000 | 100,000 | Reasoning |
| copilot.o4-mini | o4-mini | 128,000 | 16,384 | Reasoning, attachments |
| copilot.gemini-2.5-pro | Gemini 2.5 Pro | 128,000 | 64,000 | Attachments |
| copilot.gemini-2.0-flash | Gemini 2.0 Flash | 1,000,000 | 8,192 | Attachments |
If you already have a Copilot subscription, this is often the most cost-effective option, since it bundles models from multiple providers at no additional cost.

xAI

Provider for Grok models.

Authentication

XAI_API_KEY (string, required)
Your xAI API key.

Configuration

{
  "providers": {
    "xai": {
      "apiKey": "xai-...",
      "disabled": false
    }
  }
}

Supported models

| Model ID | Name | Context Window | Default Max Tokens | Cost (Input/Output per 1M tokens) |
|----------|------|----------------|--------------------|-----------------------------------|
| grok-3-beta | Grok 3 Beta | 131,072 | 20,000 | $3.00 / $15.00 |
| grok-3-mini-beta | Grok 3 Mini Beta | 131,072 | 20,000 | $0.30 / $0.50 |
| grok-3-fast-beta | Grok 3 Fast Beta | 131,072 | 20,000 | $5.00 / $25.00 |
| grok-3-mini-fast-beta | Grok 3 Mini Fast Beta | 131,072 | 20,000 | $0.60 / $4.00 |

Default model selection

OpenCode automatically selects default models based on available providers in this priority order:
  1. GitHub Copilot - copilot.gpt-4o
  2. Anthropic - claude-4-sonnet
  3. OpenAI - gpt-4.1 (coder), gpt-4.1-mini (task/title)
  4. Gemini - gemini-2.5 (coder), gemini-2.5-flash (task/title)
  5. Groq - qwen-qwq
  6. OpenRouter - openrouter.claude-3.7-sonnet
  7. xAI - grok-3-beta
  8. AWS Bedrock - bedrock.claude-3.7-sonnet
  9. Azure OpenAI - azure.gpt-4.1
  10. VertexAI - vertexai.gemini-2.5
You can override defaults by explicitly configuring the agents section in your configuration file.
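
For instance, an override that uses Claude 4 Sonnet for coding and a cheaper model for task/title generation might look like the following. This is a sketch only: the coder, task, and title agent names are inferred from the defaults listed above, not from a published schema.

```json
{
  "agents": {
    "coder": {
      "model": "claude-4-sonnet"
    },
    "task": {
      "model": "claude-3.5-haiku"
    },
    "title": {
      "model": "claude-3.5-haiku"
    }
  }
}
```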
