- Add the API keys for the provider using the /connect command.
- Configure the provider in your OpenCode config.
Credentials
When you add a provider’s API keys with the /connect command, they are stored in ~/.local/share/opencode/auth.json.
Config
You can customize the providers through the provider section in your OpenCode config.
Base URL
You can customize the base URL for any provider by setting the baseURL option. This is useful when using proxy services or custom endpoints.
opencode.json
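For instance, a sketch of a baseURL override for a proxy (the provider ID and URL below are illustrative):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "anthropic": {
      "options": {
        "baseURL": "https://my-proxy.example.com/v1"
      }
    }
  }
}
```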
OpenCode Zen
OpenCode Zen is a list of models provided by the OpenCode team that have been tested and verified to work well with OpenCode.
Run the /connect command
Run the /connect command in the TUI, select opencode, and head to opencode.ai/auth.
Provider Directory
Let’s look at some of the providers in detail. If you’d like to add a provider to the list, feel free to open a PR.
Anthropic
Select auth method
Here you can select the Claude Pro/Max option and it’ll open your browser and ask you to authenticate.
Using your Claude Pro/Max subscription in OpenCode is not officially supported by Anthropic.
Using API keys
You can also select Create an API Key if you don’t have a Pro/Max subscription. It’ll open your browser, ask you to log in to Anthropic, and give you a code you can paste in your terminal. Or if you already have an API key, you can select Manually enter API Key and paste it in your terminal.
Amazon Bedrock
To use Amazon Bedrock with OpenCode:
Request model access
Head over to the Model catalog in the Amazon Bedrock console and request access to the models you want.
Configure authentication
Choose one of the following methods:
Environment Variables (Quick Start)
Set one of these environment variables while running opencode:
Or add them to your bash profile:
~/.bash_profile
Configuration File (Recommended)
For project-specific or persistent configuration, use opencode.json:
opencode.json
Available options:
- region - AWS region (e.g., us-east-1, eu-west-1)
- profile - AWS named profile from ~/.aws/credentials
- endpoint - Custom endpoint URL for VPC endpoints (alias for the generic baseURL option)
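For example, a sketch of a Bedrock entry using a named profile (the amazon-bedrock provider ID, region, and profile values are illustrative; the option names follow the list above):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "amazon-bedrock": {
      "options": {
        "region": "us-east-1",
        "profile": "my-profile"
      }
    }
  }
}
```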
Authentication Methods
- AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY: Create an IAM user and generate access keys in the AWS Console
- AWS_PROFILE: Use named profiles from ~/.aws/credentials. First configure with aws configure --profile my-profile or aws sso login
- AWS_BEARER_TOKEN_BEDROCK: Generate long-term API keys from the Amazon Bedrock console
- AWS_WEB_IDENTITY_TOKEN_FILE / AWS_ROLE_ARN: For EKS IRSA (IAM Roles for Service Accounts) or other Kubernetes environments with OIDC federation
Authentication Precedence
Amazon Bedrock uses the following authentication priority:
- Bearer Token - AWS_BEARER_TOKEN_BEDROCK environment variable or token from the /connect command
- AWS Credential Chain - Profile, access keys, shared credentials, IAM roles, Web Identity Tokens (EKS IRSA), instance metadata

When a bearer token is set (via /connect or AWS_BEARER_TOKEN_BEDROCK), it takes precedence over all AWS credential methods, including configured profiles.
For custom inference profiles, use the model and provider name in the key and set the id property to the ARN. This ensures correct caching:
opencode.json
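A sketch of what this looks like (the model key and ARN below are placeholders):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "amazon-bedrock": {
      "models": {
        "anthropic.claude-sonnet-4": {
          "id": "arn:aws:bedrock:us-east-1:123456789012:application-inference-profile/example"
        }
      }
    }
  }
}
```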
OpenAI
We recommend signing up for ChatGPT Plus or Pro.
Select auth method
Here you can select the ChatGPT Plus/Pro option and it’ll open your browser and ask you to authenticate.
Using API keys
If you already have an API key, you can select Manually enter API Key and paste it in your terminal.
GitHub Copilot
To use your GitHub Copilot subscription with opencode:
Some models might need a Pro+ subscription to use.
Authorize with GitHub
Navigate to github.com/login/device and enter the code.
Google Vertex AI
To use Google Vertex AI with OpenCode:
Check model availability
Head over to the Model Garden in the Google Cloud Console and check the models available in your region.
You need to have a Google Cloud project with Vertex AI API enabled.
Set environment variables
Set the required environment variables:

- GOOGLE_CLOUD_PROJECT: Your Google Cloud project ID
- VERTEX_LOCATION (optional): The region for Vertex AI (defaults to global)
- Authentication (choose one):
  - GOOGLE_APPLICATION_CREDENTIALS: Path to your service account JSON key file
  - Authenticate using the gcloud CLI: gcloud auth application-default login

Or add them to your bash profile:
~/.bash_profile
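A sketch of what the ~/.bash_profile entries might look like (the project ID, region, and key path are placeholders):

```shell
export GOOGLE_CLOUD_PROJECT="my-project-id"
export VERTEX_LOCATION="us-central1"
# Either point at a service account key file…
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/vertex-sa.json"
# …or skip the line above and run: gcloud auth application-default login
```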
DeepSeek
Create API key
Head over to the DeepSeek console, create an account, and click Create new API key.
Local Models
Ollama
You can configure opencode to use local models through Ollama.
opencode.json
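A sketch of such a config, assuming Ollama’s default OpenAI-compatible endpoint on port 11434 and a locally pulled llama3.2 model (both are assumptions; adjust to your setup):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "llama3.2": {
          "name": "Llama 3.2"
        }
      }
    }
  }
}
```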
- ollama is the custom provider ID. This can be any string you want.
- npm specifies the package to use for this provider. Here, @ai-sdk/openai-compatible is used for any OpenAI-compatible API.
- name is the display name for the provider in the UI.
- options.baseURL is the endpoint for the local server.
- models is a map of model IDs to their configurations. The model name will be displayed in the model selection list.
LM Studio
You can configure opencode to use local models through LM Studio.
opencode.json
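A sketch of such a config, assuming LM Studio’s local server on its default port 1234 (the model ID is a placeholder for whatever model you have loaded):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "lmstudio": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "LM Studio (local)",
      "options": {
        "baseURL": "http://127.0.0.1:1234/v1"
      },
      "models": {
        "my-local-model": {}
      }
    }
  }
}
```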
llama.cpp
You can configure opencode to use local models through llama.cpp’s llama-server utility.
opencode.json
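A sketch of such a config, assuming llama-server is running on its default port 8080 with its OpenAI-compatible /v1 endpoint (the model ID is a placeholder):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "llamacpp": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "llama.cpp (local)",
      "options": {
        "baseURL": "http://127.0.0.1:8080/v1"
      },
      "models": {
        "my-local-model": {}
      }
    }
  }
}
```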
Custom Provider
To add any OpenAI-compatible provider that’s not listed in the /connect command:
Enter provider ID
Enter a unique ID for the provider.
Choose a memorable ID, you’ll use this in your config file.
Configure in opencode.json
Create or update your opencode.json file in your project directory:
opencode.json
Here are the configuration options:
- npm: AI SDK package to use, @ai-sdk/openai-compatible for OpenAI-compatible providers
- name: Display name in UI
- models: Available models
- options.baseURL: API endpoint URL
- options.apiKey: Optionally set the API key, if not using auth
- options.headers: Optionally set custom headers
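Putting these options together, a minimal custom provider entry might look like this (the provider ID, model ID, and URL are illustrative):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "myprovider": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "My Provider",
      "options": {
        "baseURL": "https://api.myprovider.example.com/v1"
      },
      "models": {
        "my-model": {}
      }
    }
  }
}
```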
Example with Advanced Options
Here’s an example setting the apiKey, headers, and model limit options:
opencode.json
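A sketch of such a config (the provider ID, URL, header, and limits are illustrative; the {env:VAR} form stands in for the env variable syntax described below):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "myprovider": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "My Provider",
      "options": {
        "baseURL": "https://api.myprovider.example.com/v1",
        "apiKey": "{env:MYPROVIDER_API_KEY}",
        "headers": {
          "X-Custom-Header": "value"
        }
      },
      "models": {
        "my-model": {
          "limit": {
            "context": 128000,
            "output": 8192
          }
        }
      }
    }
  }
}
```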
- apiKey: Set using env variable syntax
- headers: Custom headers sent with each request
- limit.context: Maximum input tokens the model accepts
- limit.output: Maximum tokens the model can generate
The limit fields allow OpenCode to understand how much context you have left. Standard providers pull these values from models.dev automatically.
Troubleshooting
If you are having trouble configuring a provider, check the following:
Check auth setup
Run opencode auth list to see if the credentials for the provider are added to your config.
This doesn’t apply to providers like Amazon Bedrock, which rely on environment variables for their auth.
For custom providers, verify config
Check the opencode config and:
- Make sure the provider ID used in the /connect command matches the ID in your opencode config
- The right npm package is used for the provider. For example, use @ai-sdk/cerebras for Cerebras. And for all other OpenAI-compatible providers, use @ai-sdk/openai-compatible
- Check the correct API endpoint is used in the options.baseURL field