This guide covers common issues you might encounter while using OpenCode and their solutions.

Installation issues

Symptoms: Running opencode returns “command not found”

Solution:
1. Verify installation

Check if OpenCode is installed:
which opencode
2. Update PATH

If installed via Go:
# Add to ~/.bashrc or ~/.zshrc
export PATH="$PATH:$HOME/go/bin"
Then reload your shell:
source ~/.bashrc  # or ~/.zshrc
3. Reinstall if necessary

Try reinstalling OpenCode:
# Using Homebrew
brew reinstall opencode-ai/tap/opencode

# Using Go
go install github.com/opencode-ai/opencode@latest
Symptoms: Permission errors when installing or running OpenCode

Solution:
# If installed globally, ensure proper permissions
sudo chown -R $(whoami) /usr/local/bin/opencode

# Or install for current user only
go install github.com/opencode-ai/opencode@latest

Configuration issues

Symptoms: OpenCode doesn’t find your configuration file

Solution: OpenCode looks for configuration in these locations (in order):
  1. ./.opencode.json (current directory)
  2. $XDG_CONFIG_HOME/opencode/.opencode.json
  3. $HOME/.opencode.json
Create a config file in one of these locations:
# Option 1: Home directory
touch ~/.opencode.json

# Option 2: XDG config directory
mkdir -p ~/.config/opencode
touch ~/.config/opencode/.opencode.json
Symptoms: OpenCode fails to start with a JSON parsing error

Solution: Validate your JSON:
# Use jq to validate and format
jq . ~/.opencode.json
Common JSON errors:
  • Missing commas between properties
  • Trailing commas (not allowed in JSON)
  • Unquoted keys or values
  • Unclosed brackets or braces
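For example, jq rejects a trailing comma and exits non-zero, which makes it easy to script a validity check:

```shell
# The trailing comma after "false" makes this invalid JSON, so jq exits
# non-zero and the else branch runs.
if echo '{"providers": {"anthropic": {"disabled": false,}}}' | jq . >/dev/null 2>&1; then
  echo "valid JSON"
else
  echo "invalid JSON"
fi
```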
Example of valid JSON:
{
  "providers": {
    "anthropic": {
      "apiKey": "your-key",
      "disabled": false
    }
  }
}
Symptoms: Error message about an unsupported or unavailable model

Solution:
1. Check model name

Ensure the model ID is correct. See the quickstart guide for supported models.
2. Verify provider is configured

Check that the provider for your model is properly configured:
# For Anthropic
echo $ANTHROPIC_API_KEY

# For OpenAI
echo $OPENAI_API_KEY
3. Check provider status

Ensure the provider isn’t disabled:
.opencode.json
{
  "providers": {
    "anthropic": {
      "disabled": false  // Should be false
    }
  }
}

Provider and API issues

Symptoms: Authentication errors with your API key

Solution:
1. Verify API key is valid

  • Check for extra spaces or newlines
  • Ensure the key hasn’t expired
  • Verify the key has the correct permissions
2. Check environment variables

# List all OpenCode-related env vars
env | grep -E '(ANTHROPIC|OPENAI|GEMINI|GITHUB)'
3. Test API key directly

Test your API key with curl:
# Anthropic
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{"model":"claude-sonnet-4-20250514","max_tokens":10,"messages":[{"role":"user","content":"Hi"}]}'

# OpenAI
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
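If the curl tests fail with authentication errors, first confirm which keys your shell actually exports. A small helper (check_keys is an illustrative name, not an OpenCode command) reports whether each variable is set without ever printing its value:

```shell
# check_keys: report whether each provider key is set, without echoing values.
check_keys() {
  local var val
  for var in ANTHROPIC_API_KEY OPENAI_API_KEY GEMINI_API_KEY GITHUB_TOKEN; do
    val="${!var}"                          # bash indirect expansion
    if [ -n "$val" ]; then
      echo "$var is set (${#val} chars)"   # length helps spot truncated keys
    else
      echo "$var is NOT set"
    fi
  done
}

check_keys
```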
Symptoms: Error messages about rate limits or too many requests

Solution: OpenCode automatically retries rate-limited requests with exponential backoff (up to 8 retries). If you continue to hit rate limits:
  • Wait: Rate limits typically reset after a few minutes
  • Upgrade: Consider upgrading your API tier
  • Reduce load: Use smaller models for simple tasks
  • Monitor usage: Check your API usage dashboard
OpenCode respects Retry-After headers and implements exponential backoff automatically.
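For scripted API calls made outside of OpenCode, you can apply the same pattern by hand. A minimal sketch of retry-with-exponential-backoff in shell (the retry helper is illustrative, not an OpenCode feature):

```shell
# retry CMD ARGS...: run a command, retrying up to 5 times with exponential
# backoff (1s, 2s, 4s, 8s between attempts).
retry() {
  local attempt=1 delay=1
  until "$@"; do
    if [ "$attempt" -ge 5 ]; then
      echo "giving up after $attempt attempts" >&2
      return 1
    fi
    echo "attempt $attempt failed; retrying in ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))
    attempt=$((attempt + 1))
  done
}

# Example: -f makes curl treat HTTP errors such as 429 as failures,
# which triggers a retry.
# retry curl -fsS https://api.openai.com/v1/models -H "Authorization: Bearer $OPENAI_API_KEY"
```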
Symptoms: Requests time out or hang indefinitely

Solution:
1. Check network connection

Verify you can reach the API:
# ICMP ping is often blocked by API hosts; an HTTPS request is a more reliable check
curl -sI https://api.anthropic.com | head -n 1
curl -sI https://api.openai.com | head -n 1
2. Check firewall and proxy settings

  • Ensure your firewall allows HTTPS traffic
  • If behind a corporate proxy, configure proxy settings
  • Check VPN isn’t blocking API access
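If a corporate proxy is required, the standard proxy environment variables are usually enough: OpenCode is a Go program, and Go's HTTP client honors these variables by default. The values below are placeholders; substitute your proxy's host and port:

```shell
# Placeholder proxy address; replace with your proxy's actual host and port.
export HTTPS_PROXY="http://proxy.example.com:8080"
export NO_PROXY="localhost,127.0.0.1"   # keep local endpoints direct
# ...then launch opencode from the same shell.
```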
3. Verify API status

Check the provider status pages: status.anthropic.com for Anthropic and status.openai.com for OpenAI.
Symptoms: Error about exceeding maximum context length

Solution:
This error occurs when your conversation exceeds the model’s context window.
Options:
  1. Enable auto-compact (enabled by default):
    .opencode.json
    {
      "autoCompact": true
    }
    
    This automatically summarizes your conversation when approaching the limit.
  2. Manually compact the session using Ctrl+K → “Compact Session”
  3. Start a new session with Ctrl+N
  4. Use a model with a larger context window:
    • Claude 4 Sonnet: 200K tokens
    • Gemini 2.5: 1M tokens
    • GPT-4.1: 128K tokens

GitHub Copilot issues

Symptoms: OpenCode can’t find your GitHub Copilot token

Solution:
1. Authenticate with GitHub CLI

gh auth login
2. Or set token explicitly

export GITHUB_TOKEN="ghp_your_token_here"
3. Verify token location

Check that the token file exists:
ls -la ~/.config/github-copilot/
See the GitHub Copilot guide for more details.
Symptoms: Error exchanging GitHub token for Copilot bearer token

Solution:
  • Ensure Copilot chat is enabled in your GitHub settings
  • Verify your Copilot subscription is active
  • Check that your GitHub token has Copilot permissions
  • Try re-authenticating with the GitHub CLI

Self-hosted model issues

Symptoms: Connection refused to local model endpoint

Solution:
1. Verify server is running

# Check if the port is listening
netstat -an | grep :1234
# or
lsof -i :1234
2. Test endpoint

curl http://localhost:1234/v1/models
3. Check LOCAL_ENDPOINT variable

echo $LOCAL_ENDPOINT
# Should output: http://localhost:1234/v1
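These checks can be combined into a small helper that verifies something is listening at the configured endpoint. This sketch uses bash's built-in /dev/tcp, so it needs no extra tools (check_endpoint is an illustrative name, and the URL must include an explicit port):

```shell
# check_endpoint [URL]: test whether anything is listening at an
# OpenAI-compatible endpoint; defaults to $LOCAL_ENDPOINT.
check_endpoint() {
  local url="${1:-${LOCAL_ENDPOINT:-http://localhost:1234/v1}}"
  local hostport="${url#*://}"          # strip the scheme
  hostport="${hostport%%/*}"            # strip the path
  local host="${hostport%%:*}" port="${hostport##*:}"
  if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
    echo "server listening at $host:$port"
  else
    echo "nothing listening at $host:$port"
  fi
}

check_endpoint
```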
See the self-hosted models guide for more details.
Symptoms: Self-hosted model doesn’t use tools correctly

Solution: Not all models support tool/function calling. Use models known to work well:
  • Llama 3.3 70B Instruct
  • Qwen 2.5 Coder
  • Granite 3.1 (IBM)
  • Mistral Large
Ensure your inference server properly implements the OpenAI tools API.
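One way to verify tool support is to POST a minimal tools request to the server’s /chat/completions route and check whether the reply contains a tool_calls entry rather than plain prose. The model name and get_weather function below are illustrative:

```json
{
  "model": "qwen2.5-coder",
  "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get current weather for a city",
      "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"]
      }
    }
  }]
}
```

A server that implements the OpenAI tools API should respond with a tool_calls array naming get_weather; a prose answer about the weather suggests tool calling is not wired up.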

Shell and command execution issues

Symptoms: Tools like npm, python, or custom scripts aren’t found

Solution: Use a login shell to load your PATH:
.opencode.json
{
  "shell": {
    "path": "/bin/bash",
    "args": ["-l"]
  }
}
See the shell configuration guide for more details.
Symptoms: Environment variables set in your profile aren’t available

Solution: Ensure variables are exported in your profile:
~/.bash_profile
export NODE_PATH="/usr/local/lib/node_modules"
export PATH="$HOME/.local/bin:$PATH"
And use a login shell configuration.
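To see what the login shell adds, compare the two PATH values directly; any tool visible only in the login PATH requires your profile to be loaded before OpenCode can find it:

```shell
# A non-login shell skips ~/.bash_profile; a login shell (-l) sources it.
nonlogin_path=$(bash -c 'echo "$PATH"')
login_path=$(bash -lc 'echo "$PATH"')
echo "non-login: $nonlogin_path"
echo "login:     $login_path"
```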
Symptoms: Long-running commands are killed

Solution: OpenCode uses a default timeout of 2 minutes for commands. For long-running operations:
  • Break down the operation into smaller steps
  • Ask OpenCode to run the command with explicit timeout handling
  • Consider running the command outside of OpenCode for very long operations
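When running a long step outside OpenCode, an explicit limit keeps it from hanging indefinitely; `timeout` from GNU coreutils exits with status 124 when the limit is hit:

```shell
# `timeout` kills the command after the given duration.
timeout 2 sleep 1 && echo "finished within the limit"
timeout 1 sleep 3 || echo "killed at the time limit (exit status $?)"
```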

Database and session issues

Symptoms: SQLite database lock errors

Solution: This usually means another OpenCode instance is running.
1. Close other instances

Check for running OpenCode processes:
ps aux | grep opencode
Kill any orphaned processes:
killall opencode
2. Remove lock file

If the issue persists:
rm ~/.opencode/.db-lock
Symptoms: Previous conversations don’t appear

Solution: Check the database file exists:
ls -la ~/.opencode/opencode.db
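Before deleting anything, you can ask SQLite itself whether the file is corrupted; its built-in integrity check prints "ok" for a healthy database (path assumes the default location):

```shell
# PRAGMA integrity_check prints "ok" when the database is intact,
# and a list of problems otherwise.
db=~/.opencode/opencode.db
if [ -f "$db" ]; then
  sqlite3 "$db" "PRAGMA integrity_check;"
else
  echo "no database at $db"
fi
```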
If the database is corrupted, you may need to remove it (this will delete all sessions):
# Backup first
cp ~/.opencode/opencode.db ~/.opencode/opencode.db.backup

# Remove corrupted database
rm ~/.opencode/opencode.db
Symptoms: Session switching (Ctrl+A) doesn’t work

Solution:
  • Ensure you have multiple sessions created
  • Try creating a new session with Ctrl+N first
  • Check if the database is accessible
  • Restart OpenCode

Performance issues

Symptoms: OpenCode takes a long time to start

Solution:
1. Check shell startup time

Your shell profile might be slow to load:
time bash -l -c 'echo loaded'
2. Optimize shell profile

  • Remove unnecessary initialization
  • Lazy-load plugins and tools
  • Profile your shell config to find slow parts
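To find the slow part, you can time each startup file individually. A rough sketch (time_startup_file is an illustrative name; the %s%N format needs GNU date):

```shell
# time_startup_file FILE: report how long sourcing a startup file takes.
time_startup_file() {
  local start end
  start=$(date +%s%N)
  bash -c ". '$1'" 2>/dev/null
  end=$(date +%s%N)
  echo "$1: $(( (end - start) / 1000000 )) ms"
}

for f in ~/.bashrc ~/.bash_profile ~/.profile; do
  if [ -f "$f" ]; then time_startup_file "$f"; fi
done
```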
3. Check database size

Large databases can slow startup:
ls -lh ~/.opencode/opencode.db
Consider archiving old sessions.
Symptoms: OpenCode uses excessive memory

Solution:
  • Use models with smaller context windows
  • Compact long sessions
  • Restart OpenCode periodically
  • Check for memory leaks (report as a bug if persistent)
Symptoms: Responses take a long time to generate

Solution:
  • Use faster models (e.g., GPT-4o Mini, Claude Haiku)
  • Reduce maxTokens for faster responses
  • Check your network connection
  • Consider using a different provider or region

LSP integration issues

Symptoms: Language server features don’t work

Solution:
1. Verify LSP server is installed

which gopls  # for Go
which typescript-language-server  # for TypeScript
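A quick loop can report which servers are actually on your PATH. The names below are common defaults; match them to the command values in your configuration (check_servers is an illustrative name):

```shell
# check_servers NAME...: report whether each language server binary is on PATH.
check_servers() {
  local srv
  for srv in "$@"; do
    if command -v "$srv" >/dev/null 2>&1; then
      echo "$srv: $(command -v "$srv")"
    else
      echo "$srv: not found"
    fi
  done
}

check_servers gopls typescript-language-server
```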
2. Check LSP configuration

.opencode.json
{
  "lsp": {
    "go": {
      "disabled": false,
      "command": "gopls"
    }
  }
}
3. Enable LSP debug logging

.opencode.json
{
  "debugLSP": true
}
Then check the logs with Ctrl+L.
Symptoms: OpenCode doesn’t show code errors or warnings

Solution:
  • Ensure the LSP server supports diagnostics
  • Check that files are saved (LSP typically works on saved files)
  • Verify the language server is properly initialized
  • Try restarting OpenCode

Getting help

If you’re still experiencing issues:

Enable debug mode

Run OpenCode with debug logging:
opencode -d
View logs with Ctrl+L

Check GitHub issues

Search for similar issues: github.com/opencode-ai/opencode/issues

Report a bug

Create a new issue with:
  • OpenCode version
  • Operating system
  • Error messages
  • Debug logs

Join the community

Get help from other users and contributors in discussions.

Useful debug commands

# Check OpenCode version
opencode --version

# Run with debug output
opencode -d

# Check configuration
cat ~/.opencode.json | jq .

# Verify environment variables
env | grep -E '(ANTHROPIC|OPENAI|GEMINI|GITHUB|AWS|AZURE)'

# Test API connectivity
curl https://api.anthropic.com/v1/messages -I
curl https://api.openai.com/v1/models -I

# Check database
ls -lh ~/.opencode/opencode.db
sqlite3 ~/.opencode/opencode.db "SELECT COUNT(*) FROM sessions;"

# View recent logs
tail -f ~/.opencode/debug.log  # if OPENCODE_DEV_DEBUG=true
