Endpoint
OpenCode uses the standard OpenAI-compatible `/v1` endpoint.

Configuration

Edit your OpenCode config file at `~/.config/opencode/opencode.json`:
Use this configuration when API key authentication is disabled (default):
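A minimal sketch of such a config. The port (2455, taken from the dashboard URL used elsewhere in this guide) and the model ID `gpt-5.1-codex` are assumptions; substitute your deployment's base URL and model IDs:

```json
{
  "provider": {
    "codex-lb": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Codex LB",
      "options": {
        "baseURL": "http://localhost:2455/v1"
      },
      "models": {
        "gpt-5.1-codex": {
          "name": "GPT-5.1 Codex"
        }
      }
    }
  }
}
```

Because API key authentication is disabled here, no `apiKey` field is needed.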
Configuration Fields
| Field | Description | Required |
|---|---|---|
| `npm` | NPM package for the provider adapter | Yes |
| `name` | Provider display name | Yes |
| `baseURL` | Codex-LB `/v1` endpoint | Yes |
| `apiKey` | API key or `{env:VAR_NAME}` | Only if auth is enabled |
| `models` | Model configurations | Yes |
| `reasoning` | Enable reasoning mode | For reasoning models |
| `interleaved` | Reasoning output field | For reasoning models |
| `reasoningEffort` | Effort level: `low`, `medium`, or `high` | For reasoning models |
Multiple Models
You can configure multiple models from your Codex-LB instance:
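For example, the `models` map inside the `codex-lb` provider entry can list several entries (the model IDs here are placeholders; use the IDs your instance actually serves):

```json
"models": {
  "gpt-5.1-codex": { "name": "GPT-5.1 Codex" },
  "gpt-5.1-codex-mini": { "name": "GPT-5.1 Codex Mini" }
}
```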
Preserving Default Providers

The configuration above adds `codex-lb` alongside OpenCode's default providers (OpenAI, Anthropic, etc.). If you use `enabled_providers`, you must explicitly list every provider you want to keep; providers not listed will be hidden:
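One possible shape, assuming `enabled_providers` is a top-level array of provider IDs (verify against your OpenCode version's config schema):

```json
{
  "enabled_providers": ["codex-lb", "openai", "anthropic"]
}
```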
Verify Configuration

Test your setup:

- Open http://localhost:2455
- Check Dashboard for usage metrics
- Confirm requests are being logged
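A quick command-line check is also possible; this sketch assumes the dashboard and the OpenAI-compatible API are both served on port 2455:

```shell
# Dashboard should answer with an HTTP status line
curl -I http://localhost:2455

# The API should return a JSON list of models
curl http://localhost:2455/v1/models
```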
Troubleshooting
Error: Failed to load provider
The `@ai-sdk/openai-compatible` package may be missing. Install it manually, or rely on OpenCode's built-in package manager:
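A manual install might look like this (whether OpenCode resolves the package automatically depends on your version, so treat this as a fallback):

```shell
# Install the adapter package referenced by the "npm" field
npm install @ai-sdk/openai-compatible
```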
Error: Connection refused
Ensure Codex-LB is running. If using Docker, check the container:
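For example (the container name `codex-lb` is a placeholder; use whatever name your deployment gives it):

```shell
# Local install: the dashboard port should respond
curl -I http://localhost:2455

# Docker: confirm the container is up and inspect recent logs
docker ps --filter "name=codex-lb"
docker logs --tail 50 codex-lb
```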
Error: 401 Unauthorized
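Codex-LB rejected the request's credentials. If API key authentication is enabled on the server, the provider's `options` must supply a matching key; a sketch using an environment variable (the variable name is a placeholder):

```json
"options": {
  "baseURL": "http://localhost:2455/v1",
  "apiKey": "{env:CODEX_LB_API_KEY}"
}
```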
Provider doesn't appear in selector
If `codex-lb` doesn't show up:

- Verify JSON syntax is correct (no trailing commas)
- Check OpenCode logs for config parsing errors
- If using `enabled_providers`, ensure `codex-lb` is listed
- Restart OpenCode
Model not found
The requested model isn't available:

- Check which models the instance exposes
- Verify at least one account supports the model
- Update the `models` config to match the available models
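To list the model IDs the instance exposes, you can query the models endpoint directly (port assumed from the dashboard URL):

```shell
curl http://localhost:2455/v1/models
```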
Advanced Configuration
Reasoning Effort Levels
For models with reasoning capabilities, configure the effort level:
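One way this might look, assuming the reasoning fields from the table above live in the model's `options` (the model ID is a placeholder):

```json
"models": {
  "gpt-5.1-codex": {
    "name": "GPT-5.1 Codex",
    "options": {
      "reasoning": true,
      "reasoningEffort": "high"
    }
  }
}
```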
Remote Access

If Codex-LB is running on a different machine:
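Point `baseURL` at that host instead of localhost, for example (the address is a placeholder):

```json
"options": {
  "baseURL": "http://192.168.1.50:2455/v1"
}
```

Make sure the port is reachable from the machine running OpenCode.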
Custom Headers

Add custom headers for advanced use cases:
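A sketch, assuming the provider `options` accept a `headers` map (the header name is a placeholder):

```json
"options": {
  "baseURL": "http://localhost:2455/v1",
  "headers": {
    "X-Request-Source": "opencode"
  }
}
```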
VS Code Extension

If you use OpenCode's VS Code extension, it reads the same `~/.config/opencode/opencode.json` file, so the configuration above works for both CLI and VS Code usage.
After updating the config:
- Reload the VS Code window (Cmd/Ctrl + Shift + P → "Reload Window")
- Check the OpenCode output panel for any errors
- Verify `codex-lb` appears in the model selector
Next Steps
- API Keys: Create and manage API keys for authentication
- Model Routing: Configure intelligent model routing
- Chat Completions API: Explore the `/v1/chat/completions` endpoint
- Usage Tracking: Monitor OpenCode usage in the dashboard