OpenCode is an advanced AI coding assistant. Configure it to use Codex-LB for account pooling and centralized usage tracking.

Endpoint

http://127.0.0.1:2455/v1
OpenCode uses the standard OpenAI-compatible /v1 endpoint.

Configuration

Edit your OpenCode config file at ~/.config/opencode/opencode.json. Use this configuration when API key authentication is disabled (the default):
~/.config/opencode/opencode.json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "codex-lb": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "codex-lb",
      "options": {
        "baseURL": "http://127.0.0.1:2455/v1"
      },
      "models": {
        "gpt-5.3-codex": {
          "name": "GPT-5.3 Codex",
          "reasoning": true,
          "interleaved": { "field": "reasoning_details" },
          "options": { "reasoningEffort": "medium" }
        }
      }
    }
  },
  "model": "codex-lb/gpt-5.3-codex"
}
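OpenCode expects strict JSON, so a quick way to catch trailing commas or stray comments before restarting is to run the file through a strict JSON parser. A minimal sketch (the helper name is illustrative, not part of OpenCode):

```python
import json

def validate_opencode_config(text: str):
    """Parse strict JSON and return the configured codex-lb model IDs.

    Raises json.JSONDecodeError on trailing commas, comments, etc.
    """
    config = json.loads(text)
    provider = config.get("provider", {}).get("codex-lb", {})
    return list(provider.get("models", {}))

# A trimmed-down version of the config above:
sample = """{
  "provider": {
    "codex-lb": {
      "npm": "@ai-sdk/openai-compatible",
      "options": { "baseURL": "http://127.0.0.1:2455/v1" },
      "models": { "gpt-5.3-codex": { "name": "GPT-5.3 Codex" } }
    }
  },
  "model": "codex-lb/gpt-5.3-codex"
}"""
print(validate_opencode_config(sample))  # ['gpt-5.3-codex']
```

To check the real file, read ~/.config/opencode/opencode.json and pass its contents to the same function.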

Configuration Fields

Field           | Description                          | Required
npm             | NPM package for provider adapter     | Yes
name            | Provider display name                | Yes
baseURL         | Codex-LB /v1 endpoint                | Yes
apiKey          | API key or {env:VAR_NAME}            | Only if auth enabled
models          | Model configurations                 | Yes
reasoning       | Enable reasoning mode                | For reasoning models
interleaved     | Reasoning output field               | For reasoning models
reasoningEffort | Effort level: low, medium, or high   | For reasoning models

Multiple Models

You can configure multiple models from your Codex-LB instance:
"models": {
  "gpt-5.3-codex": {
    "name": "GPT-5.3 Codex",
    "reasoning": true,
    "interleaved": { "field": "reasoning_details" },
    "options": { "reasoningEffort": "medium" }
  },
  "gpt-5.3-codex-spark": {
    "name": "GPT-5.3 Codex Spark",
    "reasoning": true,
    "interleaved": { "field": "reasoning_details" },
    "options": { "reasoningEffort": "high" }
  },
  "gpt-4o": {
    "name": "GPT-4o"
  }
}
Switch models in OpenCode using the model selector.
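The top-level "model" field references a model as provider key plus model key, joined with a slash (e.g. codex-lb/gpt-5.3-codex above). A short sketch of that naming convention, using a hypothetical helper:

```python
def model_identifiers(provider_key, models):
    # OpenCode references models as "<provider>/<model>",
    # matching the default "model": "codex-lb/gpt-5.3-codex".
    return [f"{provider_key}/{model_id}" for model_id in models]

# The three models from the snippet above:
models = {
    "gpt-5.3-codex": {"name": "GPT-5.3 Codex"},
    "gpt-5.3-codex-spark": {"name": "GPT-5.3 Codex Spark"},
    "gpt-4o": {"name": "GPT-4o"},
}
print(model_identifiers("codex-lb", models))
```

Any of these identifiers is valid as the "model" default or in the model selector.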

Preserving Default Providers

The configuration above adds codex-lb alongside OpenCode’s default providers (OpenAI, Anthropic, etc.).
If you use enabled_providers, you must explicitly list every provider you want to keep:
"enabled_providers": ["codex-lb", "openai", "anthropic"]
Providers not listed will be hidden.
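The filtering behavior can be illustrated with a short sketch (an illustration of the documented semantics, not OpenCode's actual implementation):

```python
def visible_providers(all_providers, enabled=None):
    # With no enabled_providers key, every configured provider stays visible;
    # otherwise only the explicitly listed providers are kept.
    if enabled is None:
        return all_providers
    return [p for p in all_providers if p in enabled]

providers = ["codex-lb", "openai", "anthropic"]
print(visible_providers(providers, None))                # all three visible
print(visible_providers(providers, ["codex-lb"]))        # only codex-lb
```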
To only use Codex-LB and disable other providers:
{
  "enabled_providers": ["codex-lb"],
  "provider": { /* codex-lb config */ },
  "model": "codex-lb/gpt-5.3-codex"
}

Verify Configuration

Test your setup:
# Start OpenCode
opencode

# Check that codex-lb appears in the provider list
# Try a simple query
Verify in the Codex-LB dashboard:
  1. Open http://localhost:2455
  2. Check Dashboard for usage metrics
  3. Confirm requests are being logged
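The dashboard check can be complemented from the command line: /v1/models returns an OpenAI-style model list, and a short script can confirm your configured model is served. A sketch, assuming the standard OpenAI list response shape (the live request only succeeds while Codex-LB is running):

```python
def model_available(payload, model_id):
    # OpenAI-style list responses look like:
    #   {"object": "list", "data": [{"id": "..."}, ...]}
    return any(m.get("id") == model_id for m in payload.get("data", []))

# Live check (requires Codex-LB to be running):
#   import json, urllib.request
#   payload = json.load(urllib.request.urlopen("http://127.0.0.1:2455/v1/models"))

# Offline example with the expected response shape:
sample = {"object": "list", "data": [{"id": "gpt-5.3-codex"}, {"id": "gpt-4o"}]}
print(model_available(sample, "gpt-5.3-codex"))        # True
print(model_available(sample, "gpt-5.3-codex-spark"))  # False
```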

Troubleshooting

The @ai-sdk/openai-compatible package may be missing:
# Install the OpenAI-compatible adapter
npm install -g @ai-sdk/openai-compatible
Or if using OpenCode’s built-in package manager:
opencode install @ai-sdk/openai-compatible
Ensure Codex-LB is running:
curl http://127.0.0.1:2455/v1/models
If using Docker:
docker ps | grep codex-lb
docker logs codex-lb
API key auth is enabled but your key is missing or invalid:
  1. Verify the environment variable is set:
    echo $CODEX_LB_API_KEY
    
  2. Check the key is valid in the dashboard
  3. Ensure the apiKey field uses {env:CODEX_LB_API_KEY} syntax
  4. Restart OpenCode after setting the environment variable
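The {env:VAR_NAME} placeholder is resolved from the environment when OpenCode loads the config. A rough equivalent of that substitution (an assumption about the exact semantics; note how an unset variable silently yields an empty key, which is why step 1 above matters):

```python
import os
import re

def resolve_env_placeholders(value):
    # Replace {env:VAR_NAME} with the variable's value,
    # falling back to an empty string when it is unset.
    return re.sub(
        r"\{env:([A-Za-z_][A-Za-z0-9_]*)\}",
        lambda m: os.environ.get(m.group(1), ""),
        value,
    )

os.environ["CODEX_LB_API_KEY"] = "sk-example"  # placeholder value
print(resolve_env_placeholders("{env:CODEX_LB_API_KEY}"))  # sk-example
```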
If codex-lb doesn’t show up:
  1. Verify JSON syntax is correct (no trailing commas)
  2. Check OpenCode logs for config parsing errors
  3. If using enabled_providers, ensure codex-lb is listed
  4. Restart OpenCode
The requested model isn’t available:
  1. Check available models:
    curl http://127.0.0.1:2455/v1/models
    
  2. Verify at least one account supports the model
  3. Update the models config to match available models

Advanced Configuration

Reasoning Effort Levels

For models with reasoning capabilities, configure the effort level:
reasoningEffort accepts one of three values; set exactly one in the model's options:

"options": { "reasoningEffort": "low" }     (fastest, least thorough)
"options": { "reasoningEffort": "medium" }  (balanced; the default)
"options": { "reasoningEffort": "high" }    (slower, more thorough)

Remote Access

If Codex-LB is running on a different machine:
"options": {
  "baseURL": "https://your-server.com/v1",
  "apiKey": "{env:CODEX_LB_API_KEY}"
}
When exposing Codex-LB remotely:
  • Always enable API key authentication
  • Use HTTPS with a reverse proxy
  • Configure firewall rules
  • See Production Deployment

Custom Headers

Add custom headers for advanced use cases:
"options": {
  "baseURL": "http://127.0.0.1:2455/v1",
  "headers": {
    "X-Custom-Header": "value"
  }
}

VS Code Extension

If using OpenCode’s VS Code extension, it reads from the same ~/.config/opencode/opencode.json file. The configuration above should work for both CLI and VS Code usage. After updating the config:
  1. Reload VS Code window (Cmd/Ctrl + Shift + P → “Reload Window”)
  2. Check the OpenCode output panel for any errors
  3. Verify codex-lb appears in the model selector

Next Steps

API Keys

Create and manage API keys for authentication

Model Routing

Configure intelligent model routing

Chat Completions API

Explore the /v1/chat/completions endpoint

Usage Tracking

Monitor OpenCode usage in the dashboard
