Codex CLI is a powerful terminal-based AI coding assistant. Configure it to use Codex-LB for account pooling and usage tracking.

Endpoint

http://127.0.0.1:2455/backend-api/codex
Codex CLI uses the /backend-api/codex endpoint, not /v1. This endpoint supports the Codex wire API format with /responses/compact support.

Configuration

Edit your Codex CLI config file at ~/.codex/config.toml:
Use this configuration when API key authentication is disabled (default):
~/.codex/config.toml
model = "gpt-5.3-codex"
model_reasoning_effort = "xhigh"
model_provider = "codex-lb"

[model_providers.codex-lb]
name = "OpenAI"  # required — enables remote /responses/compact
base_url = "http://127.0.0.1:2455/backend-api/codex"
wire_api = "responses"
The name = "OpenAI" field is required for proper wire API detection.
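As a minimal sketch, the provider table above can be appended to an existing config with a heredoc. The path and field values come from the docs; note that top-level keys such as model and model_provider must appear before any [table] in TOML, so set those near the top of the file by hand:

```shell
# Sketch: append just the [model_providers.codex-lb] table to the config.
# CONFIG defaults to the real path; point it elsewhere to try this safely.
CONFIG="${CONFIG:-$HOME/.codex/config.toml}"
mkdir -p "$(dirname "$CONFIG")"
cat >> "$CONFIG" <<'EOF'

[model_providers.codex-lb]
name = "OpenAI"  # required for wire API detection
base_url = "http://127.0.0.1:2455/backend-api/codex"
wire_api = "responses"
EOF
```

Remember that top-level keys placed after a [table] header belong to that table in TOML, which is why the sketch appends only the table itself.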

Configuration Fields

Field                    Description                                        Required
model                    Model ID to use (e.g., gpt-5.3-codex)              Yes
model_reasoning_effort   Reasoning effort level: low, medium, high, xhigh   No
model_provider           Provider identifier (use codex-lb)                 Yes
name                     Must be "OpenAI" for wire API compatibility        Yes
base_url                 Codex-LB backend API endpoint                      Yes
wire_api                 Must be "responses" for Codex wire format          Yes
env_key                  Environment variable name for the API key          Only if auth enabled

Migrating from Direct OpenAI

If you were previously using OpenAI directly, old sessions won’t appear in codex resume because they’re tagged with a different model_provider. Re-tag your existing sessions to make them appear:
# Update all JSONL session files.
# Note: sed -i '' is the macOS/BSD form; on GNU/Linux, use sed -i with no argument.
find ~/.codex/sessions -name '*.jsonl' \
  -exec sed -i '' 's/"model_provider":"openai"/"model_provider":"codex-lb"/g' {} +
Make a backup before modifying session files:
cp -r ~/.codex/sessions ~/.codex/sessions.backup
cp ~/.codex/state_5.sqlite ~/.codex/state_5.sqlite.backup
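The substitution can also be sanity-checked on a throwaway file before touching real sessions. A minimal sketch (the sample session line is illustrative, not the real session schema):

```shell
# Sanity-check the re-tag on a throwaway file first.
tmp=$(mktemp -d)
printf '%s\n' '{"model_provider":"openai","id":"demo"}' > "$tmp/session.jsonl"

# Same substitution as above, written to a new file so nothing is overwritten.
sed 's/"model_provider":"openai"/"model_provider":"codex-lb"/g' \
  "$tmp/session.jsonl" > "$tmp/session.retagged.jsonl"

cat "$tmp/session.retagged.jsonl"
# → {"model_provider":"codex-lb","id":"demo"}
```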

Verify Configuration

Test your setup:
# Start a new session
codex

# At the prompt, try a simple query
> Hello, can you help me?
If configured correctly, you should see:
  • Connection to Codex-LB successful
  • Model responses from your pooled accounts
  • Usage tracked in the Codex-LB dashboard

Troubleshooting

If Codex CLI can't connect to the endpoint, first ensure Codex-LB is running:
curl http://127.0.0.1:2455/backend-api/codex/v1/models
If using Docker:
docker ps | grep codex-lb
docker logs codex-lb
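The reachability check can be wrapped in a small probe; a sketch, assuming curl is available and the default endpoint from the Endpoint section above:

```shell
# Probe the Codex-LB models endpoint (default URL assumed from the docs above).
check_codex_lb() {
  local base="${1:-http://127.0.0.1:2455/backend-api/codex}"
  if curl -fsS --max-time 5 "$base/v1/models" > /dev/null 2>&1; then
    echo "reachable"
  else
    echo "not reachable"
  fi
}

check_codex_lb
```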
An authentication error means API key auth is enabled but your key is missing or invalid:
  1. Verify CODEX_LB_API_KEY is set:
    echo $CODEX_LB_API_KEY
    
  2. Check the key is valid in the dashboard
  3. Ensure env_key is configured in config.toml
A model-not-found error means the requested model isn't available:
  1. Check available models:
    curl http://127.0.0.1:2455/backend-api/codex/v1/models
    
  2. Verify at least one account supports the model
  3. Update your model field in config.toml to an available model
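Step 1 can be turned into a quick helper that prints just the model IDs. A sketch, assuming the endpoint returns an OpenAI-style response shape ({"data": [{"id": ...}, ...]}) — the actual shape may differ:

```shell
# Print model IDs from the models endpoint (OpenAI-style "data" array assumed).
list_models() {
  local base="${1:-http://127.0.0.1:2455/backend-api/codex}"
  curl -fsS --max-time 5 "$base/v1/models" \
    | python3 -c 'import json,sys; print("\n".join(m["id"] for m in json.load(sys.stdin).get("data", [])))'
}
```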
Old sessions missing from codex resume mean the model_provider tag on those sessions doesn't match your current provider. See Migrating from Direct OpenAI above.
If you see wire format errors:
  1. Ensure name = "OpenAI" is set (enables wire API detection)
  2. Verify wire_api = "responses" is configured
  3. Check Codex CLI version supports the wire API format
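Steps 1 and 2 amount to checking two lines of the config file; a small self-check sketch (config path assumed, and the grep patterns match the exact spelling used in this guide):

```shell
# Check the two fields the wire API depends on (default config path assumed).
check_config() {
  local cfg="${1:-$HOME/.codex/config.toml}"
  grep -qs 'name = "OpenAI"' "$cfg"        && echo 'name: ok'     || echo 'name: missing'
  grep -qs 'wire_api = "responses"' "$cfg" && echo 'wire_api: ok' || echo 'wire_api: missing'
}

check_config
```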

IDE Extensions

Codex IDE extensions (VS Code, JetBrains, etc.) typically read from the same ~/.codex/config.toml file. The configuration above should work for both CLI and IDE usage. If your IDE extension uses a separate config:
  1. Locate the extension’s config file (check extension settings)
  2. Apply the same model_provider configuration
  3. Restart your IDE

Advanced Configuration

Multiple Providers

You can configure multiple providers and switch between them:
model_provider = "codex-lb"  # default provider

[model_providers.codex-lb]
name = "OpenAI"
base_url = "http://127.0.0.1:2455/backend-api/codex"
wire_api = "responses"

[model_providers.openai-direct]
name = "OpenAI"
base_url = "https://api.openai.com"
env_key = "OPENAI_API_KEY"
Switch providers with:
codex --provider openai-direct

Remote Access

If Codex-LB is running on a different machine:
[model_providers.codex-lb]
name = "OpenAI"
base_url = "http://your-server:2455/backend-api/codex"
wire_api = "responses"
env_key = "CODEX_LB_API_KEY"
When exposing Codex-LB remotely:
  • Always enable API key authentication
  • Use HTTPS with a reverse proxy (nginx, Caddy)
  • Configure firewall rules to restrict access
  • See Production Deployment
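With env_key configured as above, the client supplies the key through its environment; a minimal sketch (the value is an illustrative placeholder, not a real key):

```shell
# Set the key Codex CLI reads via env_key; persist it in your shell profile.
export CODEX_LB_API_KEY="clb-example-0000"  # placeholder value
```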

Next Steps

API Keys

Create and manage API keys for authentication

Rate Limiting

Configure rate limits per key or account

Codex API

Explore the Codex backend API endpoints

Dashboard

Monitor usage and costs in real-time
