OpenClaw is an advanced agent framework. Configure it to use Codex-LB for account pooling and centralized usage tracking.

Endpoint

http://127.0.0.1:2455/v1
OpenClaw uses the standard OpenAI-compatible /v1 endpoint.
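Any OpenAI-compatible client can talk to this endpoint. As a quick sanity check outside OpenClaw, the Python sketch below builds a minimal chat-completion request with the standard library; the model ID and the "dummy" key mirror the configuration on this page, and the actual send is left commented out so the snippet works without a live server:

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:2455/v1"

# Minimal OpenAI-compatible chat-completion payload.
payload = {
    "model": "gpt-5.3-codex",
    "messages": [{"role": "user", "content": "Say hello"}],
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        # Any value works when Codex-LB auth is disabled.
        "Authorization": "Bearer dummy",
    },
)

# With Codex-LB running locally, uncomment to send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
print(req.full_url)
```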

Configuration

Edit your OpenClaw config file at ~/.openclaw/openclaw.json:
Use this configuration when API key authentication is disabled (default):
~/.openclaw/openclaw.json
{
  "agents": {
    "defaults": {
      "model": { "primary": "codex-lb/gpt-5.3-codex" }
    }
  },
  "models": {
    "mode": "merge",
    "providers": {
      "codex-lb": {
        "baseUrl": "http://127.0.0.1:2455/v1",
        "apiKey": "dummy",   // any value works when auth is disabled
        "api": "openai-completions",
        "models": [
          { "id": "gpt-5.3-codex", "name": "GPT-5.3 Codex" },
          { "id": "gpt-5.3-codex-spark", "name": "GPT-5.3 Codex Spark" }
        ]
      }
    }
  }
}
When API key auth is disabled, OpenClaw still requires an apiKey field. Any string value works (e.g., "dummy").

Configuration Fields

Field     Description                                       Required
baseUrl   Codex-LB /v1 endpoint                             Yes
apiKey    API key, a ${ENV_VAR} reference, or "dummy"       Yes
api       Must be "openai-completions"                      Yes
models    Array of model configurations                     Yes
mode      "merge" to combine with other providers           No
primary   Default model ID (set under agents, not the provider)   Yes

Model Configuration

Define all models available in your Codex-LB instance:
"models": [
  {
    "id": "gpt-5.3-codex",
    "name": "GPT-5.3 Codex"
  },
  {
    "id": "gpt-5.3-codex-spark",
    "name": "GPT-5.3 Codex Spark"
  },
  {
    "id": "gpt-4o",
    "name": "GPT-4o"
  },
  {
    "id": "gpt-4o-mini",
    "name": "GPT-4o Mini"
  }
]
You can reference these models in agent configurations:
"agents": {
  "defaults": {
    "model": {
      "primary": "codex-lb/gpt-5.3-codex",
      "fallback": "codex-lb/gpt-4o"
    }
  },
  "researcher": {
    "model": { "primary": "codex-lb/gpt-5.3-codex-spark" }
  },
  "writer": {
    "model": { "primary": "codex-lb/gpt-4o" }
  }
}

Provider Modes

OpenClaw supports two provider modes:
  • merge: combines the codex-lb provider with your other configured providers (used in the example above)
  • replace: uses only the providers defined in your config
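If you later add other providers, merge keeps codex-lb alongside them. Assuming replace behaves as its name suggests and drops all other providers, a replace-style block would look like:

```json
"models": {
  "mode": "replace",
  "providers": {
    "codex-lb": { /* ... same provider block as above ... */ }
  }
}
```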

Verify Configuration

Test your setup:
# Start OpenClaw
openclaw

# Run a test agent task
openclaw run --agent defaults "Write a hello world function"
Verify in the Codex-LB dashboard:
  1. Open http://localhost:2455
  2. Check Dashboard for usage metrics
  3. Confirm requests are being logged under the correct API key

Troubleshooting

Ensure Codex-LB is running:
curl http://127.0.0.1:2455/v1/models
If using Docker:
docker ps | grep codex-lb
docker logs codex-lb
If API key auth is enabled but your key is missing or invalid:
  1. Verify the environment variable is set:
    echo $CODEX_LB_API_KEY
    
  2. Check the key is valid in the dashboard
  3. Ensure the apiKey field uses ${CODEX_LB_API_KEY} syntax
  4. Restart OpenClaw after setting the environment variable
If you see an authentication error while auth is disabled:
  1. Ensure apiKey is set to any string (e.g., "dummy")
  2. OpenClaw requires the field even when auth is disabled
If the codex-lb provider doesn’t show up in OpenClaw:
  1. Verify JSON syntax is correct (no trailing commas)
  2. Check OpenClaw logs for config parsing errors:
    openclaw --verbose
    
  3. Ensure mode is set correctly (merge or replace)
The requested model isn’t available:
  1. Check available models:
    curl http://127.0.0.1:2455/v1/models
    
  2. Verify the model ID matches exactly (case-sensitive)
  3. Ensure at least one account supports the model
  4. Update the models array to include the correct ID
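The case-sensitive match in step 2 is easy to trip over. The sketch below checks configured IDs against the served list; the /v1/models response is hardcoded here for illustration (with Codex-LB running you would fetch the data[].id values from the endpoint):

```python
# Hypothetical check: flag configured model IDs absent from the server's list.
# `served` stands in for the data[].id values returned by GET /v1/models.
configured = ["gpt-5.3-codex", "gpt-5.3-Codex-Spark"]  # note the wrong casing
served = ["gpt-5.3-codex", "gpt-5.3-codex-spark", "gpt-4o"]

# Membership comparison is case-sensitive, just like model lookup.
missing = [m for m in configured if m not in served]
print(missing)  # → ['gpt-5.3-Codex-Spark']
```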
If agents aren’t using Codex-LB:
  1. Check the primary model is prefixed with codex-lb/
  2. Verify agent-specific configs don’t override with other providers
  3. Use openclaw config show to debug resolved configuration

Advanced Configuration

Per-Agent Models

Configure different models for different agent types:
"agents": {
  "defaults": {
    "model": { "primary": "codex-lb/gpt-4o" }
  },
  "coding": {
    "model": {
      "primary": "codex-lb/gpt-5.3-codex",
      "fallback": "codex-lb/gpt-4o"
    }
  },
  "research": {
    "model": { "primary": "codex-lb/gpt-5.3-codex-spark" }
  },
  "chat": {
    "model": { "primary": "codex-lb/gpt-4o-mini" }
  }
}

Environment-Specific Configs

Use different configs for development vs. production:
# Development (local Codex-LB)
export CODEX_LB_BASE_URL="http://127.0.0.1:2455/v1"
export CODEX_LB_API_KEY="sk-clb-dev-..."

# Production (remote Codex-LB)
export CODEX_LB_BASE_URL="https://codex-lb.company.com/v1"
export CODEX_LB_API_KEY="sk-clb-prod-..."
Reference in config:
"baseUrl": "${CODEX_LB_BASE_URL}",
"apiKey": "${CODEX_LB_API_KEY}"

Remote Access

If Codex-LB is running on a different machine:
"codex-lb": {
  "baseUrl": "https://your-server.com/v1",
  "apiKey": "${CODEX_LB_API_KEY}",
  "api": "openai-completions",
  "models": [ /* ... */ ]
}
When exposing Codex-LB remotely:
  • Always enable API key authentication
  • Use HTTPS with a reverse proxy (nginx, Caddy)
  • Configure firewall rules to restrict access
  • See Production Deployment
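For the HTTPS recommendation above, a minimal nginx sketch that terminates TLS and forwards to a local Codex-LB instance (certificate paths are hypothetical; adjust for your setup):

```nginx
server {
    listen 443 ssl;
    server_name codex-lb.company.com;

    # Hypothetical certificate paths.
    ssl_certificate     /etc/ssl/certs/codex-lb.pem;
    ssl_certificate_key /etc/ssl/private/codex-lb.key;

    location / {
        proxy_pass http://127.0.0.1:2455;
        proxy_set_header Host $host;
        # Disable buffering so streamed (SSE) completions are not delayed.
        proxy_buffering off;
    }
}
```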

Next Steps

  • API Keys: create and manage API keys for authentication
  • Rate Limiting: configure rate limits per key or account
  • Chat Completions API: explore the /v1/chat/completions endpoint
  • Usage Tracking: monitor OpenClaw usage in the dashboard
