Supported Endpoints
Codex-LB exposes two main endpoint types:

OpenAI v1
Standard OpenAI-compatible /v1 endpoints: http://127.0.0.1:2455/v1

Codex Backend API
Codex-specific /backend-api/codex endpoints: http://127.0.0.1:2455/backend-api/codex

Supported Clients
Codex-LB works seamlessly with these popular clients:

| Client | Endpoint | Documentation |
|---|---|---|
| Codex CLI | http://127.0.0.1:2455/backend-api/codex | Setup Guide |
| OpenCode | http://127.0.0.1:2455/v1 | Setup Guide |
| OpenClaw | http://127.0.0.1:2455/v1 | Setup Guide |
| OpenAI Python SDK | http://127.0.0.1:2455/v1 | Setup Guide |
| Any OpenAI-compatible client | http://127.0.0.1:2455/v1 | API Reference |
Authentication
API key authentication is disabled by default. When disabled, any client can connect without credentials. To enable API key authentication, go to Settings → API Key Auth in the dashboard.
See API Key Authentication for details.
- Create an API key in the dashboard (API Keys → Create)
- Configure your client to pass the key as a Bearer token:
- The key is shown only once at creation — save it securely
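For instance, Python's standard library can illustrate the header shape a client must send. This is only a sketch: the key value is a made-up placeholder, and the request is built but never sent.

```python
import urllib.request

# Placeholder key for illustration; a real key comes from the dashboard.
api_key = "clb_example_key"

# Build (but don't send) a request carrying the key as a Bearer token --
# the same Authorization header any OpenAI-compatible client sends.
req = urllib.request.Request(
    "http://127.0.0.1:2455/v1/chat/completions",
    headers={"Authorization": f"Bearer {api_key}"},
)
print(req.get_header("Authorization"))  # → Bearer clb_example_key
```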
Configuration Pattern
All clients follow a similar setup pattern:

Set the base URL
Point the client at Codex-LB's endpoint:
- /v1 for OpenAI-compatible clients
- /backend-api/codex for Codex CLI
Configure authentication (optional)
If API key auth is enabled, provide your key:
- Via environment variable (recommended)
- Directly in config (less secure)
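As a sketch of the environment-variable approach, a client can read both settings at startup. The OPENAI_* names below are a common convention, not something Codex-LB mandates; check your client's docs for the exact variables it honors.

```python
import os

# Read the base URL and key from the environment, falling back to the
# local Codex-LB default. When auth is disabled, no key is needed.
base_url = os.environ.get("OPENAI_BASE_URL", "http://127.0.0.1:2455/v1")
api_key = os.environ.get("OPENAI_API_KEY")  # None when auth is disabled

headers = {}
if api_key:
    headers["Authorization"] = f"Bearer {api_key}"

print(base_url, headers)
```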
Next Steps
- Codex CLI: configure the Codex CLI and IDE extensions
- OpenCode: set up the OpenCode AI coding assistant
- OpenClaw: configure the OpenClaw agent framework
- Python SDK: use the OpenAI Python SDK with Codex-LB
Troubleshooting
Connection refused errors
Ensure Codex-LB is running and accessible. If using Docker, verify that the port mappings expose the service.
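A quick way to confirm the listener is reachable is a plain TCP connect to the port. A minimal sketch using the standard library, assuming the default local host and port:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if port_open("127.0.0.1", 2455):
    print("Codex-LB is reachable")
else:
    print("Connection refused: is Codex-LB running, and is port 2455 mapped?")
```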
Authentication errors
If you see 401 Unauthorized:
- Check if API key auth is enabled in Settings
- Verify your API key is valid and not expired
- Ensure the key is passed correctly as a Bearer token
- Check that the key has permission for the requested model
Model not found
If a model isn’t available:
- Verify the model exists in your dashboard
- Check that at least one account supports the model
- If using API keys with model restrictions, ensure the key allows that model
- Run model sync: Settings → Sync Models
Rate limit errors
If you hit rate limits:
- Check your API key’s rate limits in the dashboard
- Review account limits and quotas
- Consider adding more accounts to the pool
- Adjust rate limits in API Keys → Edit
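On the client side, 429 responses are usually handled with retry and exponential backoff. A minimal sketch: the send_request function here is a stand-in that simulates two 429s before succeeding; a real client would inspect the HTTP status returned by Codex-LB instead.

```python
import time

def send_request(_attempts={"n": 0}):
    """Stand-in for a real HTTP call: returns 429 twice, then 200."""
    _attempts["n"] += 1
    return 429 if _attempts["n"] <= 2 else 200

def call_with_backoff(max_retries: int = 5, base_delay: float = 0.01) -> int:
    """Retry on 429 with exponential backoff; return the final status."""
    for attempt in range(max_retries):
        status = send_request()
        if status != 429:
            return status
        time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...
    return 429

print(call_with_backoff())  # → 200
```

Real deployments often add jitter to the delay so many clients retrying at once do not synchronize their requests.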