
Get started in 3 steps

This guide takes you from zero to your first proxied ChatGPT request through Codex-LB.
Step 1: Run Codex-LB with Docker

The fastest way to get started is with Docker. Run these commands to launch Codex-LB:
# Create a volume for persistent data
docker volume create codex-lb-data

# Run Codex-LB
docker run -d --name codex-lb \
  -p 2455:2455 -p 1455:1455 \
  -v codex-lb-data:/var/lib/codex-lb \
  ghcr.io/soju06/codex-lb:latest
Ports:
  • 2455 — Main API and dashboard
  • 1455 — OAuth callback endpoint (required for account linking)
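If you prefer Docker Compose, the same setup can be sketched as follows (the service name and file layout are illustrative, not an official Compose file shipped with the project):

```yaml
services:
  codex-lb:
    image: ghcr.io/soju06/codex-lb:latest
    ports:
      - "2455:2455"   # Main API and dashboard
      - "1455:1455"   # OAuth callback endpoint
    volumes:
      - codex-lb-data:/var/lib/codex-lb
    restart: unless-stopped

volumes:
  codex-lb-data:
```

Save it as docker-compose.yml and start it with docker compose up -d.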
Verify it’s running:
curl http://localhost:2455/health
You should see:
{"status":"ok"}
Step 2: Add a ChatGPT account

Open the dashboard and add your first account:
  1. Navigate to http://localhost:2455
  2. Click Accounts in the sidebar
  3. Click Add Account
  4. Complete the OAuth flow to link your ChatGPT account
You’ll be redirected to OpenAI to sign in; once authenticated, you’re sent back to the dashboard.
Once added, your account will appear in the accounts list with its current status and usage stats.
Step 3: Make your first request

Now test the proxy with a simple request:
curl http://localhost:2455/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.3-codex",
    "messages": [
      {"role": "user", "content": "Say hello!"}
    ]
  }'
API key authentication is disabled by default. To enable it, go to Settings → API Key Auth in the dashboard.
You should receive a response from ChatGPT proxied through Codex-LB. Check the Request Logs page in the dashboard to see your request details.
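If you later enable API key authentication, clients must send the key in an Authorization header. A minimal sketch of the same request using Python’s standard library (the key value below is a placeholder, not a real key from the dashboard):

```python
import json
import urllib.request

# Placeholder — substitute a real key from the dashboard once auth is enabled
API_KEY = "clb_example_key"

payload = {
    "model": "gpt-5.3-codex",
    "messages": [{"role": "user", "content": "Say hello!"}],
}

req = urllib.request.Request(
    "http://localhost:2455/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
)

# With Codex-LB running, send it via urllib.request.urlopen(req);
# here we only confirm the request is well-formed.
print(req.get_header("Authorization"))  # → Bearer clb_example_key
```

While auth is disabled, the Authorization header is simply ignored, so the same code works in both modes.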

What just happened?

You’ve successfully:
  • ✅ Deployed Codex-LB with Docker
  • ✅ Linked a ChatGPT account via OAuth
  • ✅ Proxied a request through Codex-LB
  • ✅ Tracked usage in the dashboard

Next steps

Configure a client

Set up Codex CLI, OpenCode, or another OpenAI-compatible client

Create API keys

Enable authentication and create API keys with rate limits

Add more accounts

Pool multiple ChatGPT accounts for higher capacity

Advanced installation

Explore uvx, local development, and production deployment options

Try with the OpenAI SDK

If you have the OpenAI Python SDK installed, you can test Codex-LB immediately:
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:2455/v1",
    api_key="any-string-works",  # auth disabled by default
)

response = client.chat.completions.create(
    model="gpt-5.3-codex",
    messages=[{"role": "user", "content": "Hello!"}],
)

print(response.choices[0].message.content)
Replace "any-string-works" with a real API key from the dashboard if you enable API key authentication.

Troubleshooting

If port 2455 or 1455 is already taken, map to different ports:
docker run -d --name codex-lb \
  -p 3000:2455 -p 2000:1455 \
  -v codex-lb-data:/var/lib/codex-lb \
  ghcr.io/soju06/codex-lb:latest
Port 1455 is used for OAuth callbacks. If you change it, you must update CODEX_LB_OAUTH_CALLBACK_PORT in your environment variables.
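A remapped run might then look like this (whether CODEX_LB_OAUTH_CALLBACK_PORT should be set to the host-side port, as shown here, is an assumption — consult the advanced installation docs if the OAuth redirect fails):

```shell
docker run -d --name codex-lb \
  -p 3000:2455 -p 2000:1455 \
  -e CODEX_LB_OAUTH_CALLBACK_PORT=2000 \
  -v codex-lb-data:/var/lib/codex-lb \
  ghcr.io/soju06/codex-lb:latest
```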
Models are fetched from your linked ChatGPT accounts. If no models appear:
  1. Ensure at least one account is successfully linked
  2. Wait a moment for the model sync to complete
  3. Check the account status in the dashboard
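Because Codex-LB serves an OpenAI-compatible API, you can also inspect the synced models from the command line (assuming the standard /v1/models listing endpoint is exposed):

```shell
curl http://localhost:2455/v1/models
```

An empty model list here usually points to a linking problem rather than a proxy problem.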

Need more help?

Visit our full troubleshooting guide for common issues and solutions
