Overview
The models endpoint returns a list of available AI models accessible through your Codex-LB instance. Each model includes metadata about capabilities, reasoning support, and configuration options.

Endpoint
GET /backend-api/codex/models
Retrieves the list of available models.

Base URL: https://your-codex-lb-instance.com
Query Parameters
None.

Response
- object (string): always returns "list"
- data (array): array of model objects
Example Response
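The response body did not survive extraction here; the sketch below assumes the common list-response shape (an object field set to "list" plus a data array) and uses the field names referenced in the Notes section (priority, supported_in_api, prefer_websockets). The model id and all values are purely illustrative:

```json
{
  "object": "list",
  "data": [
    {
      "id": "example-model",
      "context_window": 128000,
      "priority": 1,
      "supported_in_api": true,
      "prefer_websockets": false
    }
  ]
}
```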
Example Request
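A minimal request sketch in Python using only the standard library. The base URL and API key are placeholders, and the Bearer scheme is an assumption (the Authentication section below names the Authorization header but not the scheme):

```python
import json
import urllib.request

BASE_URL = "https://your-codex-lb-instance.com"  # your Codex-LB instance
API_KEY = "YOUR_API_KEY"  # placeholder


def list_models(base_url: str, api_key: str) -> dict:
    """GET /backend-api/codex/models, authenticating via the Authorization header.

    The Bearer scheme is an assumption; adjust it if your instance
    expects a different scheme.
    """
    req = urllib.request.Request(
        f"{base_url}/backend-api/codex/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Usage (performs a real network call):
# models = list_models(BASE_URL, API_KEY)
# print([m["id"] for m in models["data"]])
```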
Authentication
This endpoint requires authentication using an API key. Include your API key in the Authorization header:
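For example (assuming a Bearer token scheme, which this section does not spell out):

```
Authorization: Bearer YOUR_API_KEY
```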
API Key Model Filtering
If your API key has restricted access to specific models (configured via the allowed_models field), the response will only include models that your key is authorized to use.
Rate Limiting
This endpoint counts against your API key’s rate limit, even though it doesn’t make requests to upstream AI providers. This enforces fair usage of the Codex-LB infrastructure.

Use Cases
Selecting Models by Capability
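No example survives in this chunk; below is a minimal Python sketch that filters a parsed response for models advertising reasoning support. The supports_reasoning field name is an assumption — the Overview only says model metadata includes reasoning support:

```python
# Illustrative parsed /backend-api/codex/models response; the field
# names (supports_reasoning in particular) are assumptions.
models = {
    "data": [
        {"id": "model-a", "supports_reasoning": True},
        {"id": "model-b", "supports_reasoning": False},
    ]
}


def reasoning_models(response: dict) -> list[str]:
    """Return ids of models whose metadata advertises reasoning support."""
    return [m["id"] for m in response["data"] if m.get("supports_reasoning")]


print(reasoning_models(models))  # -> ['model-a']
```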
Finding Models by Context Window
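Similarly, a sketch for picking models whose context window meets a minimum size; the context_window field name and all values are illustrative assumptions:

```python
# Illustrative parsed response; context_window values are made up.
models = {
    "data": [
        {"id": "small-model", "context_window": 32_000},
        {"id": "large-model", "context_window": 200_000},
    ]
}


def models_with_min_context(response: dict, min_tokens: int) -> list[str]:
    """Return ids of models whose context window is at least min_tokens."""
    return [
        m["id"]
        for m in response["data"]
        if m.get("context_window", 0) >= min_tokens
    ]


print(models_with_min_context(models, 100_000))  # -> ['large-model']
```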
Notes
- Model availability depends on the accounts configured in your Codex-LB instance
- The priority field affects load balancing decisions when multiple models are suitable
- Models with supported_in_api: false are not available via API endpoints
- The prefer_websockets flag indicates the recommended connection type for optimal performance
- Model metadata is cached and refreshed periodically based on your configuration