Built-in Provider Support
Codex includes built-in support for these providers:

- OpenAI (default) - OpenAI’s models including GPT-4, GPT-5, o-series
- Azure OpenAI - Enterprise Azure deployment
- Anthropic - Claude models via OpenAI-compatible endpoint
- OpenRouter - Access to multiple model providers
- Ollama - Local model inference
- LM Studio - Local model hosting
- Together AI - Fast inference for open models
- Mistral AI - Mistral and Mixtral models
- DeepSeek - DeepSeek models
- Groq - Ultra-fast LLM inference
- xAI - Grok models
- Gemini - Google’s Gemini models
Configuring a Custom Provider
Define custom providers in the `[model_providers]` section:
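A minimal entry might look like the following (the provider id `example` and all values are placeholders):

```toml
# ~/.codex/config.toml
[model_providers.example]
name = "Example Provider"
base_url = "https://api.example.com/v1"
env_key = "EXAMPLE_API_KEY"
```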
Provider Configuration Options
- Friendly display name for the provider
- Base URL for the provider’s OpenAI-compatible API
- Environment variable name that stores the API key
- Help text for obtaining and setting the API key
- Static HTTP headers to include in requests (key-value pairs)
- HTTP headers with values from environment variables (header name → env var name)
- Query parameters to append to API requests
- Whether this provider requires OpenAI authentication (for proxies/gateways)
- Which wire protocol the provider expects (currently only "responses" is supported)
- Whether the provider supports Responses API WebSocket transport
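Taken together, a provider entry using these options might look like the sketch below. The field names follow Codex’s `config.toml` conventions and the values are placeholders; verify the exact keys against your Codex version:

```toml
[model_providers.example]
name = "Example Provider"                    # friendly display name
base_url = "https://api.example.com/v1"      # OpenAI-compatible endpoint
env_key = "EXAMPLE_API_KEY"                  # env var that stores the API key
env_key_instructions = "Create a key at example.com and export EXAMPLE_API_KEY."
http_headers = { "X-Example-Header" = "static-value" }     # static headers
env_http_headers = { "X-Example-Token" = "EXAMPLE_TOKEN" } # header ← env var
query_params = { api-version = "2025-04-01-preview" }      # appended to requests
wire_api = "responses"                       # wire protocol
```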
Common Provider Examples
Ollama (Local Models)
Run models locally with Ollama:
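Assuming Ollama’s default local endpoint (port 11434), a provider entry might look like:

```toml
[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"
```

No API key is needed for a local Ollama instance.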
Azure OpenAI

Use Azure’s OpenAI deployment:
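A sketch, with a placeholder resource name and an illustrative `api-version` (check your Azure deployment for the correct values):

```toml
[model_providers.azure]
name = "Azure OpenAI"
base_url = "https://YOUR_RESOURCE.openai.azure.com/openai"
env_key = "AZURE_OPENAI_API_KEY"
query_params = { api-version = "2025-04-01-preview" }
```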
OpenRouter

Access multiple providers through OpenRouter:
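A sketch of an OpenRouter entry (the env var name is a common convention, not mandated):

```toml
[model_providers.openrouter]
name = "OpenRouter"
base_url = "https://openrouter.ai/api/v1"
env_key = "OPENROUTER_API_KEY"
```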
Anthropic (Claude)

Use Claude models via Anthropic’s API:

Note: Anthropic’s API may require adapter middleware for full OpenAI compatibility. Consider using OpenRouter for easier Claude access.
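With that caveat in mind, a direct entry might look like this (endpoint and env var name are illustrative):

```toml
[model_providers.anthropic]
name = "Anthropic"
base_url = "https://api.anthropic.com/v1"
env_key = "ANTHROPIC_API_KEY"
```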
Together AI
Use open models via Together AI:
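A sketch of a Together AI entry:

```toml
[model_providers.together]
name = "Together AI"
base_url = "https://api.together.xyz/v1"
env_key = "TOGETHER_API_KEY"
```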
DeepSeek

Use DeepSeek models:
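A sketch of a DeepSeek entry:

```toml
[model_providers.deepseek]
name = "DeepSeek"
base_url = "https://api.deepseek.com/v1"
env_key = "DEEPSEEK_API_KEY"
```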
Groq

Fast inference with Groq:
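A sketch of a Groq entry (note Groq’s OpenAI-compatible path includes `/openai`):

```toml
[model_providers.groq]
name = "Groq"
base_url = "https://api.groq.com/openai/v1"
env_key = "GROQ_API_KEY"
```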
Mistral AI

Use Mistral models:
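A sketch of a Mistral AI entry:

```toml
[model_providers.mistral]
name = "Mistral AI"
base_url = "https://api.mistral.ai/v1"
env_key = "MISTRAL_API_KEY"
```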
Advanced Provider Configuration

Custom HTTP Headers
Include static headers in requests:
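For example (header name and value are placeholders):

```toml
[model_providers.example]
name = "Example Provider"
base_url = "https://api.example.com/v1"
http_headers = { "X-Example-Version" = "2024-01-01" }
```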
Dynamic Headers from Environment

Load header values from environment variables:
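For example (placeholders again), each entry maps a header name to the environment variable whose value should be sent:

```toml
[model_providers.example]
name = "Example Provider"
base_url = "https://api.example.com/v1"
env_http_headers = { "X-Example-Token" = "EXAMPLE_TOKEN" }
```

At request time, the value of the `EXAMPLE_TOKEN` environment variable is sent as the `X-Example-Token` header.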
Retry and Timeout Configuration

- Maximum HTTP request retries on failure
- Idle timeout in milliseconds before treating a streaming connection as lost
- Maximum reconnection attempts for dropped streams
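These map onto per-provider settings along these lines (field names follow Codex’s `config.toml` conventions; values are illustrative):

```toml
[model_providers.example]
name = "Example Provider"
base_url = "https://api.example.com/v1"
request_max_retries = 4          # retry failed HTTP requests up to 4 times
stream_idle_timeout_ms = 300000  # treat a stream as lost after 5 minutes of silence
stream_max_retries = 10          # reconnect dropped streams up to 10 times
```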
Switching Providers
You can switch providers in several ways:

In Configuration
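Set the default provider in `config.toml` (the id must match a built-in provider or a `[model_providers.*]` entry; `ollama` is used here as an example):

```toml
# ~/.codex/config.toml
model_provider = "ollama"
```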
Via CLI Flag
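For a single invocation, a config override can be passed on the command line (this assumes the `-c` override flag; check `codex --help` for your version):

```shell
codex -c model_provider="ollama"
```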
Via Environment Variable
Using Profiles
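A profile bundles a provider and model so you can switch both at once. A sketch (profile name and model are examples):

```toml
[profiles.local]
model_provider = "ollama"
model = "llama3.2"   # use a model you have pulled locally
```

Then select it with `codex --profile local`.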
Testing Provider Configuration
Test your provider setup:
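One way to verify a setup, first at the HTTP level and then end to end (variable names are illustrative, and `codex exec` is assumed as the non-interactive entry point):

```shell
# 1. Confirm the endpoint is reachable and lists models
curl "$BASE_URL/models" -H "Authorization: Bearer $YOUR_API_KEY"

# 2. Run a simple non-interactive prompt through Codex
codex exec "say hello"
```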
Troubleshooting

Connection refused or timeout
- Verify the base URL is correct
- Check if the service is running (for local providers)
- Test with curl: `curl $BASE_URL/models`
- Check firewall/network settings
Authentication errors
- Verify the API key environment variable is set
- Check that the environment variable name matches `env_key`
- Ensure the API key has required permissions
- Try authenticating with the provider’s native CLI
API compatibility issues
- Verify the provider implements OpenAI-compatible endpoints
- Check if the model name is valid for the provider
- Review provider documentation for any non-standard behaviors
- Some providers may need middleware for full compatibility
Model not found
- Verify the model name exists on the provider
- Check capitalization and exact spelling
- For local providers (Ollama), ensure model is pulled
- Try listing available models via provider API
Provider Compatibility Notes
While Codex supports any OpenAI-compatible API, some features may have varying support:

- Streaming - Most providers support streaming responses
- Function calling - Required for Codex tool use; verify provider support
- Vision - Image input requires multimodal model support
- Reasoning effort - Only supported by reasoning-capable models (o-series)
- WebSocket transport - Optional; falls back to HTTP streaming
Complete Example
Here’s a full configuration with multiple providers:
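A sketch combining the pieces above into one file (provider ids, model names, and the profile are examples, not requirements):

```toml
# ~/.codex/config.toml
model_provider = "ollama"
model = "llama3.2"

[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"

[model_providers.openrouter]
name = "OpenRouter"
base_url = "https://openrouter.ai/api/v1"
env_key = "OPENROUTER_API_KEY"

[model_providers.groq]
name = "Groq"
base_url = "https://api.groq.com/openai/v1"
env_key = "GROQ_API_KEY"

[profiles.fast]
model_provider = "groq"
model = "llama-3.3-70b-versatile"
```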
Next Steps

MCP Servers
Integrate Model Context Protocol servers
Configuration Reference
Complete reference documentation