Overview
Cursor IDE is an AI-powered code editor that supports OpenAI-compatible API endpoints. You can configure Cursor to use CLI Proxy API as a custom model provider, giving you access to your Google/ChatGPT/Claude OAuth subscriptions through Cursor’s interface.
Configuration
Start CLI Proxy API
Ensure CLI Proxy API is running. The server listens on http://localhost:8317 by default.
Open Cursor Settings
In Cursor IDE:
- Open Settings (Cmd+, on macOS, Ctrl+, on Windows/Linux)
- Navigate to Models → OpenAI API Key
Configure API Endpoint
Set up the custom API endpoint:
- API Key: Use any key from the api-keys list in your config.yaml
- Base URL: http://localhost:8317/v1
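For reference, the matching section of config.yaml might look like the sketch below; the key value is a placeholder, and api-keys is the only field name taken from this guide:

```yaml
# config.yaml — keys that clients such as Cursor may present as their API key.
# Any key listed here is accepted; the value itself is a placeholder.
api-keys:
  - your-api-key-1
```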
Select Models
In the Cursor chat interface, you can now select models from your CLI Proxy API providers:
- Gemini models: gemini-2.5-pro, gemini-2.5-flash, etc.
- Claude models: claude-sonnet-4, claude-opus-4, etc.
- OpenAI models: gpt-5, gpt-5-mini, etc.
- Custom models: any model from your OpenAI-compatible providers
Configuration Examples
Using Gemini OAuth
If you have Gemini CLI OAuth configured in config.yaml, Gemini models are available in Cursor.
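A minimal config.yaml sketch for this setup; api-keys and debug are the only field names taken from this guide, so treat the rest as illustrative and verify it against your CLI Proxy API version:

```yaml
# Minimal sketch — verify field names against your CLI Proxy API version.
debug: false
api-keys:
  - your-api-key-1
# Gemini OAuth credentials are created by the provider login flow
# (cliproxyapi gemini login), not written into this file by hand.
```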
Using Claude OAuth
If you have Claude Code OAuth configured in config.yaml, Claude models are available the same way.
Using Multiple Providers
With multiple providers configured in config.yaml, you can switch between them simply by changing the model selection in Cursor.
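As a hedged sketch, a custom OpenAI-compatible provider entry might look like this; the field names under openai-compatibility are assumptions to check against your version's configuration reference, and the key values are placeholders:

```yaml
# Sketch — field names under openai-compatibility are assumptions;
# check them against your CLI Proxy API configuration reference.
api-keys:
  - your-api-key-1
openai-compatibility:
  - name: openrouter
    base-url: https://openrouter.ai/api/v1
    api-keys:
      - sk-or-placeholder
    models:
      - name: anthropic/claude-sonnet-4
        alias: claude-sonnet-4
```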
Advanced Configuration
Model Prefixes
If you have multiple credentials, model-name prefixes let you target a specific credential from Cursor's model picker.
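The idea can be sketched as below; the key names here are assumptions, not the documented schema, so consult the CLI Proxy API configuration reference for the real fields:

```yaml
# Illustrative only — key names are assumptions, not the documented schema.
# The idea: each credential gets a prefix so Cursor can target it by name,
# e.g. work/gemini-2.5-pro vs personal/gemini-2.5-pro.
auths:
  - prefix: work
  - prefix: personal
```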
Custom Endpoints
For HTTPS or custom ports, adjust config.yaml and update the Base URL in Cursor to match.
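For example, to move the proxy off the default port (the port key is an assumption; http://localhost:8317 is this guide's default):

```yaml
# Sketch — "port" is an assumed key name; verify against your config reference.
port: 9000
```

Cursor's Base URL must then change to http://localhost:9000/v1.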
Features
Streaming Responses
Cursor supports streaming responses, which work seamlessly with CLI Proxy API:
- Real-time code generation
- Progressive responses for chat
- Instant feedback on AI suggestions
Function Calling
If your selected model supports function calling (e.g., Gemini, OpenAI), Cursor can leverage this for:
- Code analysis
- File operations
- Terminal commands
Multimodal Input
For models that support images (e.g., Gemini, Claude):
- Attach screenshots to chat
- Analyze UI designs
- Debug visual issues
Troubleshooting
Connection Refused
If Cursor shows “Connection refused”:
- Verify CLI Proxy API is running: curl http://localhost:8317/v1/models
- Check that the port in your config matches the Base URL
- Ensure no firewall is blocking localhost connections
Invalid API Key
If you see “Invalid API key”:
- Verify the API key exists under api-keys in your config.yaml
- Check for leading/trailing whitespace
- Restart CLI Proxy API after config changes
Model Not Available
If a model doesn’t appear in Cursor:
- Authenticate with the provider first (e.g., cliproxyapi gemini login)
- Verify the provider is configured correctly
- Check the /v1/models endpoint to see the available models: curl http://localhost:8317/v1/models
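The endpoint returns an OpenAI-style model list; a short sketch of pulling the model IDs out of it (the payload below is a made-up sample, not real proxy output):

```python
import json

# Made-up sample of an OpenAI-style /v1/models response body.
sample = json.dumps({
    "object": "list",
    "data": [
        {"id": "gemini-2.5-pro", "object": "model"},
        {"id": "claude-sonnet-4", "object": "model"},
    ],
})

# In practice you would fetch the body with:
#   curl http://localhost:8317/v1/models -H "Authorization: Bearer your-api-key-1"
models = [m["id"] for m in json.loads(sample)["data"]]
print(models)
```

Any model ID listed here should be selectable in Cursor; if one is missing, its provider is not authenticated.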
Rate Limiting
If you hit rate limits:
- Configure multiple accounts for round-robin load balancing
- Use the routing.strategy option in config.yaml
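A hedged config.yaml fragment; routing.strategy is named in this guide, but the value shown is an assumed example, so check your version's documentation for the supported strategies:

```yaml
# Sketch — the strategy value is an assumed example; check your version
# of CLI Proxy API for the supported strategies.
routing:
  strategy: round-robin
```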
Best Practices
- Use OAuth providers when possible for better quota limits
- Configure multiple accounts for load balancing
- Enable debug logging during initial setup: debug: true
- Use HTTPS in production environments with TLS certificates
- Restrict API keys by using different keys for different projects
Example Workflow
Configure Cursor
- Base URL: http://localhost:8317/v1
- API Key: your-api-key-1
- Model: gemini-2.5-pro (or any available model)
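Under the hood, Cursor talks to the proxy with ordinary OpenAI-style chat-completion requests. This sketch builds one by hand so you can sanity-check the endpoint outside Cursor; sending it requires the server to be running, so that step is commented out:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8317/v1"
API_KEY = "your-api-key-1"  # any key from api-keys in config.yaml

# OpenAI-style chat-completion payload, same shape Cursor sends.
payload = {
    "model": "gemini-2.5-pro",
    "messages": [{"role": "user", "content": "Say hello in one word."}],
    "stream": False,
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)

# Requires CLI Proxy API to be running locally:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

If this request succeeds from a terminal but Cursor still fails, the problem is in Cursor's settings (Base URL or API Key) rather than the proxy.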
See Also
- Cline Integration - Another VS Code extension that works with CLI Proxy API
- OpenRouter Integration - Add custom OpenAI-compatible providers
- Amp CLI Integration - Use Amp CLI with your OAuth subscriptions