This guide will help you set up the Ollama API Proxy and test it with a real request. You’ll be able to use commercial LLMs from providers such as OpenAI, Google (Gemini), and OpenRouter with any tool that supports the Ollama API format.
Prerequisites
Before you begin, ensure you have:
- Node.js 18.0.0 or higher, or Bun 1.2.0 or higher
- At least one API key from:
  - OpenAI
  - Google AI Studio (for Gemini)
  - OpenRouter
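You can confirm your runtime meets the requirement before continuing (only one of Node.js or Bun is needed):

```shell
# Check the installed Node.js version - must be v18.0.0 or higher
node --version
# Bun users can check with: bun --version (must be 1.2.0 or higher)
```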
Quick Start with npx/bunx
Set up environment variables
Create a .env file with your API keys. You only need to include the providers you want to use. Optional settings, such as the server port, can also go in this file.
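A minimal .env might look like this; the variable names match the providers listed in the troubleshooting section below, and the key values shown are placeholders:

```shell
# .env - include only the providers you plan to use
OPENAI_API_KEY=sk-...
GEMINI_API_KEY=...
OPENROUTER_API_KEY=sk-or-...
```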
Verify the server is running
Open a new terminal and check the server status, then list the available models. The model list reflects the API keys you have configured.
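Putting the steps together, the flow might look like the following. The package name `ollama-api-proxy` is assumed from the project title, and the exact response fields may differ:

```shell
# Start the proxy (package name assumed; substitute the real one if it differs)
npx ollama-api-proxy        # or: bunx ollama-api-proxy

# In a new terminal - check the server status
curl http://localhost:11434/api/version
# A JSON response such as {"version":"..."} indicates the proxy is up

# List the models available for your configured API keys
curl http://localhost:11434/api/tags
```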
Configure JetBrains AI Assistant
Now that your proxy is running, you can connect JetBrains AI Assistant to use commercial LLMs.
Open JetBrains AI Assistant settings
In your JetBrains IDE:
- Go to Settings/Preferences → Tools → AI Assistant
- Select Ollama as your provider and point it at http://localhost:11434
Select your model
Choose one of the available models from the dropdown:
- gpt-4o-mini - Fast and efficient OpenAI model (recommended for most tasks)
- gpt-4o - More capable OpenAI model
- gemini-2.5-flash - Google’s fast model with large context
- deepseek-r1 - Free reasoning model via OpenRouter
Supported API Endpoints
The proxy implements the following Ollama-compatible endpoints:
/api/chat
Chat completion with message history support
/api/generate
Single-turn text generation from a prompt
/api/tags
List all available models
/api/version
Get proxy version information
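As a sketch of the request shape, the chat and generate endpoints follow Ollama's API format. The model name below assumes you have an OpenAI key configured:

```shell
# Chat completion with message history (Ollama /api/chat format)
curl http://localhost:11434/api/chat -d '{
  "model": "gpt-4o-mini",
  "messages": [{"role": "user", "content": "Hello!"}],
  "stream": false
}'

# Single-turn generation from a prompt (Ollama /api/generate format)
curl http://localhost:11434/api/generate -d '{
  "model": "gpt-4o-mini",
  "prompt": "Write a haiku about proxies.",
  "stream": false
}'
```

With `"stream": false` each call returns a single JSON object rather than a stream of chunks, which is easier to inspect when testing by hand.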
Next Steps
API Reference
Explore all available API endpoints and parameters
Model Configuration
Customize available models with models.json
Environment Variables
Learn about all configuration options
Installation Methods
Explore alternative installation methods (npm, Docker, etc.)
Troubleshooting
Error: No API keys found
The proxy requires at least one valid API key to start. Make sure you have at least one key set in your .env file.
Error: Model not supported
The requested model doesn’t exist in your configuration. Check the available models, and make sure you have the required API key for the model’s provider.
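For example, to see which models your proxy currently exposes:

```shell
# Returns the models available for your configured API keys
curl http://localhost:11434/api/tags
```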
Error: Provider not available
The model’s provider doesn’t have a valid API key configured. For example:
- gpt-4o-mini requires OPENAI_API_KEY
- gemini-2.5-flash requires GEMINI_API_KEY
- deepseek-r1 requires OPENROUTER_API_KEY
Port already in use
If port 11434 is already in use (e.g., by Ollama itself), change the port in your .env file. Then update your client configuration to use http://localhost:11435.
Connection refused in JetBrains
- Verify the proxy is running: curl http://localhost:11434/api/version
- Check that no firewall is blocking the connection
- Ensure you’re using the correct URL format: http://localhost:11434 (not https)
