Prerequisites

Before installing Ollama API Proxy, ensure you have:
  • Node.js version 18.0.0 or higher
  • npm package manager (comes with Node.js)
  • At least one API key from:
    • OpenAI
    • Google Gemini
    • OpenRouter

Installation Steps

Step 1: Clone the repository

Clone the Ollama API Proxy repository from GitHub:
git clone https://github.com/xrip/ollama-api-proxy.git
cd ollama-api-proxy
Step 2: Install dependencies

Install the required npm packages:
npm install
This will install the following dependencies:
  • @ai-sdk/google - Google Gemini integration
  • @ai-sdk/openai - OpenAI integration
  • ai - AI SDK core
  • dotenv - Environment variable management
Step 3: Configure environment variables

Create a .env file in the project root with your API keys:
OPENAI_API_KEY=your_openai_api_key
GEMINI_API_KEY=your_gemini_api_key
OPENROUTER_API_KEY=your_openrouter_api_key
OPENROUTER_API_URL=https://openrouter.ai/api/v1
PORT=11434
You only need to provide API keys for the providers you plan to use. The proxy will automatically detect which providers are available based on the configured API keys.
The OPENROUTER_API_URL is optional and defaults to https://openrouter.ai/api/v1 if not specified.
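The auto-detection described above can be sketched as follows. This is a minimal illustration of the behavior, not the proxy's actual source; the function name and environment handling are assumptions:

```javascript
// Sketch of provider auto-detection: a provider is considered available
// when its API key is present in the environment. Names here are
// illustrative, not the proxy's internals.
function detectProviders(env) {
  const keyNames = {
    openai: 'OPENAI_API_KEY',
    google: 'GEMINI_API_KEY',
    openrouter: 'OPENROUTER_API_KEY',
  };
  return Object.entries(keyNames)
    .filter(([, name]) => Boolean(env[name]))
    .map(([provider]) => provider);
}

// Example: only OpenAI and Gemini keys are set.
console.log(detectProviders({
  OPENAI_API_KEY: 'sk-...',
  GEMINI_API_KEY: 'AIza...',
})); // → [ 'openai', 'google' ]
```

Providers without a configured key are simply skipped, which is why a single key is enough to start the server.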
Step 4: Start the server

Launch the proxy server:
npm start
You should see output similar to:
🚀 Ollama Proxy with Streaming running on http://localhost:11434
🔑 Providers: openai, google, openrouter
📋 Available models: gpt-4o-mini, gpt-4.1-mini, gemini-2.5-flash, deepseek-r1

Verify Installation

Test that the server is running correctly:
curl http://localhost:11434/api/version
You should receive a JSON response:
{
  "version": "1.0.1e"
}
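If you want to script this check, the response body can be validated programmatically. The helper below is a hedged sketch (its name is made up for illustration): it only checks that a body matches the shape shown above.

```javascript
// Validate a /api/version response body: it should be JSON with a
// non-empty string "version" field, as in the example above.
function isVersionResponse(body) {
  try {
    const parsed = JSON.parse(body);
    return typeof parsed.version === 'string' && parsed.version.length > 0;
  } catch {
    return false;
  }
}

console.log(isVersionResponse('{"version":"1.0.1e"}')); // → true
console.log(isVersionResponse('not json'));             // → false
```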

Custom Model Configuration

You can customize available models by creating a models.json file in the project root:
{
  "my-custom-gpt": { 
    "provider": "openai", 
    "model": "gpt-4o-mini" 
  },
  "my-gemini-pro": { 
    "provider": "google", 
    "model": "gemini-pro" 
  },
  "my-openrouter-model": { 
    "provider": "openrouter", 
    "model": "mistralai/mistral-7b-instruct-v0.2" 
  }
}
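Conceptually, each top-level key in models.json is an alias that resolves to a provider/model pair. A minimal sketch of that lookup (illustrative only; `resolveModel` is a hypothetical name, not the proxy's API):

```javascript
// A models.json-style mapping, mirroring the example above: each alias
// points at a provider and the upstream model name.
const models = {
  'my-custom-gpt': { provider: 'openai', model: 'gpt-4o-mini' },
  'my-gemini-pro': { provider: 'google', model: 'gemini-pro' },
};

// Resolve an alias to its provider/model entry, failing loudly on
// unknown aliases. Illustrative only; the real proxy may differ.
function resolveModel(alias, mapping) {
  const entry = mapping[alias];
  if (!entry) throw new Error(`Unknown model alias: ${alias}`);
  return entry;
}

console.log(resolveModel('my-custom-gpt', models));
// → { provider: 'openai', model: 'gpt-4o-mini' }
```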
If no API keys are configured, the server will exit with an error:
❌ No API keys found. Set OPENAI_API_KEY, GEMINI_API_KEY, or OPENROUTER_API_KEY

Development Mode

For development with hot reloading (requires Bun):
bun run dev

Next Steps

  • Configure JetBrains: set up JetBrains AI Assistant with the proxy
  • API Reference: explore the available API endpoints