
Overview

The npx method allows you to run Ollama API Proxy without cloning the repository or installing it globally. This is the fastest way to get started.
npx automatically downloads and executes the latest version of the package, making it perfect for quick testing or one-time usage.

Prerequisites

  • Node.js version 18.0.0 or higher
  • At least one API key from OpenAI, Google Gemini, or OpenRouter
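
To confirm the Node.js prerequisite is met, a quick shell check (a sketch, assuming node is on your PATH) can compare the reported major version against 18:

```shell
# Extract the major version from `node --version` output (e.g. v20.11.1 -> 20)
version=$(node --version)
major=${version#v}        # strip the leading "v"
major=${major%%.*}        # keep only the major component
if [ "$major" -ge 18 ]; then
  echo "Node.js $version is supported"
else
  echo "Node.js $version is too old; install 18.0.0 or newer" >&2
fi
```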

Quick Start

1. Create configuration directory

Create a directory for your configuration files:
mkdir ollama-proxy-config
cd ollama-proxy-config

2. Set up environment variables

Create a .env file with your API keys:
OPENAI_API_KEY=your_openai_api_key
GEMINI_API_KEY=your_gemini_api_key
OPENROUTER_API_KEY=your_openrouter_api_key
OPENROUTER_API_URL=https://openrouter.ai/api/v1
PORT=11434
The proxy will load the .env file from your current working directory using the dotenv package.

3. Run the proxy server

Execute the proxy using npx:
npx ollama-api-proxy
The first time you run this command, npx will download the package; subsequent runs use the cached version.

Expected output:
🚀 Ollama Proxy with Streaming running on http://localhost:11434
🔑 Providers: openai, google, openrouter
📋 Available models: gpt-4o-mini, gpt-4.1-mini, gemini-2.5-flash, deepseek-r1

Advanced Usage

Specify Package Version

Run a specific version of the proxy by pinning it in the package spec:
npx ollama-api-proxy@<version>

Use Latest Version

Force npx to fetch the latest version:
npx --yes ollama-api-proxy@latest

Run with Custom Port

Specify a custom port via environment variable:
PORT=8080 npx ollama-api-proxy

Custom Model Configuration

Create a models.json file in your working directory to customize available models:
models.json
{
  "gpt-4o": { 
    "provider": "openai", 
    "model": "gpt-4o" 
  },
  "gemini-flash": { 
    "provider": "google", 
    "model": "gemini-2.5-flash" 
  },
  "deepseek": { 
    "provider": "openrouter", 
    "model": "deepseek/deepseek-r1-0528:free" 
  }
}
The proxy automatically loads models.json from the current working directory (see src/index.js:124).
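
Since Node.js is already a prerequisite, you can sanity-check models.json before launching. This is a sketch: it only verifies that every entry has the provider and model fields shown in the example above.

```shell
# Verify every entry in models.json defines both "provider" and "model"
node -e '
  const models = require("./models.json");  // resolved from the current working directory
  for (const [name, cfg] of Object.entries(models)) {
    if (!cfg.provider || !cfg.model) throw new Error("Invalid entry: " + name);
  }
  console.log("models.json OK: " + Object.keys(models).length + " model(s)");
'
```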

Environment Variables

Variable             Required      Default                        Description
OPENAI_API_KEY       Conditional*  -                              OpenAI API key
GEMINI_API_KEY       Conditional*  -                              Google Gemini API key
OPENROUTER_API_KEY   Conditional*  -                              OpenRouter API key
OPENROUTER_API_URL   No            https://openrouter.ai/api/v1   OpenRouter API endpoint
PORT                 No            11434                          Server port
NODE_ENV             No            -                              Environment mode (production or development)
*At least one API key is required. The proxy will exit with an error if no keys are configured.

Verify Installation

Test the proxy server:
curl http://localhost:11434/api/version
Response:
{
  "version": "1.0.1"
}
List available models:
curl http://localhost:11434/api/tags
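
If you have jq installed, you can extract just the model names from the /api/tags response. This sketch assumes the proxy is running locally and returns the Ollama-style shape with a models array of objects that have a name field:

```shell
# Print one model name per line from the proxy's /api/tags endpoint
curl -s http://localhost:11434/api/tags | jq -r '.models[].name'
```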

Using with Bun

If you have Bun installed (version 1.2.0 or higher), you can use bunx for faster execution:
bunx ollama-api-proxy
Bun starts up faster than Node.js, so the proxy launches more quickly under bunx.

Troubleshooting

No API Keys Found

If you see this error:
❌ No API keys found. Set OPENAI_API_KEY, GEMINI_API_KEY, or OPENROUTER_API_KEY
Ensure your .env file exists in the current working directory and contains at least one valid API key.
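
To see which provider keys are visible in your current shell, you can loop over the three expected variable names (a sketch; it reports presence only and never prints the key values):

```shell
# Report which provider API keys are set in the environment
for key in OPENAI_API_KEY GEMINI_API_KEY OPENROUTER_API_KEY; do
  if [ -n "$(printenv "$key")" ]; then
    echo "$key: set"
  else
    echo "$key: missing"
  fi
done
```

Remember that variables defined only in .env are loaded by the proxy itself, so they will show as missing here unless you also export them in your shell.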

Port Already in Use

If port 11434 is already in use:
PORT=8080 npx ollama-api-proxy

Package Not Found

If npx cannot find the package, verify the package name:
npm view ollama-api-proxy

Next Steps

  • Docker Installation: Deploy using Docker containers
  • Usage Guide: Configure JetBrains AI Assistant
