
Overview

LLM Gateway uses environment variables for provider API keys, server configuration, and logging. Create a .env file in your project root based on .env.example.

Quick Start

cp .env.example .env
Edit the .env file with your configuration:
# Required: At least one provider API key
OPENROUTER_API_KEY=your_key_here
ZEN_API_KEY=your_key_here

# Optional: Server configuration
DEFAULT_MODEL=glm-4.7
PORT=4000
LOG_LEVEL=I

# Optional: Development configuration
VITE_PORT=4001
VITE_BACKEND_URL=http://localhost:4000

Provider API Keys

OPENROUTER_API_KEY

Type: string

API key for OpenRouter provider access.
Required when: Using OpenRouter harness
Obtain from: openrouter.ai
OPENROUTER_API_KEY=sk-or-v1-...

ZEN_API_KEY

Type: string

API key for Zen provider access.
Required when: Using Zen harness (default)
Obtain from: Your Zen API provider
ZEN_API_KEY=zen_...

ANTHROPIC_API_KEY

Type: string

API key for Anthropic Claude models.
Required when: Using Anthropic harness
Obtain from: console.anthropic.com
ANTHROPIC_API_KEY=sk-ant-...
Not in .env.example but supported by the Anthropic harness.

OPENAI_API_KEY

Type: string

API key for OpenAI models.
Required when: Using OpenAI harness
Obtain from: platform.openai.com
OPENAI_API_KEY=sk-...
Not in .env.example but supported by the OpenAI harness.
You must configure at least one provider API key for the server to function.
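As a sketch of how this requirement might be enforced at server startup (hasProviderKey is a hypothetical helper for illustration, not the gateway's actual code):

```typescript
// Provider key names recognized by the gateway, per the sections above.
const PROVIDER_KEYS = [
  "OPENROUTER_API_KEY",
  "ZEN_API_KEY",
  "ANTHROPIC_API_KEY",
  "OPENAI_API_KEY",
];

// Returns true if at least one provider key is set and non-empty.
function hasProviderKey(env: Record<string, string | undefined>): boolean {
  return PROVIDER_KEYS.some((name) => (env[name] ?? "").trim().length > 0);
}
```

At startup the server could call hasProviderKey(process.env) and exit with an error message when it returns false.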

Server Configuration

DEFAULT_MODEL

Type: string
Default: none

Default model identifier used when clients don’t specify a model in requests.

Examples:
  • glm-4.7 - Zen model
  • kimi-k2.5 - Kimi model for long context
  • claude-3-5-sonnet-20241022 - Anthropic model
  • gpt-4o - OpenAI model
DEFAULT_MODEL=glm-4.7
The model must be supported by your configured provider. Validate available models via the /models endpoint.

PORT

Type: number
Default: 4000

Port number for the HTTP server to listen on.
PORT=4000
Common values:
  • 4000 - Default development port
  • 3000 - Alternative development port
  • 8080 - Common production port
  • 80 - HTTP (requires elevated permissions)
  • 443 - HTTPS (requires SSL setup)

LOG_LEVEL

Type: string
Default: I

Controls logging verbosity for the server.

Allowed values:

Value  Level  Description
D      Debug  Verbose debugging information
I      Info   General informational messages (default)
W      Warn   Warning messages only
E      Error  Error messages only
LOG_LEVEL=I
Example output:
I abc123 req_start model=glm-4.7
I abc123 req_end dur=2341ms
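If you filter logs in your own tooling, the D/I/W/E levels can be compared by severity. A minimal sketch, assuming the usual threshold semantics (a configured level shows that level and anything more severe); shouldLog is a hypothetical helper, not the gateway's logger:

```typescript
// Severity order for the gateway's log levels: Debug < Info < Warn < Error.
const LEVELS = ["D", "I", "W", "E"] as const;
type Level = (typeof LEVELS)[number];

// True when a message at `messageLevel` passes the configured threshold.
function shouldLog(messageLevel: Level, configured: Level): boolean {
  return LEVELS.indexOf(messageLevel) >= LEVELS.indexOf(configured);
}
```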

Development Configuration

These variables are used by the development environment and client applications.

VITE_PORT

Type: number
Default: 4001

Port number for the Vite development server (web client).
VITE_PORT=4001
Usage:
bun run dev:web  # Starts Vite on this port
Must be different from PORT to avoid conflicts.

VITE_BACKEND_URL

Type: string
Default: http://localhost:4000

URL of the LLM Gateway backend server for client connections.
VITE_BACKEND_URL=http://localhost:4000
Examples:
  • http://localhost:4000 - Local development
  • http://192.168.1.100:4000 - LAN development
  • https://api.yourdomain.com - Production
The web client uses this URL to establish SSE connections for chat streaming.
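Client code can build endpoint URLs from this base rather than hard-coding them. A minimal sketch (backendEndpoint is a hypothetical helper; in a real Vite app the base would come from import.meta.env.VITE_BACKEND_URL):

```typescript
// Resolves an API path against the configured backend base URL,
// falling back to the documented default when the variable is unset.
function backendEndpoint(base: string | undefined, path: string): string {
  return new URL(path, base ?? "http://localhost:4000").toString();
}
```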

Environment-Specific Configurations

# .env.development
ZEN_API_KEY=your_dev_key
DEFAULT_MODEL=glm-4.7
PORT=4000
LOG_LEVEL=D  # Debug logging
VITE_PORT=4001
VITE_BACKEND_URL=http://localhost:4000

Loading Environment Variables

Bun Automatic Loading

Bun automatically loads .env files:
bun run server/index.ts  # Loads .env automatically

Explicit Environment File

Specify a different environment file:
bun --env-file=.env.production run server/index.ts

CLI Development

The CLI client requires explicit environment file loading:
cd clients/cli
bun --env-file=../../.env run index.tsx

Security Best Practices

1. Never Commit API Keys

Add .env to .gitignore to prevent committing secrets:
.env
.env.local
.env.*.local
2. Use Different Keys Per Environment

Maintain separate API keys for development, staging, and production.
3. Rotate Keys Regularly

Periodically rotate API keys and update environment variables.
4. Restrict Key Permissions

Use provider-specific settings to limit key capabilities and spending.
5. Use Secret Management in Production

Consider using secret management services:
  • AWS Secrets Manager
  • HashiCorp Vault
  • Azure Key Vault
  • Google Cloud Secret Manager
Never include actual API keys in documentation, code examples, or public repositories.

Validation

Validate your environment configuration:
# Check if required variables are set
if [ -z "$ZEN_API_KEY" ] && [ -z "$OPENROUTER_API_KEY" ]; then
  echo "Error: At least one provider API key must be set"
  exit 1
fi

# Test server startup
bun run server/index.ts

# Test model availability
curl http://localhost:4000/models

Docker Configuration

When using Docker, pass environment variables via:

docker-compose.yml

version: '3.8'
services:
  llm-gateway:
    build: .
    ports:
      - "4000:4000"
    env_file:
      - .env
    # Or explicit environment:
    environment:
      - ZEN_API_KEY=${ZEN_API_KEY}
      - DEFAULT_MODEL=glm-4.7
      - PORT=4000
      - LOG_LEVEL=I

Docker Run

docker run \
  -p 4000:4000 \
  -e ZEN_API_KEY="$ZEN_API_KEY" \
  -e DEFAULT_MODEL=glm-4.7 \
  -e LOG_LEVEL=I \
  llm-gateway

Troubleshooting

Problem: API key not found or invalid
Solutions:
  • Verify .env file exists in project root
  • Check API key is uncommented and has no spaces
  • Validate key with provider’s API directly
  • Ensure correct provider harness is configured
Problem: Requests use unexpected model
Solutions:
  • Verify DEFAULT_MODEL is set correctly
  • Confirm model is supported: curl http://localhost:4000/models
  • Check client requests include explicit model parameter
  • Review server logs for model validation errors
Problem: Error: listen EADDRINUSE: address already in use
Solutions:
  • Change PORT to an unused port
  • Kill existing process: lsof -ti:4000 | xargs kill
  • Check for other services using the port
Problem: Variables appear undefined at runtime
Solutions:
  • Verify .env file is in the correct directory
  • Check file has no syntax errors (no quotes around values)
  • Use --env-file flag explicitly if not in project root
  • Confirm Bun version supports automatic .env loading

Reference

Complete .env.example

# Provider API Keys (at least one required)
OPENROUTER_API_KEY=
ZEN_API_KEY=

# Server Configuration
DEFAULT_MODEL=
LOG_LEVEL=I

# Network Configuration
PORT=4000
VITE_PORT=4001
VITE_BACKEND_URL=http://localhost:4000

All Available Variables

Variable            Type    Default                Required     Description
OPENROUTER_API_KEY  string  -                      Conditional  OpenRouter API key
ZEN_API_KEY         string  -                      Conditional  Zen API key
ANTHROPIC_API_KEY   string  -                      Conditional  Anthropic API key
OPENAI_API_KEY      string  -                      Conditional  OpenAI API key
DEFAULT_MODEL       string  -                      No           Default model for requests
PORT                number  4000                   No           Server port
LOG_LEVEL           string  I                      No           Logging verbosity (D/I/W/E)
VITE_PORT           number  4001                   No           Vite dev server port
VITE_BACKEND_URL    string  http://localhost:4000  No           Backend URL for clients
At least one provider API key (OPENROUTER_API_KEY, ZEN_API_KEY, ANTHROPIC_API_KEY, or OPENAI_API_KEY) is required.

Next Steps

Server Setup

Learn how to set up and run the server

Configuration

Advanced configuration options
