Installation
This guide covers installing LLM Gateway, configuring your environment, and understanding the project structure.

System Requirements
- Runtime: Bun 1.0+ required (install from bun.sh)
- Operating System: macOS, Linux, or WSL2 (Windows native not tested)
- Node Version: not required; Bun replaces Node.js
- Memory: 2GB+ RAM recommended for running local models
Installation Steps
Core Dependencies
Runtime & Server:
- `hono` - Web framework for HTTP/SSE endpoints
- `effect` - Error handling and retries
- `uuid` - Generate unique IDs for events

Providers:
- `@anthropic-ai/sdk` - Anthropic Claude integration
- `openai` - OpenAI and compatible APIs
- `@openrouter/sdk` - OpenRouter integration

Utilities:
- `picomatch` - Glob pattern matching for permissions
- `zod` - Schema validation and JSON Schema generation
- `oxfmt` - Code formatter

Web Client:
- `react`/`react-dom` - Web client UI
- `@vitejs/plugin-react` - Build tooling
# API Keys - Add at least one
ZEN_API_KEY=your_zen_api_key_here
OPENROUTER_API_KEY=
ANTHROPIC_API_KEY=
# Model Configuration
DEFAULT_MODEL=glm-4.7
# Logging
LOG_LEVEL=I # I=Info, W=Warn, E=Error
# Server Configuration
PORT=4000
VITE_PORT=4001
VITE_BACKEND_URL=http://localhost:4000
You only need one API key to get started. Choose based on your preferred provider:
- Zen: Fastest, supports reasoning, free tier available
- OpenRouter: 100+ models, pay-per-use
- Anthropic: Direct access to Claude models
Project Structure
Understanding the codebase structure will help you navigate and extend LLM Gateway.

Key Modules Explained
Harnesses (`packages/ai/harness/`)

The fundamental primitive: async generators that yield events.
- Provider harnesses make single LLM API calls
- Agent harness wraps providers to add a tool execution loop
- Harnesses compose: `createAgentHarness({ harness: createGeneratorHarness() })`
Tools (`packages/ai/tools/`)

Executable capabilities for agents.
- Each tool has a Zod schema, description, and execute function
- Tools return `{ context, result }`: the context goes to the agent, the result to your code
- Permissions are derived from tool parameters using glob patterns
Client (`packages/ai/client/`)

Client-side state management.
- Events → Hypergraph → Projections → UI
- `projectThread()` for chat UI, `projectMessages()` for API format
- SSE transport for streaming, HTTP transport for relay resolution
Orchestrator (`orchestrator.ts`)

Manages multiple concurrent agents.
- Multiplexes agent streams via `AgentMultiplexer`
- Handles permission relays between agents and humans
- Automatic cleanup when agents complete
Development Commands
LLM Gateway provides several npm scripts for development.

Server Development

The server exposes these endpoints:
- `GET /models` - List available models
- `POST /chat` - SSE streaming chat endpoint
- `POST /chat/relay/:relayId` - Permission resolution
- `POST /summarize` - Conversation summarization
Web Client Development
CLI Client
Testing
Tests only output on failure (quiet success, loud failure). This is intentional; see Development Principles in `CLAUDE.md`.

Code Formatting
Build for Production
Environment Variables Reference
Complete list of environment variables:

- `ZEN_API_KEY` - API key for Zen provider (OpenAI-compatible with reasoning support)
- `OPENROUTER_API_KEY` - API key for OpenRouter (access to 100+ models)
- `ANTHROPIC_API_KEY` - API key for Anthropic Claude models
- `DEFAULT_MODEL` - Default model to use when none is specified in requests
- `LOG_LEVEL` - Logging verbosity: I (Info), W (Warn), E (Error)
- `PORT` - Server port for HTTP/SSE endpoints
- `VITE_PORT` - Vite dev server port for web client
- `VITE_BACKEND_URL` - Backend URL for the web client to connect to
Provider Setup
Detailed setup for each LLM provider:
- Zen
- OpenRouter
- Anthropic
- OpenAI
Zen is the default provider, optimized for speed and with support for reasoning tokens.

Get an API Key:
- Visit opencode.ai/zen
- Sign up for an account
- Generate an API key
- Add it to `.env` as `ZEN_API_KEY=your_zen_api_key_here`

Supported Models:
- `glm-4.7` - Fast, supports reasoning
- `nvidia/nemotron-nano-9b-v2:free` - Free tier
- `kimi-k2.5` - Large context window
Docker Setup (Optional)
Run LLM Gateway in a container:

Dockerfile
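A minimal sketch of what such a Dockerfile might look like, using the official `oven/bun` base image. This is an assumption, not the project's actual Dockerfile; the `dev:server` script and port 4000 are taken from elsewhere in this guide:

```dockerfile
# Sketch only: adjust lockfile name, script, and port to match the repo.
FROM oven/bun:1
WORKDIR /app

# Install dependencies first so they cache independently of source changes.
COPY package.json bun.lockb ./
RUN bun install --frozen-lockfile

COPY . .
EXPOSE 4000
CMD ["bun", "run", "dev:server"]
```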
docker-compose.yml
Troubleshooting
Bun installation fails

Issue: `curl -fsSL https://bun.sh/install | bash` fails

Solutions:
- Use manual installation: download from bun.sh/install
- On macOS: `brew install bun`
- Check system requirements: macOS 11+, Linux kernel 5.1+
- For WSL2: make sure you're using Ubuntu 20.04+
Dependencies fail to install

Issue: `bun install` errors or hangs

Solutions:
- Clear the Bun cache: `rm -rf ~/.bun/install/cache`
- Update Bun: `bun upgrade`
- Check internet connection and firewall
- Try the `--verbose` flag: `bun install --verbose`
Server won't start

Issue: `bun run dev:server` fails

Solutions:
- Check if port 4000 is in use: `lsof -i :4000`
- Verify the `.env` file exists and has a valid API key
- Check logs for specific error messages
- Try a different port: `PORT=3000 bun run dev:server`
API calls return 401 Unauthorized
Tests fail

Issue: `bun test` shows failures

Solutions:
- Ensure API keys are set (tests use real integrations)
- Check that provider APIs are accessible
- Verify no rate limiting is occurring
- Run individual test files to isolate the issue
Next Steps
- Quickstart Guide - Build your first agent in 5 minutes
- Core Concepts - Understand harnesses, events, and composition
- API Reference - Explore HTTP endpoints and types
- Tool Development - Create custom tools for your agents
