# Quickstart
Get up and running with LLM Gateway in just a few minutes. This guide will take you from installation to making your first LLM call with tools.

## What You'll Build

By the end of this quickstart, you'll have:

- A working LLM Gateway setup
- Your first streaming LLM call
- An agent that can execute shell commands
- An understanding of how to add custom tools
## Prerequisites

- Bun 1.0+ installed
- An API key from OpenRouter, Anthropic, or Zen
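Configuration typically lives in a `.env` file. A minimal sketch, assuming the variable names referenced in the Troubleshooting section of this guide (`DEFAULT_MODEL` plus one provider key); adjust to the provider you chose:

```shell
# Hypothetical .env — variable names taken from the Troubleshooting section
ZEN_API_KEY=your-key-here     # or OPENROUTER_API_KEY / ANTHROPIC_API_KEY
DEFAULT_MODEL=glm-4.7         # fallback model when invoke() gets no model
```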
## Your First LLM Call

```ts
import { createGeneratorHarness } from "./packages/ai/harness/providers/zen";

const harness = createGeneratorHarness();

// Stream events from a single LLM call
for await (const event of harness.invoke({
  model: "glm-4.7",
  messages: [{ role: "user", content: "What is the sum of the first 10 primes?" }],
})) {
  if (event.type === "reasoning") process.stderr.write(event.content); // thinking
  if (event.type === "text") process.stdout.write(event.content);
}
```
## An Agent That Can Execute Shell Commands

```ts
import { createAgentHarness } from "./packages/ai/harness/agent";
import { createGeneratorHarness } from "./packages/ai/harness/providers/zen";
import { bashTool } from "./packages/ai/tools";

// Wrap the provider harness in an agent that can execute tools
const agent = createAgentHarness({ harness: createGeneratorHarness() });

for await (const event of agent.invoke({
  model: "glm-4.7",
  messages: [{ role: "user", content: "List the files in this directory" }],
  tools: [bashTool],
  permissions: { allowlist: [{ tool: "bash" }] },
})) {
  if (event.type === "reasoning") process.stderr.write(event.content);
  if (event.type === "text") process.stdout.write(event.content);
  if (event.type === "tool_call") console.log(`\n[calling ${event.name}]`);
  if (event.type === "tool_result") console.log(`[result]`, event.output);
}
```
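The `allowlist` above is a list of `{ tool }` entries. As an illustration only (hypothetical shapes, not the library's actual implementation), a permission check over such a list might look like:

```typescript
// Hypothetical shapes, for illustration — the real library's types may differ
type PermissionRule = { tool: string };
type Permissions = { allowlist?: PermissionRule[] };

// Returns true when the tool is auto-approved by the allowlist.
// With no allowlist at all, nothing is auto-approved (human-in-the-loop).
function isAutoApproved(permissions: Permissions | undefined, tool: string): boolean {
  if (!permissions?.allowlist) return false;
  return permissions.allowlist.some((rule) => rule.tool === tool);
}

console.log(isAutoApproved({ allowlist: [{ tool: "bash" }] }, "bash")); // true
console.log(isAutoApproved({ allowlist: [{ tool: "bash" }] }, "edit")); // false
console.log(isAutoApproved(undefined, "bash")); // false
```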
The `permissions` parameter controls which tools the agent can use. Use `allowlist` for auto-approval, or omit it for human-in-the-loop approval.

## Understanding Events

LLM Gateway is built around a simple event-driven architecture. Every harness yields events as an async generator. Every event carries:

- `runId` - Which LLM call produced this event
- `parentId` - Which run spawned this one (for subagents)
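To make this concrete, here is a hypothetical TypeScript sketch of the event shapes implied by the examples above (the actual types ship with the library and may differ):

```typescript
// Hypothetical event union inferred from the examples — not the library's real types
type GatewayEvent =
  | { type: "reasoning"; content: string; runId: string; parentId?: string }
  | { type: "text"; content: string; runId: string; parentId?: string }
  | { type: "tool_call"; name: string; runId: string; parentId?: string }
  | { type: "tool_result"; output: unknown; runId: string; parentId?: string };

// Narrowing on `type` is all a consumer needs to do
function describe(event: GatewayEvent): string {
  switch (event.type) {
    case "reasoning": return `thinking: ${event.content}`;
    case "text": return `text: ${event.content}`;
    case "tool_call": return `calling ${event.name}`;
    case "tool_result": return `result: ${JSON.stringify(event.output)}`;
  }
}

console.log(describe({ type: "tool_call", name: "bash", runId: "r1" })); // "calling bash"
```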
## Next Steps

- **Add Custom Tools** - Learn how to create your own tools for agents to use
- **Multi-Agent Systems** - Build systems with multiple concurrent agents
- **Client Integration** - Build UIs that consume agent events
- **API Reference** - Explore the full HTTP API
## Common Patterns

### Choosing a Provider

LLM Gateway supports multiple LLM providers.

### Tool Permissions

Control tool execution with fine-grained permissions.

### Human-in-the-Loop

Pause execution for human approval.

## Troubleshooting
### Error: No model specified

Make sure you set `DEFAULT_MODEL` in your `.env` file or pass the `model` parameter to `invoke()`.

### Error: API key not found

Check that your `.env` file has the correct API key variable:

- `ZEN_API_KEY` for Zen
- `OPENROUTER_API_KEY` for OpenRouter
- `ANTHROPIC_API_KEY` for Anthropic
### Tool execution fails silently

Make sure you:

- Passed `tools: [...]` to `invoke()`
- Used `createAgentHarness()`, not just the provider harness
- Set appropriate permissions
### Events not streaming

Ensure you're using `for await` to consume the async generator.
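For instance, with any async generator (a stub here, standing in for a harness), `for await` is what actually drives the stream; calling the generator function without iterating produces nothing:

```typescript
// Stub async generator standing in for harness.invoke() — illustration only
async function* fakeInvoke() {
  yield { type: "text", content: "Hello, " };
  yield { type: "text", content: "world!" };
}

// Wrong: this only creates the generator; no events are consumed.
// const stream = fakeInvoke();

// Right: for await pulls events until the generator is done.
let output = "";
for await (const event of fakeInvoke()) {
  if (event.type === "text") output += event.content;
}
console.log(output); // "Hello, world!"
```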