Build AI agents with composable harnesses
LLM Gateway is an agent framework built on three simple ideas: a harness yields events, harnesses compose, and events form a graph.
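The three ideas can be sketched in a few lines of Python. The `Event` class and `echo_harness` below are illustrative stand-ins, not LLM Gateway's actual API: a harness is an async generator that yields events, and each event's `parent` link is what lets events form a graph.

```python
import asyncio
from dataclasses import dataclass

# Hypothetical event type -- real LLM Gateway events will differ.
@dataclass
class Event:
    kind: str
    data: str
    parent: "Event | None" = None  # parent links form the event graph

async def echo_harness(prompt: str):
    """A harness is just an async generator that yields events."""
    start = Event("start", prompt)
    yield start
    yield Event("text", prompt.upper(), parent=start)

async def main():
    return [e async for e in echo_harness("hello")]

events = asyncio.run(main())
```

Because a harness is only an async generator, anything that can iterate one (another harness, an orchestrator, a renderer) can consume it with a plain `async for`.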
Key Features
Everything you need to build production-ready AI agents
Composable Harnesses
Multi-Agent Orchestration
Event-Driven Architecture
Multiple LLM Providers
Human-in-the-Loop
Recursive Processing
Quick Start
Get up and running in minutes
Install dependencies
Explore the architecture
- Harnesses - The fundamental async generator primitive
- Events - The event types that flow through the system
- Composition - How harnesses wrap and layer behavior
- Conversation Graph - How events form an immutable graph
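Composition, in particular, falls out of the async-generator shape almost for free. The sketch below is a toy, not the library's combinators: a wrapper harness consumes an inner harness and transforms each event as it passes through, which is the "wrap and layer behavior" pattern in miniature.

```python
import asyncio

async def numbers():
    # Inner harness: yields raw events (plain ints here, for brevity).
    for n in (1, 2, 3):
        yield n

def doubled(inner):
    # Composition: a harness that wraps another harness and
    # transforms each event as it flows through.
    async def harness():
        async for event in inner:
            yield event * 2
    return harness()

async def main():
    return [e async for e in doubled(numbers())]

result = asyncio.run(main())
```

Layering more behavior is just more wrapping: `retried(doubled(numbers()))` would nest the same way, with each layer seeing the events the layer below it yields.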
Explore by Use Case
Pick only the components your application needs
→ Stream an LLM call
Use a provider harness to stream responses from any LLM with a unified interface.
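As a rough sketch of that shape (the `fake_provider` below is a stand-in, not a real provider harness): a provider streams chunks through the same async-generator interface, and the caller consumes them with `async for` regardless of which LLM sits behind it.

```python
import asyncio

async def fake_provider(prompt: str):
    # Stand-in for a provider harness; a real one would stream
    # chunks from an LLM API behind this same async-generator shape.
    for chunk in ("Hel", "lo, ", prompt):
        await asyncio.sleep(0)  # simulate network latency
        yield chunk

async def main():
    parts = []
    async for chunk in fake_provider("world"):
        parts.append(chunk)
    return "".join(parts)

text = asyncio.run(main())
```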
→ Single agent with tools
Wrap a provider with the agent harness to get an agentic loop with built-in tool execution.
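The agentic loop reduces to a simple cycle: call the model, execute any tool it requests, feed the result back, repeat until it answers. The names below (`fake_model`, `TOOLS`, the tuple-based message format) are invented for the sketch and are not LLM Gateway's API.

```python
import asyncio

TOOLS = {"add": lambda a, b: a + b}

async def fake_model(messages):
    # Stand-in model: requests the add tool once, then answers.
    if not any(m[0] == "tool" for m in messages):
        return ("tool_call", ("add", (2, 3)))
    return ("answer", f"The sum is {messages[-1][1]}")

async def agent(prompt: str):
    """Sketch of an agentic loop: call the model, run any requested
    tool, feed the result back, and repeat until the model answers."""
    messages = [("user", prompt)]
    while True:
        kind, payload = await fake_model(messages)
        if kind == "tool_call":
            name, args = payload
            result = TOOLS[name](*args)
            yield ("tool", name, result)
            messages.append(("tool", result))
        else:
            yield ("answer", payload)
            return

async def main():
    return [e async for e in agent("what is 2 + 3?")]

events = asyncio.run(main())
```

Note the agent is itself a harness: it wraps the model call and yields tool and answer events, so anything downstream can observe the loop step by step.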
→ Multi-agent orchestration
Use the orchestrator to manage concurrent agents and coordinate permission relays.
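A minimal picture of concurrent coordination, with all names invented for the sketch: pump each harness's events into a shared queue, then yield them in arrival order. A real orchestrator layers permission relays and lifecycle management on top of this merge.

```python
import asyncio

async def worker(name: str, delay: float):
    # Each worker is a small harness yielding labelled events.
    await asyncio.sleep(delay)
    yield (name, "done")

async def orchestrate(harnesses):
    """Merge events from several harnesses as they arrive --
    a toy version of what an orchestrator does."""
    queue: asyncio.Queue = asyncio.Queue()
    DONE = object()

    async def pump(h):
        async for event in h:
            await queue.put(event)
        await queue.put(DONE)

    tasks = [asyncio.create_task(pump(h)) for h in harnesses]
    remaining = len(tasks)
    while remaining:
        event = await queue.get()
        if event is DONE:
            remaining -= 1
        else:
            yield event

async def main():
    agents = [worker("fast", 0.01), worker("slow", 0.05)]
    return [e async for e in orchestrate(agents)]

events = asyncio.run(main())
```

Crucially, the merged stream is itself a harness again, so an orchestrator composes with everything else in the system.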
→ Client-side rendering
Use graph projections to render threaded conversations with nested subagent branches.
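To make "projection" concrete (the tuple event format below is hypothetical): since every event records its parent, walking the parent links turns the flat event list into an indented thread, with subagent branches nested under the event that spawned them.

```python
# Events as (id, parent_id, text); parent links form the conversation graph.
events = [
    (1, None, "user: plan a trip"),
    (2, 1, "agent: delegating to searcher"),
    (3, 2, "searcher: found 3 flights"),
    (4, 1, "agent: here is the plan"),
]

def project(events, parent=None, depth=0):
    """Project the event graph into an indented thread,
    nesting each branch under its parent event."""
    lines = []
    for eid, pid, text in events:
        if pid == parent:
            lines.append("  " * depth + text)
            lines.extend(project(events, parent=eid, depth=depth + 1))
    return lines

thread = project(events)
```

Because the graph is immutable, a client can re-run a projection at any time (or run several different ones) without coordinating with the agents producing the events.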
Ready to build?
Start with the quickstart guide or dive into the API reference to explore all available harnesses, tools, and utilities.
