Route, manage, and analyze your LLM requests

A unified API gateway for multiple LLM providers with built-in analytics, caching, and cost tracking

Quick start

Get up and running with LLM Gateway in minutes

1. Sign up for an account

Visit llmgateway.io to create your account, or self-host the gateway on your own infrastructure.
2. Get your API key

After signing in, navigate to your project settings and generate an API key. This key will authenticate your requests to the gateway.
3. Make your first request

Use the OpenAI-compatible API to route requests through the gateway:
cURL
curl -X POST https://api.llmgateway.io/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LLM_GATEWAY_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Hello, how are you?"}
    ]
  }'
Response
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-4o",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "Hello! I'm doing well, thank you for asking. How can I assist you today?"
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 17,
    "total_tokens": 30
  }
}
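The same request can be sketched in Python using only the standard library. This mirrors the cURL example above (same endpoint, headers, and payload) and assumes the LLM_GATEWAY_API_KEY environment variable is set:

```python
import json
import os
import urllib.request

# OpenAI-compatible chat completion request, mirroring the cURL example.
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
}

req = urllib.request.Request(
    "https://api.llmgateway.io/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('LLM_GATEWAY_API_KEY', '')}",
    },
    method="POST",
)

# Only send when a key is actually configured.
if os.environ.get("LLM_GATEWAY_API_KEY"):
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
        print(body["choices"][0]["message"]["content"])
```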
4. View analytics

Check your dashboard to see request logs, token usage, costs, and performance metrics across all your LLM calls.

Core features

Everything you need to manage your LLM infrastructure

Unified API interface

OpenAI-compatible API that works with all major LLM providers. Drop-in replacement for existing integrations.

Multi-provider support

Connect to OpenAI, Anthropic, Google, AWS Bedrock, and more through a single gateway.

Usage analytics

Track requests, tokens, costs, and performance metrics with detailed dashboards and exportable reports.

Response caching

Reduce costs and latency with intelligent Redis-based response caching across providers.
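One way such caching can work, sketched below with an in-memory dict standing in for Redis. The key derivation and function names here are illustrative, not LLM Gateway's actual scheme: identical requests hash to the same key, so repeats skip the provider entirely.

```python
import hashlib
import json

# Stand-in for Redis: a plain dict mapping cache keys to responses.
cache: dict[str, dict] = {}

def cache_key(payload: dict) -> str:
    """Derive a deterministic key from the request content."""
    canonical = json.dumps(payload, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def cached_completion(payload: dict, call_provider) -> tuple[dict, bool]:
    """Return (response, was_cached). Identical requests skip the provider."""
    key = cache_key(payload)
    if key in cache:
        return cache[key], True
    response = call_provider(payload)
    cache[key] = response
    return response, False

# A fake provider for illustration; the second identical request is a cache hit.
fake_provider = lambda p: {"content": "Hello!", "model": p["model"]}
payload = {"model": "gpt-4o", "messages": [{"role": "user", "content": "Hi"}]}
first, hit1 = cached_completion(payload, fake_provider)
second, hit2 = cached_completion(payload, fake_provider)
```

In a real deployment the dict would be replaced by Redis with a TTL, so cached responses expire rather than growing unboundedly.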

API key management

Generate, rotate, and manage API keys with fine-grained permissions and usage limits.

Guardrails

Implement content filters, rate limits, and safety policies to protect your applications.
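To illustrate the rate-limiting side of guardrails, here is a minimal token-bucket limiter (a conceptual sketch, not LLM Gateway's implementation): each API key gets a bucket that allows a burst of requests and then refills at a steady rate.

```python
import time

class TokenBucket:
    """Allow up to `capacity` requests in a burst, refilled at `rate` per second."""

    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per API key: a burst of 5 requests, refilled at 1 request/second.
bucket = TokenBucket(capacity=5, rate=1.0)
# The first 5 immediate calls pass; further calls are throttled until refill.
results = [bucket.allow() for _ in range(7)]
```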

Explore by topic

Learn how to use LLM Gateway for your use case

Projects & organizations

Organize your work with projects and manage team access with organizations.

Playground

Test and compare models interactively with the built-in playground.

MCP integration

Connect LLM Gateway with Model Context Protocol-compatible tools.

OpenAI SDK

Use the OpenAI Python or Node.js SDK with LLM Gateway.

LangChain

Integrate LLM Gateway with your LangChain applications.

Vercel AI SDK

Build AI-powered apps with Vercel AI SDK and LLM Gateway.

API reference

Complete API documentation for the Gateway and Management APIs

Gateway API

OpenAI-compatible endpoints for chat completions, images, and models.

Management API

Manage API keys, provider keys, projects, organizations, and view activity logs.

Ready to get started?

Start routing your LLM requests through a unified gateway with built-in analytics and caching.

Get Started