Get up and running with LLM Gateway in just a few minutes. This guide will walk you through creating an account, getting your API key, and making your first request.

Prerequisites

Before you begin, you’ll need:
  • An LLM Gateway account (sign up at llmgateway.io)
  • curl or your favorite HTTP client
  • An OpenAI or Anthropic API key (optional, for testing)
If you prefer to self-host, see the self-hosting guide instead.

Get started

1. Create your account

Visit llmgateway.io and sign up for a free account. You can use:
  • Email and password
  • GitHub OAuth
  • Google OAuth
  • Passkeys (WebAuthn)
After signing up, you’ll be redirected to the dashboard.
2. Create a project

Projects help you organize your API keys and track usage separately for different applications.
  1. Click New Project in the dashboard
  2. Enter a project name (e.g., “My First Project”)
  3. Select a project mode:
    • API Keys - Use your own provider API keys
    • Credits - Use pre-paid LLM Gateway credits
    • Hybrid - Use both API keys and credits
  4. Click Create Project
For this quickstart, select “API Keys” mode. You’ll add your provider keys in the next step.
3. Add a provider key

To route requests through LLM Gateway, you need to add at least one provider API key.
  1. In your project dashboard, navigate to Provider Keys
  2. Click Add Provider Key
  3. Select a provider (e.g., OpenAI)
  4. Paste your OpenAI API key
  5. Click Save
LLM Gateway will validate your key automatically. Once validated, you’re ready to make requests!
4. Generate an API key

Generate an LLM Gateway API key to authenticate your requests.
  1. Navigate to API Keys in your project
  2. Click Create API Key
  3. Give it a name (e.g., “Development Key”)
  4. (Optional) Set usage limits or IAM rules
  5. Click Create
  6. Copy the API key immediately - you won’t be able to see it again!
Store your API key securely. Never commit it to version control or share it publicly.
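One common way to keep the key out of version control is to load it from an environment variable. A minimal Python sketch (the variable name LLM_GATEWAY_API_KEY is just a convention, not something the gateway requires):

```python
import os

def load_api_key() -> str:
    """Read the LLM Gateway key from the environment instead of hard-coding it."""
    key = os.environ.get("LLM_GATEWAY_API_KEY")
    if not key:
        raise RuntimeError("Set the LLM_GATEWAY_API_KEY environment variable first")
    return key
```

Export the variable once in your shell (or a local .env file excluded from git) and every script can pick it up without the key ever appearing in source code.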
5. Make your first request

Now you’re ready to make your first LLM request through the gateway!
cURL
curl -X POST https://api.llmgateway.io/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_LLM_GATEWAY_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {
        "role": "user",
        "content": "Hello! Can you explain what an API gateway is in one sentence?"
      }
    ]
  }'
Replace YOUR_LLM_GATEWAY_API_KEY with the API key you created in the previous step. You should receive a response similar to:
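The same request can be issued from Python with only the standard library. The sketch below mirrors the cURL example above; it builds the request object but only sends it when you uncomment the last lines, so you can inspect the payload first:

```python
import json
import urllib.request

API_KEY = "YOUR_LLM_GATEWAY_API_KEY"  # replace with your real key

# Same payload as the cURL example above.
payload = {
    "model": "gpt-4o",
    "messages": [
        {
            "role": "user",
            "content": "Hello! Can you explain what an API gateway is in one sentence?",
        }
    ],
}

req = urllib.request.Request(
    "https://api.llmgateway.io/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)

# Uncomment to actually send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```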
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "An API gateway is a server that acts as an intermediary between clients and backend services, handling request routing, authentication, rate limiting, and other cross-cutting concerns."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 23,
    "completion_tokens": 38,
    "total_tokens": 61
  }
}
The response format is identical to OpenAI’s API, making it easy to switch between providers or use LLM Gateway as a drop-in replacement.
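Because the shape matches OpenAI's chat completion schema, pulling fields out of the response is straightforward. A small Python sketch using the sample response above (abbreviated here):

```python
import json

# The sample response from above, as a JSON string (content shortened).
raw = """{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "gpt-4o",
  "choices": [
    {"index": 0,
     "message": {"role": "assistant", "content": "An API gateway is a server that acts as an intermediary..."},
     "finish_reason": "stop"}
  ],
  "usage": {"prompt_tokens": 23, "completion_tokens": 38, "total_tokens": 61}
}"""

data = json.loads(raw)
answer = data["choices"][0]["message"]["content"]   # the assistant's reply
total_tokens = data["usage"]["total_tokens"]        # 61
print(answer)
print(total_tokens)
```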
6. View your analytics

Check the dashboard to see your request logs and usage metrics:
  1. Navigate to Activity in your project
  2. View token usage, costs, and response times
  3. Filter by date range, model, or API key
  4. Export data for further analysis
Analytics are updated in real-time. You should see your test request appear immediately in the activity feed.

What’s next?

Now that you’ve made your first request, explore these features:

Enable caching

Reduce costs and latency by caching responses

Set up guardrails

Add content filters and safety policies

Use with SDKs

Integrate with OpenAI SDK, LangChain, or other frameworks

Try the playground

Test models interactively in the browser

Authentication options

LLM Gateway supports two authentication methods:

Authorization header

Authorization: Bearer llm_sk_...

x-api-key header

x-api-key: llm_sk_...
Both methods work identically. Use whichever is more convenient for your setup.
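Since the two methods are interchangeable, a client can expose the choice as a simple option. A minimal Python sketch (the helper name and the llm_sk_... placeholder are illustrative, not part of the API):

```python
def auth_headers(api_key: str, use_bearer: bool = True) -> dict:
    """Build request headers for either LLM Gateway authentication method."""
    if use_bearer:
        return {"Authorization": f"Bearer {api_key}"}
    return {"x-api-key": api_key}
```

Merge the returned dict into your request headers alongside Content-Type.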

Common issues

Authentication errors
  • Check that your API key is correct
  • Ensure you’re using the Authorization: Bearer header (or x-api-key)
  • Verify your API key hasn’t expired or been revoked

Missing or invalid provider key
  • Add at least one provider key in your project settings
  • Ensure the provider key is validated (green checkmark)
  • Check that you have credits or an active provider key for the requested model

Model not found
  • Verify the model name is correct (e.g., gpt-4o, claude-3-5-sonnet-20241022)
  • Check that your provider key supports the requested model
  • Use GET /v1/models to list available models

Rate limiting
  • You may have hit your usage limits (check project settings)
  • Your provider may be rate-limiting you (check provider dashboard)
  • Consider upgrading your plan or adding more provider keys
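To check which model names are valid before debugging further, you can query the models endpoint mentioned above. A Python sketch that builds the GET request; the commented-out lines assume the response follows OpenAI's /v1/models shape (a "data" array of model objects with "id" fields), so treat that as an assumption to verify:

```python
import urllib.request

API_KEY = "YOUR_LLM_GATEWAY_API_KEY"  # replace with your real key

models_req = urllib.request.Request(
    "https://api.llmgateway.io/v1/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    method="GET",
)

# Uncomment to send and print the model IDs:
# import json
# with urllib.request.urlopen(models_req) as resp:
#     for model in json.load(resp)["data"]:
#         print(model["id"])
```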

Need help?
