Prerequisites
Before you begin, you’ll need:
- An LLM Gateway account (sign up at llmgateway.io)
- curl or your favorite HTTP client
- An OpenAI or Anthropic API key (optional, for testing)
Get started
Create your account
Visit llmgateway.io and sign up for a free account. You can use:
- Email and password
- GitHub OAuth
- Google OAuth
- Passkeys (WebAuthn)
Create a project
Projects help you organize your API keys and track usage separately for different applications.
- Click New Project in the dashboard
- Enter a project name (e.g., “My First Project”)
- Select a project mode:
- API Keys - Use your own provider API keys
- Credits - Use pre-paid LLM Gateway credits
- Hybrid - Use both API keys and credits
- Click Create Project
For this quickstart, select “API Keys” mode. You’ll add your provider keys in the next step.
Add a provider key
To route requests through LLM Gateway, you need to add at least one provider API key.
- In your project dashboard, navigate to Provider Keys
- Click Add Provider Key
- Select a provider (e.g., OpenAI)
- Paste your OpenAI API key
- Click Save
LLM Gateway will validate your key automatically. Once validated, you’re ready to make requests!
Where do I get provider API keys?
- OpenAI: platform.openai.com/api-keys
- Anthropic: console.anthropic.com/settings/keys
- Google: console.cloud.google.com
Generate an API key
Generate an LLM Gateway API key to authenticate your requests.
- Navigate to API Keys in your project
- Click Create API Key
- Give it a name (e.g., “Development Key”)
- (Optional) Set usage limits or IAM rules
- Click Create
- Copy the API key immediately - you won’t be able to see it again!
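Because the key is shown only once, avoid hardcoding it in source files. A common pattern is to read it from an environment variable; the variable name `LLM_GATEWAY_API_KEY` below is just a convention, not something the gateway requires:

```python
import os

# Hypothetical variable name; pick whatever convention your team uses.
api_key = os.environ.get("LLM_GATEWAY_API_KEY", "")
if not api_key:
    print("Warning: LLM_GATEWAY_API_KEY is not set")
```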
Make your first request
Now you’re ready to make your first LLM request through the gateway!
cURL
Replace YOUR_LLM_GATEWAY_API_KEY with the API key you created in the previous step.
Expected response
The response format is identical to OpenAI’s API, making it easy to switch between providers or use LLM Gateway as a drop-in replacement.
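The request can also be built in Python using only the standard library. This is a minimal sketch: the base URL `https://api.llmgateway.io/v1` and the model name are assumptions, so confirm the endpoint in your dashboard and the available models via the models list. The request is constructed but not sent, so you can inspect it first:

```python
import json
import os
import urllib.request

# Assumed base URL; confirm the endpoint in your LLM Gateway dashboard.
BASE_URL = "https://api.llmgateway.io/v1"

# OpenAI-compatible chat completion payload.
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello from LLM Gateway!"}],
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {os.environ.get('LLM_GATEWAY_API_KEY', '')}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Uncomment to actually send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```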
View your analytics
Check the dashboard to see your request logs and usage metrics:
- Navigate to Activity in your project
- View token usage, costs, and response times
- Filter by date range, model, or API key
- Export data for further analysis
Analytics are updated in real-time. You should see your test request appear immediately in the activity feed.
What’s next?
Now that you’ve made your first request, explore these features:
Enable caching
Reduce costs and latency by caching responses
Set up guardrails
Add content filters and safety policies
Use with SDKs
Integrate with OpenAI SDK, LangChain, or other frameworks
Try the playground
Test models interactively in the browser
Authentication options
LLM Gateway supports two authentication methods:
Bearer token (recommended)
x-api-key header
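Both styles can be sketched as plain header dictionaries; the key value here is a placeholder, and either set of headers can be attached to the HTTP client of your choice:

```python
api_key = "YOUR_LLM_GATEWAY_API_KEY"  # placeholder

# Option 1: Bearer token (recommended)
bearer_headers = {"Authorization": f"Bearer {api_key}"}

# Option 2: x-api-key header
alt_headers = {"x-api-key": api_key}
```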
Common issues
401 Unauthorized error
- Verify your LLM Gateway API key is correct and included in the request (see Authentication options above)
- If the key was deleted or regenerated, create a new one in the dashboard
Provider key not found
- Add at least one provider key in your project settings
- Ensure the provider key is validated (green checkmark)
- Check that you have credits or an active provider key for the requested model
Model not found
- Verify the model name is correct (e.g., gpt-4o, claude-3-5-sonnet-20241022)
- Check that your provider key supports the requested model
- Use GET /v1/models to list available models
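The models lookup can be sketched with the standard library as well. The base URL is an assumption (confirm it in your dashboard), the key is a placeholder, and the request is built but not sent:

```python
import urllib.request

# Assumed base URL; confirm in your LLM Gateway dashboard.
BASE_URL = "https://api.llmgateway.io/v1"

req = urllib.request.Request(
    f"{BASE_URL}/models",
    headers={"Authorization": "Bearer YOUR_LLM_GATEWAY_API_KEY"},
)

# Uncomment to list model IDs (OpenAI-compatible response shape):
# import json
# with urllib.request.urlopen(req) as resp:
#     for model in json.load(resp)["data"]:
#         print(model["id"])
```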
Rate limit exceeded
- You may have hit your usage limits (check project settings)
- Your provider may be rate-limiting you (check provider dashboard)
- Consider upgrading your plan or adding more provider keys
Need help?
- Check the API Reference for detailed endpoint documentation
- Read the Integration Guides for SDK-specific instructions
- Visit the GitHub repository to report issues or contribute