## Overview

LLM Gateway uses API keys for authentication and authorization. Each API key belongs to a specific project and can be configured with usage limits and IAM rules.
## Creating API Keys

Create API keys via the dashboard or the API:

1. Navigate to your project
2. Click "API Keys" in the sidebar
3. Click "Create API Key"
4. Enter a description
5. (Optional) Set a usage limit
6. Click "Create"

```bash
curl https://api.llmgateway.io/keys/api \
  -X POST \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "description": "Production API Key",
    "projectId": "proj_abc123",
    "usageLimit": "1000000"
  }'
```
The full API key is shown only once, at creation. Store it securely; you won't be able to retrieve it later.
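Because the key cannot be retrieved again, it is best loaded from the environment (or a secrets manager) at runtime rather than hard-coded. A minimal sketch; the variable name `LLMGATEWAY_API_KEY` is an assumption, not something the gateway requires:

```python
import os

def get_api_key() -> str:
    """Read the gateway key from the environment instead of hard-coding it."""
    key = os.environ.get("LLMGATEWAY_API_KEY")
    if not key:
        raise RuntimeError("LLMGATEWAY_API_KEY is not set")
    return key
```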
API keys follow this format:

- Development: `llmgdev_` + 40 random characters
- Production: `llmgtwy_` + 40 random characters

`apps/api/src/routes/keys-api.ts`:

```typescript
const prefix = process.env.NODE_ENV === "development" ? "llmgdev_" : "llmgtwy_";
const token = prefix + shortid(40);
```
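For defensive client-side checks (e.g., catching a truncated key before making a request), the documented format can be validated with a regular expression. Sketch only; the exact character set after the prefix is an assumption:

```python
import re

# Documented format: "llmgdev_" or "llmgtwy_" followed by 40 random characters.
# The allowed character class below is an assumption.
KEY_PATTERN = re.compile(r"^(llmgdev_|llmgtwy_)[A-Za-z0-9_-]{40}$")

def looks_like_gateway_key(token: str) -> bool:
    """Return True if the token matches the documented key format."""
    return bool(KEY_PATTERN.match(token))
```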
## Using API Keys

Include your API key in requests:

```bash
curl https://api.llmgateway.io/v1/chat/completions \
  -H "Authorization: Bearer llmgtwy_abc123..." \
  -d '{...}'
```

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.llmgateway.io/v1",
    api_key="llmgtwy_abc123...",
)
```
## Listing API Keys

Retrieve all API keys for a project:

```bash
curl "https://api.llmgateway.io/keys/api?projectId=proj_abc123" \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN"
```

Response:

```json
{
  "apiKeys": [
    {
      "id": "key_xyz789",
      "description": "Production API Key",
      "maskedToken": "llmgtwy_abc...xyz",
      "status": "active",
      "usage": "45000",
      "usageLimit": "1000000",
      "createdAt": "2024-01-15T10:30:00Z",
      "createdBy": "user_123",
      "creator": {
        "id": "user_123",
        "name": "John Doe",
        "email": "[email protected]"
      }
    }
  ],
  "planLimits": {
    "currentCount": 3,
    "maxKeys": 20,
    "plan": "pro"
  },
  "userRole": "admin"
}
```
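A monitoring script can use this response to flag keys approaching their limits before requests start failing. A sketch (the `keys_near_limit` helper and 80% threshold are illustrative); note that `usage` and `usageLimit` are returned as strings:

```python
def keys_near_limit(api_keys: list[dict], threshold: float = 0.8) -> list[str]:
    """Return ids of active keys at or above `threshold` of their usage limit.

    `usage` and `usageLimit` arrive as strings, so convert before comparing;
    keys with no limit set are skipped.
    """
    flagged = []
    for key in api_keys:
        if key.get("status") != "active" or not key.get("usageLimit"):
            continue
        if int(key["usage"]) / int(key["usageLimit"]) >= threshold:
            flagged.append(key["id"])
    return flagged
```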
## Usage Limits

Set spending limits per API key:

```bash
curl https://api.llmgateway.io/keys/api/limit/key_xyz789 \
  -X PATCH \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "usageLimit": "500000"
  }'
```
Usage limits are measured in total tokens (input + output). Requests are blocked when the limit is reached.
### Check Current Usage

Usage is tracked automatically:

`apps/gateway/src/chat/chat.ts`:

```typescript
if (apiKey.usageLimit && Number(apiKey.usage) >= Number(apiKey.usageLimit)) {
  throw new HTTPException(401, {
    message: "Unauthorized: LLMGateway API key reached its usage limit.",
  });
}
```
## API Key Status

API keys can have three statuses:

- `active` - Key is valid and can be used
- `inactive` - Key is disabled (can be re-enabled)
- `deleted` - Key is soft-deleted (cannot be recovered)
### Update Status

```bash
curl https://api.llmgateway.io/keys/api/key_xyz789 \
  -X PATCH \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "status": "inactive"
  }'
```
## Delete API Key

```bash
curl https://api.llmgateway.io/keys/api/key_xyz789 \
  -X DELETE \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN"
```

Deleted API keys cannot be recovered. All requests using the key will immediately fail.
## IAM Rules

Control which models and providers each API key can access:

`apps/api/src/routes/keys-api.ts`:

```typescript
interface IamRule {
  ruleType:
    | "allow_models"
    | "deny_models"
    | "allow_pricing"
    | "deny_pricing"
    | "allow_providers"
    | "deny_providers";
  ruleValue: {
    models?: string[]; // e.g., ["gpt-4o", "claude-3-5-sonnet-20241022"]
    providers?: string[]; // e.g., ["openai", "anthropic"]
    pricingType?: "free" | "paid";
    maxInputPrice?: number; // per 1M tokens
    maxOutputPrice?: number; // per 1M tokens
  };
  status: "active" | "inactive";
}
```
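As one illustration of how a pricing rule constrains model choice, the `maxInputPrice`/`maxOutputPrice` caps can be checked against a model's per-1M-token prices. This is a sketch of the documented semantics, not the gateway's implementation; missing caps are assumed to mean unbounded:

```python
def passes_pricing_rule(rule_value: dict, input_price: float, output_price: float) -> bool:
    """Check a model's per-1M-token prices against an allow_pricing ruleValue."""
    max_in = rule_value.get("maxInputPrice")
    max_out = rule_value.get("maxOutputPrice")
    if max_in is not None and input_price > max_in:
        return False
    if max_out is not None and output_price > max_out:
        return False
    return True
```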
### Create IAM Rule

**Allow specific models**

```bash
curl https://api.llmgateway.io/keys/api/key_xyz789/iam \
  -X POST \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "ruleType": "allow_models",
    "ruleValue": {
      "models": ["gpt-4o", "gpt-4o-mini"]
    },
    "status": "active"
  }'
```

**Deny expensive models**

```bash
curl https://api.llmgateway.io/keys/api/key_xyz789/iam \
  -X POST \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "ruleType": "allow_pricing",
    "ruleValue": {
      "maxInputPrice": 5.0,
      "maxOutputPrice": 15.0
    },
    "status": "active"
  }'
```

**Block a provider**

```bash
curl https://api.llmgateway.io/keys/api/key_xyz789/iam \
  -X POST \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "ruleType": "deny_providers",
    "ruleValue": {
      "providers": ["aws-bedrock"]
    },
    "status": "active"
  }'
```
### List IAM Rules

```bash
curl "https://api.llmgateway.io/keys/api/key_xyz789/iam" \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN"
```
### Update IAM Rule

```bash
curl https://api.llmgateway.io/keys/api/key_xyz789/iam/rule_abc \
  -X PATCH \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "status": "inactive"
  }'
```
### Delete IAM Rule

```bash
curl https://api.llmgateway.io/keys/api/key_xyz789/iam/rule_abc \
  -X DELETE \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN"
```
## IAM Validation

IAM rules are enforced on every request:

`apps/gateway/src/lib/iam.ts`:

```typescript
export async function validateModelAccess(
  apiKeyId: string,
  modelId: string,
  requestedProvider: string | undefined,
  modelInfo: ModelDefinition
): Promise<{
  allowed: boolean;
  reason?: string;
  allowedProviders?: string[];
}> {
  // Check all IAM rules for this API key
  const iamRules = await db.query.apiKeyIamRule.findMany({
    where: { apiKeyId: { eq: apiKeyId }, status: { eq: "active" } },
  });
  // Apply allow_models, deny_models, allow_pricing, etc.
  // Returns list of allowed providers or error
}
```
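A simplified model of the evaluation above, assuming the usual precedence for rules of this shape: active deny rules reject outright, any active allow rules then act as a whitelist, and a key with no active rules can use everything. The real precedence lives in `apps/gateway/src/lib/iam.ts`, so treat this as a sketch:

```python
def is_model_allowed(rules: list[dict], model: str, provider: str) -> bool:
    """Simplified IAM evaluation for one request (assumed precedence:
    deny rules win, then allow rules whitelist, else everything is allowed)."""
    active = [r for r in rules if r.get("status") == "active"]
    # Deny rules reject immediately.
    for r in active:
        value = r.get("ruleValue", {})
        if r["ruleType"] == "deny_models" and model in value.get("models", []):
            return False
        if r["ruleType"] == "deny_providers" and provider in value.get("providers", []):
            return False
    # Allow rules, when present, act as a whitelist.
    allow_models = [r for r in active if r["ruleType"] == "allow_models"]
    if allow_models and not any(
        model in r["ruleValue"].get("models", []) for r in allow_models
    ):
        return False
    allow_providers = [r for r in active if r["ruleType"] == "allow_providers"]
    if allow_providers and not any(
        provider in r["ruleValue"].get("providers", []) for r in allow_providers
    ):
        return False
    return True
```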
## Plan Limits

| Plan | Max API Keys per Project |
|------|--------------------------|
| Free | 5 |
| Pro | 20 |
| Enterprise | Unlimited |
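The table above maps directly onto the `planLimits` object in the key-listing response. A sketch of a pre-flight quota check (the `can_create_key` helper is illustrative):

```python
# Per-plan key quotas from the table; None means unlimited (Enterprise).
PLAN_MAX_KEYS = {"free": 5, "pro": 20, "enterprise": None}

def can_create_key(plan: str, current_count: int) -> bool:
    """True if the project is still under its plan's API key quota."""
    max_keys = PLAN_MAX_KEYS[plan]
    return max_keys is None or current_count < max_keys
```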
## Permissions

API key management permissions by role:

| Action | Developer | Admin | Owner |
|--------|-----------|-------|-------|
| View all keys | ✅ | ✅ | ✅ |
| Create key | ✅ | ✅ | ✅ |
| Update own key | ✅ | ✅ | ✅ |
| Update any key | ❌ | ✅ | ✅ |
| Delete own key | ✅ | ✅ | ✅ |
| Delete any key | ❌ | ✅ | ✅ |
| Manage IAM rules | Own keys | All keys | All keys |
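The table reduces to a simple predicate: admins and owners may act on any key, developers only on their own. A sketch (the `can_modify_key` helper is illustrative, not a gateway API):

```python
def can_modify_key(role: str, is_own_key: bool) -> bool:
    """Mirror of the permissions table for update/delete/IAM actions."""
    if role in ("admin", "owner"):
        return True
    return role == "developer" and is_own_key
```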
## Best Practices

- **Rotate keys regularly**: create new keys and delete old ones every 90 days.
- **Use separate keys**: create different keys for dev, staging, and production.
- **Set usage limits**: always set usage limits to prevent unexpected costs.
- **Use IAM rules**: restrict keys to specific models and providers.

Use descriptive names for API keys (e.g., "Production Backend", "Staging Frontend") to make usage easier to track.
## Security

- API keys are hashed before storage
- Full keys are never logged or displayed after creation
- Failed authentication attempts are tracked
- Compromised keys can be instantly revoked
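"Hashed before storage" means the database holds only a digest: even with full database access, an attacker cannot recover raw keys. An illustration of the principle only; the gateway's actual hashing scheme is not specified in these docs:

```python
import hashlib

def hash_key(token: str) -> str:
    """Persist a digest, never the raw token (illustrative; the gateway's
    real scheme is not documented here)."""
    return hashlib.sha256(token.encode()).hexdigest()
```

At authentication time, the incoming key is hashed the same way and compared against the stored digest, so the raw token never needs to be stored.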