Projects help you organize your API keys, settings, and usage by application or environment. Each project belongs to an organization and can have its own configuration.

Create Project

• name (string, required): Project name (1-255 characters)
• organizationId (string, required): ID of the organization this project belongs to
• cachingEnabled (boolean, default: false): Enable response caching
• cacheDurationSeconds (number, default: 60): Cache duration in seconds (10-31536000)
• mode ('api-keys' | 'credits' | 'hybrid', default: "hybrid"): Project mode:
  • api-keys: Use your own provider keys only
  • credits: Use LLM Gateway credits only
  • hybrid: Try provider keys first, fall back to credits
• project (object): The created project object
curl -X POST https://api.llmgateway.io/projects \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Production",
    "organizationId": "org_abc123",
    "cachingEnabled": true,
    "cacheDurationSeconds": 300,
    "mode": "hybrid"
  }'
{
  "project": {
    "id": "proj_xyz789",
    "name": "Production",
    "organizationId": "org_abc123",
    "cachingEnabled": true,
    "cacheDurationSeconds": 300,
    "mode": "hybrid",
    "status": "active",
    "createdAt": "2024-01-15T10:30:00Z",
    "updatedAt": "2024-01-15T10:30:00Z"
  }
}
Each organization has a limit of 10 projects by default. Contact support to increase this limit.
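
The documented constraints (name length, cache duration range, mode enum) can be checked client-side before sending the request. The following Python sketch is illustrative only; the function name and error messages are not part of the API.

```python
# Sketch: client-side validation of a create-project payload against the
# documented constraints. Not an official SDK helper.

VALID_MODES = {"api-keys", "credits", "hybrid"}

def validate_create_project(payload: dict) -> list[str]:
    """Return a list of validation errors (empty if the payload looks valid)."""
    errors = []
    name = payload.get("name")
    if not isinstance(name, str) or not (1 <= len(name) <= 255):
        errors.append("name must be a string of 1-255 characters")
    if not isinstance(payload.get("organizationId"), str):
        errors.append("organizationId is required")
    duration = payload.get("cacheDurationSeconds", 60)  # documented default
    if not (10 <= duration <= 31536000):
        errors.append("cacheDurationSeconds must be between 10 and 31536000")
    if payload.get("mode", "hybrid") not in VALID_MODES:
        errors.append("mode must be one of 'api-keys', 'credits', 'hybrid'")
    return errors

payload = {
    "name": "Production",
    "organizationId": "org_abc123",
    "cachingEnabled": True,
    "cacheDurationSeconds": 300,
    "mode": "hybrid",
}
assert validate_create_project(payload) == []
```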

Get Project

Retrieve a single project by ID.
• id (string, required): Project ID
curl https://api.llmgateway.io/projects/proj_xyz789 \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN"
{
  "project": {
    "id": "proj_xyz789",
    "name": "Production",
    "organizationId": "org_abc123",
    "cachingEnabled": true,
    "cacheDurationSeconds": 300,
    "mode": "hybrid",
    "status": "active",
    "createdAt": "2024-01-15T10:30:00Z",
    "updatedAt": "2024-01-15T10:30:00Z"
  }
}

Update Project

Update project settings. All body fields are optional; omitted fields keep their current values.
• id (string, required): Project ID
• name (string): New project name (1-255 characters)
• cachingEnabled (boolean): Enable or disable caching
• cacheDurationSeconds (number): Cache duration in seconds (10-31536000)
• mode ('api-keys' | 'credits' | 'hybrid'): Project mode
curl -X PATCH https://api.llmgateway.io/projects/proj_xyz789 \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Production v2",
    "cachingEnabled": false
  }'
{
  "message": "Project settings updated successfully",
  "project": {
    "id": "proj_xyz789",
    "name": "Production v2",
    "organizationId": "org_abc123",
    "cachingEnabled": false,
    "cacheDurationSeconds": 300,
    "mode": "hybrid",
    "status": "active",
    "createdAt": "2024-01-15T10:30:00Z",
    "updatedAt": "2024-01-16T14:20:00Z"
  }
}

Delete Project

Soft-delete a project: it is marked as deleted rather than permanently removed.
• id (string, required): Project ID
Only organization owners can delete projects. All API keys in the project will also be deleted.
curl -X DELETE https://api.llmgateway.io/projects/proj_xyz789 \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN"
{
  "message": "Project deleted successfully"
}

Project Modes

Projects support three different modes for handling LLM requests:

API Keys Mode

{
  "mode": "api-keys"
}
Use only your configured provider keys. Requests will fail if:
  • No provider key is configured for the requested provider
  • The provider key is inactive or deleted
  • The provider is unavailable
Best for:
  • Organizations with existing provider relationships
  • Maximum control over which providers are used
  • Compliance requirements

Credits Mode

{
  "mode": "credits"
}
Use only LLM Gateway credits. All requests are routed through Gateway-managed providers.
Best for:
  • Getting started quickly
  • Pay-as-you-go pricing
  • Access to free models
  • Multi-provider routing without managing keys

Hybrid Mode

{
  "mode": "hybrid"
}
Try your provider keys first and automatically fall back to credits if:
  • Provider key is not configured
  • Provider returns an error
  • Provider is unavailable
Best for:
  • Maximum reliability and uptime
  • Cost optimization (use your keys when available)
  • Seamless failover
Hybrid mode provides the best balance of cost control and reliability. It’s the recommended mode for production applications.
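
The hybrid fallback described above can be sketched as follows. This is a hypothetical illustration of the routing logic, not the gateway's actual implementation; `call_with_provider_key` and `call_with_credits` are stand-in callables.

```python
# Sketch of hybrid-mode routing: try the project's own provider key first,
# and fall back to gateway credits if the key is missing or the provider fails.

class ProviderError(Exception):
    """Raised when the provider returns an error or is unavailable."""

def route_request(request, provider_key, call_with_provider_key, call_with_credits):
    if provider_key is not None:
        try:
            return call_with_provider_key(provider_key, request)
        except ProviderError:
            pass  # provider error or unavailable: fall through to credits
    return call_with_credits(request)
```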

Caching

Enable response caching to reduce costs and improve latency for repeated requests.

How It Works

  1. When a request is made, the Gateway checks if an identical request was cached recently
  2. If found, the cached response is returned immediately (no provider call)
  3. If not found, the request is sent to the provider and the response is cached
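
The steps above amount to a TTL (time-to-live) cache. A minimal Python sketch, assuming nothing about the gateway's internal storage:

```python
import time

# Minimal TTL cache illustrating the lookup flow above:
# hit within the duration window -> cached response; otherwise miss.

class ResponseCache:
    def __init__(self, duration_seconds):
        self.duration = duration_seconds
        self._store = {}  # key -> (expires_at, response)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, response = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # entry expired: treat as a miss
            return None
        return response

    def set(self, key, response):
        self._store[key] = (time.monotonic() + self.duration, response)
```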

Cache Keys

Requests are cached based on:
  • Model name
  • Messages/prompt
  • All parameters (temperature, max_tokens, etc.)
  • Tools and tool_choice (if applicable)
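
One way to derive a key from those fields is to canonicalize the request and hash it, so logically identical requests map to the same entry. The exact key scheme the gateway uses is not documented; this sketch just illustrates the idea:

```python
import hashlib
import json

# Sketch: deterministic cache key from the request fields listed above.
# sort_keys canonicalizes dict key order, so identical requests written
# in a different order still hash to the same key.

def cache_key(request: dict) -> str:
    canonical = json.dumps(request, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Note that any change to a parameter such as temperature or max_tokens produces a different key, so only byte-for-byte identical requests are cache hits.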

Configuration

• cachingEnabled (boolean): Enable or disable caching for the project
• cacheDurationSeconds (number): How long to cache responses (10 seconds to 1 year)
curl -X PATCH https://api.llmgateway.io/projects/proj_xyz789 \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "cachingEnabled": true,
    "cacheDurationSeconds": 3600
  }'
Cached requests are marked with "cached": true in activity logs and don’t count toward your usage/costs.

Error Responses

{
  "message": "You have reached the limit of 10 projects. Contact us at [email protected] to unlock more."
}
