The Cal.com Platform API implements rate limiting to ensure fair usage and system stability. Rate limits vary based on your authentication method and can be customized for specific endpoints.

Default Rate Limits

Rate limits are applied per authentication method over a 60-second window:

API Key: 120 requests per 60 seconds
OAuth Client: 500 requests per 60 seconds
Access Token: 500 requests per 60 seconds
Unauthenticated (IP): 120 requests per 60 seconds

Rate Limit Headers

Every API response includes rate limit information in the headers:
X-RateLimit-Limit-Default: 120
X-RateLimit-Remaining-Default: 115
X-RateLimit-Reset-Default: 1710505200

Header Descriptions

X-RateLimit-Limit-{Name}: Maximum requests allowed in the time window
X-RateLimit-Remaining-{Name}: Requests remaining in the current window
X-RateLimit-Reset-{Name}: Unix timestamp when the rate limit resets
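Because each tier emits its own trio of headers, it can be handy to collect them into one object per tier. A small client-side sketch (not part of the API itself); it works on anything with a `forEach(value, name)` method, such as a fetch `Response.headers`:

```javascript
// Group X-RateLimit-* headers by tier name ("Default", "Custom", ...).
function parseRateLimitHeaders(headers) {
  const tiers = {};
  headers.forEach((value, name) => {
    const match = /^x-ratelimit-(limit|remaining|reset)-(.+)$/i.exec(name);
    if (!match) return;
    const [, field, tier] = match;
    tiers[tier] = tiers[tier] || {};
    tiers[tier][field.toLowerCase()] = parseInt(value, 10);
  });
  return tiers;
}
```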

Example Response Headers

HTTP/1.1 200 OK
Content-Type: application/json
X-RateLimit-Limit-Default: 120
X-RateLimit-Remaining-Default: 115
X-RateLimit-Reset-Default: 1710505200

Rate Limit Tiers

Rate limits are tracked separately for each authentication method:

API Key Rate Limit

Identifier: api_key_{hashed_key}
curl -X GET https://api.cal.com/v2/bookings \
  -H "Authorization: Bearer cal_live_xxxxx" \
  -I

# Response Headers
X-RateLimit-Limit-Default: 120
X-RateLimit-Remaining-Default: 119
X-RateLimit-Reset-Default: 1710505260
Default Limits:
  • Limit: 120 requests
  • TTL: 60 seconds
  • Block Duration: 60 seconds

OAuth Client Rate Limit

Identifier: oauth_client_{hashed_client_id}

When using the X-Cal-Client-ID header:
curl -X GET https://api.cal.com/v2/bookings \
  -H "X-Cal-Client-ID: client_123" \
  -I

# Response Headers
X-RateLimit-Limit-Default: 500
X-RateLimit-Remaining-Default: 499
X-RateLimit-Reset-Default: 1710505260
Default Limits:
  • Limit: 500 requests
  • TTL: 60 seconds
  • Block Duration: 60 seconds

Access Token Rate Limit

Identifier: access_token_{hashed_token}

When using OAuth access tokens:
curl -X GET https://api.cal.com/v2/bookings \
  -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..." \
  -I

# Response Headers
X-RateLimit-Limit-Default: 500
X-RateLimit-Remaining-Default: 499
X-RateLimit-Reset-Default: 1710505260
Default Limits:
  • Limit: 500 requests
  • TTL: 60 seconds
  • Block Duration: 60 seconds

IP-Based Rate Limit

Identifier: ip_{hashed_ip}

For unauthenticated requests or as a fallback:
curl -X GET https://api.cal.com/v2/public-endpoint \
  -I

# Response Headers
X-RateLimit-Limit-Default: 120
X-RateLimit-Remaining-Default: 119
X-RateLimit-Reset-Default: 1710505260
Default Limits:
  • Limit: 120 requests
  • TTL: 60 seconds
  • Block Duration: 60 seconds

Custom Rate Limits

Certain API keys can have custom rate limits configured. When custom limits are applied, you’ll see additional headers:
X-RateLimit-Limit-Default: 120
X-RateLimit-Remaining-Default: 100
X-RateLimit-Reset-Default: 1710505260

X-RateLimit-Limit-Custom: 1000
X-RateLimit-Remaining-Custom: 950
X-RateLimit-Reset-Custom: 1710505260
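When several tiers report at once, your effective budget is whichever tier has the least remaining. A minimal client-side sketch (not part of the API); it accepts anything with a `forEach(value, name)` method:

```javascript
// Smallest remaining count across all rate limit tiers in the response.
function effectiveRemaining(headers) {
  const values = [];
  headers.forEach((value, name) => {
    if (/^x-ratelimit-remaining-/i.test(name)) values.push(parseInt(value, 10));
  });
  // No rate limit headers at all means no known constraint.
  return values.length ? Math.min(...values) : Infinity;
}
```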

Custom Rate Limit Example

An API key with multiple rate limit tiers:
[
  {
    "name": "default",
    "limit": 120,
    "ttl": 60000,
    "blockDuration": 60000
  },
  {
    "name": "burst",
    "limit": 10,
    "ttl": 1000,
    "blockDuration": 5000
  }
]
This configuration allows:
  • 120 requests per minute (default tier)
  • 10 requests per second (burst tier)
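A client can pre-check a multi-tier budget like this before sending requests. A minimal sliding-window sketch under the configuration above (this is client-side illustration, not the server's enforcement code):

```javascript
// Tracks recent request timestamps per tier; a request is allowed only
// when every tier still has budget inside its ttl window.
class TieredLimiter {
  constructor(tiers) {
    // tiers: [{ name, limit, ttl }] with ttl in milliseconds
    this.tiers = tiers.map(t => ({ ...t, timestamps: [] }));
  }

  // Returns true and records the request if all tiers have budget left.
  tryAcquire(now = Date.now()) {
    for (const tier of this.tiers) {
      tier.timestamps = tier.timestamps.filter(ts => now - ts < tier.ttl);
      if (tier.timestamps.length >= tier.limit) return false;
    }
    this.tiers.forEach(tier => tier.timestamps.push(now));
    return true;
  }
}
```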

Endpoint-Specific Rate Limits

Some endpoints may have custom rate limits using the @Throttle decorator:
@Throttle({
  name: "booking-creation",
  limit: 10,
  ttl: 60000, // 60 seconds
  blockDuration: 300000 // 5 minutes
})
@Post('/bookings')
async createBooking() {
  // Endpoint logic
}
When an endpoint has a custom limit, you’ll see both headers:
X-RateLimit-Limit-Default: 120
X-RateLimit-Remaining-Default: 115
X-RateLimit-Reset-Default: 1710505260

X-RateLimit-Limit-Booking-creation: 10
X-RateLimit-Remaining-Booking-creation: 8
X-RateLimit-Reset-Booking-creation: 1710505200

Rate Limit Exceeded

When you exceed the rate limit, you’ll receive a 429 Too Many Requests response:
{
  "status": "error",
  "error": {
    "message": "Too many requests. Please try again later.",
    "code": "RATE_LIMIT_EXCEEDED"
  }
}
Response Headers:
HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit-Default: 120
X-RateLimit-Remaining-Default: 0
X-RateLimit-Reset-Default: 1710505260
Retry-After: 45
The Retry-After header indicates how many seconds to wait before retrying.

Rate Limit Storage

Rate limits are tracked using Redis with the following storage pattern:
rate_limit:{tracker}:{limit}:{ttl}

Example Keys

rate_limit:api_key_abc123:120:60000
rate_limit:oauth_client_xyz789:500:60000
rate_limit:access_token_token123:500:60000
rate_limit:ip_192.168.1.1:120:60000
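The key format can be reproduced with a one-liner. The exact formatting below is inferred from the example keys, not taken from a documented API:

```javascript
// Build a Redis key following the rate_limit:{tracker}:{limit}:{ttl} pattern.
function redisKey(tracker, limit, ttlMs) {
  return `rate_limit:${tracker}:${limit}:${ttlMs}`;
}
```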

Best Practices

Always check X-RateLimit-Remaining headers to track your usage:
const response = await fetch('https://api.cal.com/v2/bookings', {
  headers: { 'Authorization': 'Bearer YOUR_TOKEN' }
});

const remaining = parseInt(response.headers.get('X-RateLimit-Remaining-Default'), 10);
const reset = parseInt(response.headers.get('X-RateLimit-Reset-Default'), 10);

if (remaining < 10) {
  console.warn('Approaching rate limit!');
}
When you receive a 429 response, implement exponential backoff:
async function makeRequestWithRetry(url, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    const response = await fetch(url);
    
    if (response.status === 429) {
      const retryAfter = response.headers.get('Retry-After');
      const delay = retryAfter ? parseInt(retryAfter, 10) * 1000 : Math.pow(2, i) * 1000;
      
      console.log(`Rate limited. Retrying in ${delay}ms...`);
      await new Promise(resolve => setTimeout(resolve, delay));
      continue;
    }
    
    return response;
  }
  
  throw new Error('Max retries exceeded');
}
OAuth clients and access tokens have higher rate limits (500 vs 120 requests per minute):
  • Use API keys for low-volume integrations
  • Use OAuth for production applications with higher traffic
  • Consider OAuth for applications with multiple users
Reduce API calls by caching responses:
const cache = new Map();
const CACHE_TTL = 60000; // 1 minute

async function getCachedBookings() {
  const cached = cache.get('bookings');
  
  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }
  
  const response = await fetch('https://api.cal.com/v2/bookings');
  const data = await response.json();
  
  cache.set('bookings', {
    data,
    timestamp: Date.now()
  });
  
  return data;
}
Instead of making multiple requests, use list endpoints with filters:
# Bad: Multiple requests
GET /v2/bookings/1
GET /v2/bookings/2
GET /v2/bookings/3

# Good: Single request with pagination
GET /v2/bookings?limit=100
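The pattern above can be wrapped in a small loop that walks every page in order. This sketch assumes limit/offset-style query parameters and a `data` array in the response body, matching the example; adapt it to the actual response shape:

```javascript
// Fetch all bookings page by page instead of one request per booking.
async function fetchAllBookings(token, pageSize = 100) {
  const all = [];
  for (let offset = 0; ; offset += pageSize) {
    const res = await fetch(
      `https://api.cal.com/v2/bookings?limit=${pageSize}&offset=${offset}`,
      { headers: { Authorization: `Bearer ${token}` } }
    );
    const page = await res.json();
    const items = page.data || [];
    all.push(...items);
    // A short page means we have reached the end.
    if (items.length < pageSize) return all;
  }
}
```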

Rate Limit Implementation

The Platform API uses a custom throttler guard (CustomThrottlerGuard) that:
  1. Identifies the request source (API key, OAuth client, access token, or IP)
  2. Retrieves rate limits from database or uses defaults
  3. Tracks request count in Redis
  4. Applies multiple rate limit tiers (default + custom)
  5. Blocks requests when any limit is exceeded
  6. Returns headers with current rate limit status
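The steps above can be sketched as follows. The names `store.increment` and `limitsDb.get` are illustrative stand-ins, not the real CustomThrottlerGuard API:

```javascript
// Illustrative guard flow: look up tiers, count against each, block on any breach.
async function guardRequest(tracker, store, limitsDb) {
  // Retrieve configured tiers for this tracker, or fall back to defaults.
  const tiers = (await limitsDb.get(tracker)) || [
    { name: 'default', limit: 120, ttl: 60000 },
  ];
  const headers = {};
  let allowed = true;
  for (const tier of tiers) {
    // Count this request against the tier (Redis in the real implementation).
    const count = await store.increment(tracker, tier);
    headers[`X-RateLimit-Remaining-${tier.name}`] = Math.max(tier.limit - count, 0);
    // Block when any tier is exceeded.
    if (count > tier.limit) allowed = false;
  }
  // Return the decision plus current rate limit status headers.
  return { allowed, headers };
}
```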

Implementation Details

// From CustomThrottlerGuard
const defaultLimits = {
  apiKey: 120,        // requests per 60 seconds
  oauthClient: 500,   // requests per 60 seconds
  accessToken: 500,   // requests per 60 seconds
  default: 120        // requests per 60 seconds (IP-based)
};

const defaultTTL = 60000;          // 60 seconds
const defaultBlockDuration = 60000; // 60 seconds

Webhook Rate Limits

Webhook deliveries are not subject to the same rate limits, but they have their own delivery constraints:
  • Maximum of 5 delivery attempts per webhook event
  • Exponential backoff between retries (1s, 2s, 4s, 8s, 16s)
  • 30-second timeout per delivery attempt
See Webhooks for more details.
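The retry schedule above (1s, 2s, 4s, 8s, 16s) is plain doubling exponential backoff, which a webhook receiver can mirror when estimating redelivery times:

```javascript
// Delays for the documented schedule: baseMs * 2^attempt.
function webhookRetryDelays(attempts = 5, baseMs = 1000) {
  return Array.from({ length: attempts }, (_, i) => baseMs * 2 ** i);
}
```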

Environment Variables

Rate limits can be configured via environment variables:
RATE_LIMIT_DEFAULT_TTL_MS=60000
RATE_LIMIT_DEFAULT_LIMIT_API_KEY=120
RATE_LIMIT_DEFAULT_LIMIT_OAUTH_CLIENT=500
RATE_LIMIT_DEFAULT_LIMIT_ACCESS_TOKEN=500
RATE_LIMIT_DEFAULT_LIMIT=120
RATE_LIMIT_DEFAULT_BLOCK_DURATION_MS=60000
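Resolving these variables with the documented defaults as fallbacks could look like this. The function and field names here are illustrative, not the API's own configuration loader:

```javascript
// Read the rate limit environment variables, falling back to the
// documented defaults when a variable is unset.
function rateLimitConfig(env = process.env) {
  const num = (name, fallback) => {
    const raw = env[name];
    return raw !== undefined ? parseInt(raw, 10) : fallback;
  };
  return {
    ttlMs: num('RATE_LIMIT_DEFAULT_TTL_MS', 60000),
    apiKey: num('RATE_LIMIT_DEFAULT_LIMIT_API_KEY', 120),
    oauthClient: num('RATE_LIMIT_DEFAULT_LIMIT_OAUTH_CLIENT', 500),
    accessToken: num('RATE_LIMIT_DEFAULT_LIMIT_ACCESS_TOKEN', 500),
    defaultLimit: num('RATE_LIMIT_DEFAULT_LIMIT', 120),
    blockDurationMs: num('RATE_LIMIT_DEFAULT_BLOCK_DURATION_MS', 60000),
  };
}
```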

Upgrading Rate Limits

For enterprise customers or high-volume applications, custom rate limits can be configured:
  1. Contact [email protected]
  2. Discuss your usage requirements
  3. Receive custom API key with higher limits
  4. Custom limits are stored in the database and cached in Redis

Example Custom Configuration

Enterprise API key with custom limits:
[
  {
    "name": "default",
    "limit": 1000,
    "ttl": 60000,
    "blockDuration": 60000
  },
  {
    "name": "burst",
    "limit": 50,
    "ttl": 1000,
    "blockDuration": 5000
  },
  {
    "name": "daily",
    "limit": 100000,
    "ttl": 86400000,
    "blockDuration": 3600000
  }
]
This provides:
  • 1,000 requests per minute
  • 50 requests per second
  • 100,000 requests per day

Testing Rate Limits

Test your rate limit handling:
#!/bin/bash

# Send 125 requests (exceeds 120 limit)
for i in {1..125}; do
  response=$(curl -s -o /dev/null -w "%{http_code}" \
    -H "Authorization: Bearer cal_test_xxxxx" \
    https://api.cal.com/v2/bookings)
  
  echo "Request $i: $response"
  
  if [ "$response" = "429" ]; then
    echo "Rate limit exceeded after $i requests"
    break
  fi
done

Monitoring Rate Limits

Track your API usage with monitoring:
class RateLimitMonitor {
  constructor() {
    this.metrics = {
      requests: 0,
      rateLimited: 0,
      avgRemaining: []
    };
  }
  
  async makeRequest(url, options) {
    this.metrics.requests++;
    
    const response = await fetch(url, options);
    
    // Track rate limit headers
    const remaining = parseInt(
      response.headers.get('X-RateLimit-Remaining-Default') || '0'
    );
    this.metrics.avgRemaining.push(remaining);
    
    if (response.status === 429) {
      this.metrics.rateLimited++;
    }
    
    return response;
  }
  
  getStats() {
    const samples = this.metrics.avgRemaining;
    const avgRemaining = samples.length
      ? samples.reduce((a, b) => a + b, 0) / samples.length
      : 0;
    
    return {
      totalRequests: this.metrics.requests,
      rateLimited: this.metrics.rateLimited,
      rateLimitedPercent: (this.metrics.rateLimited / this.metrics.requests) * 100,
      avgRemaining: Math.round(avgRemaining)
    };
  }
}

Next Steps

  • Authentication: Learn about authentication methods
  • Webhooks: Set up event notifications
  • Best Practices: Optimize your API usage
  • Error Handling: Handle rate limit errors
