The BookMe API implements per-IP rate limiting to prevent abuse and ensure fair usage across all clients. Different endpoint groups have different rate limits based on their sensitivity and typical usage patterns.

Rate Limit Configuration

The API uses the token bucket algorithm for rate limiting, implemented with golang.org/x/time/rate.

OAuth Endpoints

Authentication endpoints have stricter rate limits to prevent brute force attacks:
Rate Limit: 1 request per 12 seconds per IP address (sustained)
Burst Size: 5 requests
Applies to:
  • GET /oauth/login
  • GET /oauth/callback
Configuration:
oauthLimiter := middleware.NewRateLimiter(
    rate.Every(12*time.Second), // 1 token every 12 seconds
    5,                          // burst of 5 requests
    false,                      // don't trust proxy headers
)
Source: internal/api/routes.go:28
This allows a maximum of 5 login attempts in quick succession, then enforces 12-second intervals between subsequent attempts.

API Endpoints

General API endpoints have more generous rate limits for normal application usage:
Rate Limit: 1 request per 2 seconds per IP address (sustained)
Burst Size: 30 requests
Applies to:
  • POST /api/v1/reservations - Create reservation
  • GET /api/v1/reservations - Get reservations
  • DELETE /api/v1/reservations/{id} - Cancel reservation
Configuration:
apiLimiter := middleware.NewRateLimiter(
    rate.Every(2*time.Second), // 1 token every 2 seconds
    30,                        // burst of 30 requests
    false,                     // don't trust proxy headers
)
Source: internal/api/routes.go:29
This configuration allows clients to make up to 30 requests immediately, then sustains a rate of 1 request every 2 seconds (0.5 requests/second).

Health Check Endpoint

The health check endpoint (GET /api/v1/health) is not rate limited to allow unrestricted monitoring.
Source: internal/api/routes.go:35

How Rate Limiting Works

Token Bucket Algorithm

The API uses the token bucket algorithm:
  1. Each IP address gets a virtual “bucket” with a maximum capacity (burst size)
  2. Tokens are added to the bucket at a constant rate
  3. Each request consumes one token
  4. If no tokens are available, the request is rejected with 429 Too Many Requests

IP Address Detection

The rate limiter identifies clients by their IP address:
Priority order:
  1. r.RemoteAddr (direct connection IP)
  2. X-Forwarded-For header (if trustProxy=true)
  3. X-Real-IP header (if trustProxy=true)
Source: internal/middleware/ratelimit.go:93-111
Current Configuration: trustProxy=false
The API does not trust proxy headers by default; all rate limiting is based on RemoteAddr. If deploying behind a reverse proxy (nginx, Cloudflare, etc.), consider enabling trustProxy and properly configuring proxy headers.

Visitor Cleanup

To prevent memory leaks, the rate limiter automatically cleans up stale visitor records:
  • Cleanup Interval: Every 3 minutes
  • Stale Threshold: Visitors inactive for > 3 minutes are removed
Source: internal/middleware/ratelimit.go:40-62

Rate Limit Responses

When Rate Limit is Exceeded

When a client exceeds the rate limit, the API returns:
Status Code: 429 Too Many Requests
Headers:
Retry-After: 6
Response Body:
Rate limit exceeded. Please try again later.
Source: internal/middleware/ratelimit.go:81-85
To observe this response, repeatedly hit a rate-limited endpoint:
curl -i http://localhost:8080/oauth/login

Retry-After Header

The Retry-After header is always set to 6 seconds, providing a consistent hint for when to retry the request.
Clients should respect the Retry-After header and implement exponential backoff for better reliability.

Best Practices

Handling Rate Limits

1. Check Response Status
Always check for a 429 Too Many Requests status code in your error handling:
if (response.status === 429) {
  const retryAfter = response.headers.get('Retry-After');
  // Wait and retry
}
2. Respect Retry-After
Use the Retry-After header value to determine when to retry:
const retryAfterSeconds = parseInt(
  response.headers.get('Retry-After')
);
await new Promise(resolve => 
  setTimeout(resolve, retryAfterSeconds * 1000)
);
3. Implement Exponential Backoff
For multiple consecutive rate limit errors, increase the wait time exponentially:
let retries = 0;
const maxRetries = 5;
let backoff = 1;
while (retries < maxRetries) {
  const response = await fetch(url, options);
  if (response.status === 429) {
    await sleep(backoff * 1000); // sleep(ms): a Promise-based delay helper
    backoff *= 2; // Exponential increase
    retries++;
  } else {
    break;
  }
}
4. Batch Requests
When possible, batch multiple operations to stay within rate limits.

Rate Limit Monitoring

When rate limits are exceeded, the server logs warnings:
WARN rate limit exceeded ip=192.168.1.1 method=GET path=/oauth/login
Source: internal/middleware/ratelimit.go:82
Monitor these logs to identify clients that frequently hit rate limits and may need optimization.

Example: Client Implementation

async function makeRequest(url, options, maxRetries = 3) {
  let retries = 0;
  let backoff = 1;
  
  while (retries <= maxRetries) {
    const response = await fetch(url, options);
    
    if (response.status === 429) {
      const retryAfter = parseInt(
        response.headers.get('Retry-After') || '6'
      );
      
      if (retries === maxRetries) {
        throw new Error('Max retries exceeded');
      }
      
      console.log(`Rate limited. Retrying after ${retryAfter * backoff}s`);
      await new Promise(resolve => 
        setTimeout(resolve, retryAfter * backoff * 1000)
      );
      
      backoff *= 2;
      retries++;
      continue;
    }
    
    return response;
  }
}

// Usage
const response = await makeRequest(
  'http://localhost:8080/api/v1/reservations',
  {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${token}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify(reservationData)
  }
);

Rate Limit Summary

| Endpoint Group  | Sustained Rate | Burst | Refill Interval | Requests/Minute (sustained) |
|-----------------|----------------|-------|-----------------|-----------------------------|
| OAuth endpoints | 1 per 12s      | 5     | 12 seconds      | ~5                          |
| API endpoints   | 1 per 2s       | 30    | 2 seconds       | ~30                         |
| Health check    | Unlimited      | N/A   | N/A             | Unlimited                   |
All rate limits are applied per IP address and are independent of authentication status.
