Overview
The API implements rate limiting to prevent abuse and ensure fair usage across all clients. Rate limits are applied per IP address and vary by endpoint type.
Rate Limit Policies
The API uses three different rate limit policies:
General Rate Limiter
Applied to most API endpoints:
- Limit: 100 requests per 15 minutes
- Window: 15 minutes (900,000 milliseconds)
- Applied to: General API endpoints
Location: src/shared/middlewares/rate-limit.middleware.ts:7
Authentication Rate Limiter
Stricter limits for authentication endpoints to prevent brute force attacks:
- Limit: 5 requests per 15 minutes
- Window: 15 minutes (900,000 milliseconds)
- Applied to: /api/auth/* endpoints
Location: src/shared/middlewares/rate-limit.middleware.ts:22
Creation Rate Limiter
Moderate limits for resource creation endpoints to prevent spam:
- Limit: 20 requests per 15 minutes
- Window: 15 minutes (900,000 milliseconds)
- Applied to: POST endpoints for creating reviews, photos, favorites, etc.
Location: src/shared/middlewares/rate-limit.middleware.ts:37
Rate Limit Headers
The API includes standardized rate limit information in response headers:
| Header | Description | Example |
|---|---|---|
| RateLimit-Limit | Maximum number of requests allowed in the window | 100 |
| RateLimit-Remaining | Number of requests remaining in the current window | 95 |
| RateLimit-Reset | Seconds until the current rate limit window resets | 870 |
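Note that with `standardHeaders: true`, express-rate-limit follows the IETF RateLimit header draft, where `RateLimit-Reset` is the number of seconds remaining until the window resets, not an absolute timestamp. The following sketch shows one way to parse these headers into a typed object; `parseRateLimitHeaders` and `RateLimitInfo` are illustrative names, not part of the API:

```typescript
interface RateLimitInfo {
  limit: number;
  remaining: number;
  resetsAt: Date; // absolute time, derived from the delta-seconds value
}

// Accepts anything with a Headers-like get() method, so it works with
// fetch Response.headers and is easy to test with a fake.
function parseRateLimitHeaders(
  headers: { get(name: string): string | null },
  now: number = Date.now()
): RateLimitInfo | null {
  const limit = headers.get("RateLimit-Limit");
  const remaining = headers.get("RateLimit-Remaining");
  const reset = headers.get("RateLimit-Reset");
  if (limit === null || remaining === null || reset === null) return null;
  return {
    limit: Number(limit),
    remaining: Number(remaining),
    // RateLimit-Reset is delta-seconds: add it to the current time.
    resetsAt: new Date(now + Number(reset) * 1000),
  };
}
```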
Example Response
```http
HTTP/1.1 200 OK
RateLimit-Limit: 100
RateLimit-Remaining: 95
RateLimit-Reset: 870
Content-Type: application/json

{
  "success": true,
  "data": [...]
}
```
The API uses standard headers (RateLimit-*) instead of legacy headers (X-RateLimit-*).
Rate Limit Exceeded Response
When you exceed the rate limit, the API returns a 429 Too Many Requests status:
General Endpoints
```json
{
  "success": false,
  "error": "Too many requests from this IP, please try again later"
}
```
Authentication Endpoints
```json
{
  "success": false,
  "error": "Too many authentication attempts, please try again later"
}
```
Creation Endpoints
```json
{
  "success": false,
  "error": "Too many creation requests, please slow down"
}
```
Status Code: 429 Too Many Requests
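All three bodies share the same `{ success: false, error: string }` shape, so clients can narrow a parsed response body with a single type guard. This is a client-side sketch; `isRateLimitError` and `RateLimitError` are illustrative names, not part of the API:

```typescript
interface RateLimitError {
  success: false;
  error: string;
}

// Narrows an unknown parsed JSON body to the documented 429 error shape.
function isRateLimitError(body: unknown): body is RateLimitError {
  return (
    typeof body === "object" &&
    body !== null &&
    (body as { success?: unknown }).success === false &&
    typeof (body as { error?: unknown }).error === "string"
  );
}
```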
Implementation Details
Rate limiting is implemented using the express-rate-limit package:
```typescript
import rateLimit from "express-rate-limit";

export const generalLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // Max 100 requests per window
  message: {
    success: false,
    error: "Too many requests from this IP, please try again later",
  },
  standardHeaders: true, // Return rate limit info in the `RateLimit-*` headers
  legacyHeaders: false, // Disable the `X-RateLimit-*` headers
});
```
Location: src/shared/middlewares/rate-limit.middleware.ts:7
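The authentication and creation limiters are not shown here, but assuming they follow the same pattern with the limits and messages documented above, they would look roughly like this (a sketch, not the verbatim source):

```typescript
import rateLimit from "express-rate-limit";

// Stricter limiter for /api/auth/* (5 requests / 15 min).
export const authLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 5,
  message: {
    success: false,
    error: "Too many authentication attempts, please try again later",
  },
  standardHeaders: true,
  legacyHeaders: false,
});

// Moderate limiter for resource creation (20 requests / 15 min).
export const creationLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 20,
  message: {
    success: false,
    error: "Too many creation requests, please slow down",
  },
  standardHeaders: true,
  legacyHeaders: false,
});
```

Wiring such as `app.use("/api/auth", authLimiter)` is likewise an assumption about how the middleware is mounted, not confirmed by this document.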
Rate Limit Strategy
Per-IP Tracking
Rate limits are tracked per IP address. This means:
- Multiple users behind the same NAT/proxy share the same limit
- Each unique IP address has its own rate limit counter
- Limits reset independently for each IP
Fixed Window
The API uses a fixed-window approach (the default behavior of express-rate-limit's in-memory store):
- A 15-minute window starts with your first request
- Each request within the window increments the counter
- After 15 minutes, the window resets and the counter starts over
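Conceptually, express-rate-limit's default in-memory store keeps one fixed-window counter per IP. A minimal self-contained sketch of that bookkeeping (`FixedWindowLimiter` is an illustrative name, not part of the library):

```typescript
class FixedWindowLimiter {
  private counters = new Map<string, { count: number; windowStart: number }>();

  constructor(
    private readonly max: number,
    private readonly windowMs: number
  ) {}

  // Records one request from `ip` and returns true if it is allowed,
  // false if the IP has exceeded `max` requests in the current window.
  hit(ip: string, now: number = Date.now()): boolean {
    const entry = this.counters.get(ip);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // First request from this IP, or the previous window expired:
      // start a fresh window with this request counted.
      this.counters.set(ip, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.max;
  }
}
```

Because each IP gets its own entry, clients behind a shared NAT consume the same counter, while unrelated IPs reset independently.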
Handling Rate Limits in Your Application
Check Remaining Requests

```javascript
const response = await fetch('/api/routes');
const remaining = Number(response.headers.get('RateLimit-Remaining'));
const reset = Number(response.headers.get('RateLimit-Reset'));

if (remaining < 10) {
  console.warn(`Only ${remaining} requests remaining`);
  // RateLimit-Reset is the number of seconds until the window resets
  console.warn(`Resets at ${new Date(Date.now() + reset * 1000)}`);
}
```
Implement Retry Logic
```javascript
async function fetchWithRetry(url, options = {}, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    const response = await fetch(url, options);

    if (response.status === 429) {
      // RateLimit-Reset is seconds until the window resets
      const resetSeconds = Number(response.headers.get('RateLimit-Reset'));
      const waitTime = Math.max(resetSeconds * 1000, 1000);
      console.log(`Rate limited. Waiting ${waitTime}ms before retry...`);
      await new Promise(resolve => setTimeout(resolve, waitTime));
      continue;
    }

    return response;
  }
  throw new Error('Max retries exceeded');
}
```
Display Rate Limit Info to Users
```javascript
const response = await fetch('/api/routes/ruta-1/reviews', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(reviewData)
});

if (response.status === 429) {
  // RateLimit-Reset is seconds until the window resets
  const resetSeconds = Number(response.headers.get('RateLimit-Reset'));
  const resetDate = new Date(Date.now() + resetSeconds * 1000);
  alert(`You've made too many requests. Please try again at ${resetDate.toLocaleTimeString()}`);
}
```
Best Practices
1. Cache Responses
Reduce API calls by caching data that doesn't change frequently:
```javascript
const cache = new Map();
const CACHE_TTL = 5 * 60 * 1000; // 5 minutes

async function fetchRoutes() {
  const cached = cache.get('routes');
  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }

  const response = await fetch('/api/routes');
  const data = await response.json();
  cache.set('routes', { data, timestamp: Date.now() });
  return data;
}
```
2. Batch Requests
Combine multiple operations into fewer API calls when possible.
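For example, if an endpoint accepts multiple IDs per request (this is an assumption; the document does not specify such an endpoint), many per-item lookups can be coalesced into a few chunked calls. The fetcher is injected so the batching logic stays independent of any particular API; `fetchInBatches` is an illustrative name:

```typescript
// Coalesces per-item lookups into chunked batch calls: 25 IDs with
// batchSize 10 costs 3 requests against the rate limit instead of 25.
async function fetchInBatches<T>(
  ids: string[],
  fetchBatch: (batch: string[]) => Promise<T[]>,
  batchSize = 10
): Promise<T[]> {
  const results: T[] = [];
  for (let i = 0; i < ids.length; i += batchSize) {
    const batch = ids.slice(i, i + batchSize);
    results.push(...(await fetchBatch(batch)));
  }
  return results;
}
```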
3. Monitor Rate Limit Usage
Track remaining requests to avoid hitting limits unexpectedly:
```javascript
function logRateLimitInfo(response) {
  const limit = response.headers.get('RateLimit-Limit');
  const remaining = response.headers.get('RateLimit-Remaining');
  // RateLimit-Reset is seconds until the window resets
  const resetSeconds = Number(response.headers.get('RateLimit-Reset'));
  const resetDate = new Date(Date.now() + resetSeconds * 1000);
  console.log(`Rate Limit: ${remaining}/${limit} remaining (resets at ${resetDate.toLocaleTimeString()})`);
}
```
4. Handle 429 Errors Gracefully
Always implement proper error handling for rate limit responses:
```javascript
if (response.status === 429) {
  // Show a user-friendly message
  // Implement exponential backoff
  // Log the event for monitoring
}
```
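The backoff delay mentioned above is typically computed as base × 2^attempt, clamped to a maximum. A minimal sketch (jitter omitted for clarity; `backoffDelay` and the default values are illustrative, not prescribed by this API):

```typescript
// Exponential backoff: 1s, 2s, 4s, 8s, ... capped at maxDelayMs.
function backoffDelay(
  attempt: number,
  baseMs = 1000,
  maxDelayMs = 60_000
): number {
  return Math.min(baseMs * 2 ** attempt, maxDelayMs);
}
```

In production code, adding random jitter to each delay helps avoid synchronized retry storms from many clients.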
Endpoint-Specific Limits
| Endpoint Pattern | Rate Limiter | Limit |
|---|---|---|
| /api/auth/* | Authentication | 5 requests / 15 min |
| POST /api/routes/:id/reviews | Creation | 20 requests / 15 min |
| POST /api/favorites | Creation | 20 requests / 15 min |
| POST /api/photos | Creation | 20 requests / 15 min |
| All other endpoints | General | 100 requests / 15 min |
Rate limits are subject to change based on usage patterns and system capacity. Monitor the rate limit headers in your application to handle changes gracefully.