Overview
Shipr includes a built-in rate limiting solution (src/lib/rate-limit.ts) that implements a sliding window algorithm with in-memory storage. It’s ideal for single-instance deployments and Vercel serverless functions.
For multi-instance or high-traffic production deployments, consider using Redis-based solutions like Upstash Rate Limit.
Implementation
The rate limiter is located at src/lib/rate-limit.ts:
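A condensed sketch of how such a limiter can be written. This is an illustration of the approach described below, not the file's verbatim contents; the `rateLimit` and `check` names are assumed:

```typescript
// Sketch of a sliding-window, in-memory rate limiter.
// Names and exact shapes are assumptions; see src/lib/rate-limit.ts for the real code.

interface RateLimitResult {
  success: boolean;   // whether this request is allowed
  limit: number;      // configured max requests per window
  remaining: number;  // requests left in the current window
  reset: number;      // epoch ms when the client may retry
}

interface RateLimiterOptions {
  interval: number;   // window size in ms
  limit: number;      // max requests per window per key
  maxKeys?: number;   // cleanup threshold (10,000 in Shipr's limiter)
}

function rateLimit({ interval, limit, maxKeys = 10_000 }: RateLimiterOptions) {
  // Timestamp tracking: one bucket of request timestamps per key (user ID, IP, ...).
  const buckets = new Map<string, number[]>();

  function check(key: string, now = Date.now()): RateLimitResult {
    // Window filtering: drop timestamps older than the sliding window.
    const fresh = (buckets.get(key) ?? []).filter((t) => now - t < interval);

    // Automatic cleanup: when the cache grows too large, evict stale buckets.
    if (buckets.size > maxKeys) {
      for (const [k, v] of buckets) {
        if (v.every((t) => now - t >= interval)) buckets.delete(k);
      }
    }

    // Limit enforcement: reject once the bucket is full.
    if (fresh.length >= limit) {
      buckets.set(key, fresh);
      // Reset when the oldest timestamp ages out of the window.
      return { success: false, limit, remaining: 0, reset: fresh[0] + interval };
    }

    fresh.push(now);
    buckets.set(key, fresh);
    return { success: true, limit, remaining: limit - fresh.length, reset: now + interval };
  }

  return { check };
}
```

Because timestamps (not per-window counters) are stored, the window truly slides: a burst at the end of one minute still counts against the start of the next.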
How It Works
Sliding Window Algorithm
- Timestamp tracking: Each request timestamp is stored in a bucket identified by a unique key (e.g., user ID, IP address)
- Window filtering: On each check, timestamps older than the current window are removed
- Limit enforcement: If the bucket has too many timestamps, the request is rejected
- Automatic cleanup: Periodic cleanup prevents memory leaks by removing empty buckets
Key Features
- Per-key limits: Different limits for different users/IPs
- Accurate windowing: True sliding window (not fixed buckets)
- Memory-efficient: Automatic cleanup when cache exceeds 10,000 entries
- Reset timestamps: Clients know exactly when they can retry
Basic Usage
Create a rate limiter in your API route.
Real-World Examples
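Every route below follows the same basic pattern: create the limiter once at module scope (so its state survives across requests), derive a key from the request, and check it. A self-contained sketch; the import path is hypothetical and the inline stub stands in for the real module:

```typescript
// Hypothetical import; the real module lives at src/lib/rate-limit.ts.
// import { rateLimit } from "@/lib/rate-limit";

// Minimal stand-in limiter so this sketch runs on its own.
function rateLimit(opts: { interval: number; limit: number }) {
  const buckets = new Map<string, number[]>();
  return {
    check(key: string, now = Date.now()) {
      const fresh = (buckets.get(key) ?? []).filter((t) => now - t < opts.interval);
      const success = fresh.length < opts.limit;
      if (success) fresh.push(now);
      buckets.set(key, fresh);
      return { success, limit: opts.limit, remaining: opts.limit - fresh.length };
    },
  };
}

// Created once at module scope, not per request. 10 requests/minute (illustrative).
const limiter = rateLimit({ interval: 60_000, limit: 10 });

export function POST(request: Request): Response {
  const key = request.headers.get("x-forwarded-for") ?? "unknown";
  const { success } = limiter.check(key);
  if (!success) return new Response("Too many requests", { status: 429 });
  return new Response("ok", { status: 200 });
}
```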
Health Check Endpoint
src/app/api/health/route.ts - Simple rate limiting by IP:
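A sketch of what that handler might look like. The inline `rateLimit` stub stands in for the real src/lib/rate-limit.ts export, and the limit values are illustrative:

```typescript
// Stand-in limiter so the sketch is self-contained; the real route
// would import this from src/lib/rate-limit.ts.
function rateLimit(opts: { interval: number; limit: number }) {
  const buckets = new Map<string, number[]>();
  return {
    check(key: string, now = Date.now()) {
      const fresh = (buckets.get(key) ?? []).filter((t) => now - t < opts.interval);
      const success = fresh.length < opts.limit;
      if (success) fresh.push(now);
      buckets.set(key, fresh);
      return {
        success,
        limit: opts.limit,
        remaining: Math.max(0, opts.limit - fresh.length),
        reset: (fresh[0] ?? now) + opts.interval,
      };
    },
  };
}

// 30 requests per minute per IP (illustrative).
const limiter = rateLimit({ interval: 60_000, limit: 30 });

export function GET(request: Request): Response {
  // Left-most x-forwarded-for entry is the original client; fall back for local dev.
  const ip = request.headers.get("x-forwarded-for")?.split(",")[0]?.trim() ?? "unknown";
  const { success, limit, remaining, reset } = limiter.check(ip);

  const headers: Record<string, string> = {
    "X-RateLimit-Limit": String(limit),
    "X-RateLimit-Remaining": String(remaining),
    "X-RateLimit-Reset": String(reset),
  };

  if (!success) {
    headers["Retry-After"] = String(Math.max(1, Math.ceil((reset - Date.now()) / 1000)));
    return new Response(JSON.stringify({ error: "Too many requests" }), { status: 429, headers });
  }
  return new Response(JSON.stringify({ status: "ok" }), { status: 200, headers });
}
```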
Email API with User Authentication
src/app/api/email/route.ts - Rate limiting by IP for anonymous requests:
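The route body is omitted here; the sketch below shows only the key-selection logic, with assumed helper names. Authenticated callers are keyed per user, anonymous callers per IP, so one shared proxy IP cannot exhaust the quota of logged-in users:

```typescript
// Key selection for the email route (sketch; names are assumptions).
function emailRateLimitKey(userId: string | null, forwardedFor: string | null): string {
  if (userId) return `email:user:${userId}`;
  // Anonymous request: key by the left-most forwarded IP.
  const ip = forwardedFor?.split(",")[0]?.trim() || "unknown";
  return `email:ip:${ip}`;
}
```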
Chat API with Composite Keys
src/app/api/chat/route.ts - Rate limiting by user ID + IP combination:
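A sketch of the composite key (names assumed). Combining user ID and IP means a stolen token used from many IPs, or many accounts behind one IP, are each tracked as distinct buckets:

```typescript
// Composite key for the chat route (sketch; names are assumptions).
function chatRateLimitKey(userId: string, forwardedFor: string | null): string {
  const ip = forwardedFor?.split(",")[0]?.trim() || "unknown";
  return `chat:${userId}:${ip}`;
}
```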
Rate Limit Headers
Follow the standard HTTP rate limit headers: X-RateLimit-Limit, X-RateLimit-Remaining, and X-RateLimit-Reset on every response, plus Retry-After on 429 responses.
Common Patterns
Different Limits for Different Endpoints
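Cheap endpoints can afford generous budgets; expensive or abuse-prone ones should be tight. For instance (values are illustrative, not taken from the repo):

```typescript
// One limiter config per endpoint class (illustrative values).
const endpointLimits = {
  "/api/health": { interval: 60_000, limit: 60 },    // cheap; safe to poll
  "/api/email":  { interval: 3_600_000, limit: 5 },  // expensive; abuse-prone
  "/api/chat":   { interval: 60_000, limit: 20 },    // moderate cost per request
} as const;
```

Each config would be passed to its own limiter instance at module scope, so the buckets for different endpoints never interfere.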
IP Extraction from Headers
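A sketch of a header-parsing helper. x-forwarded-for may contain a comma-separated proxy chain, with the original client left-most; the x-real-ip fallback is an assumption about your proxy setup:

```typescript
// Extract the client IP from proxy headers (sketch).
function getClientIp(headers: Headers): string {
  const forwarded = headers.get("x-forwarded-for");
  if (forwarded) return forwarded.split(",")[0].trim();
  return headers.get("x-real-ip") ?? "unknown";
}
```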
User-Based Rate Limiting
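A sketch of a per-user key helper with an anonymous fallback; the session shape is an assumption:

```typescript
// Key by user ID when authenticated, by IP otherwise (sketch).
function userRateLimitKey(session: { user?: { id: string } } | null, ip: string): string {
  return session?.user?.id ? `user:${session.user.id}` : `anon:${ip}`;
}
```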
Configuration Examples
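Some illustrative presets, to be tuned against your own traffic:

```typescript
// Illustrative presets (not taken from the repo).
const presets = {
  strict:   { interval: 60_000, limit: 5 },   // auth, password reset, email sends
  standard: { interval: 60_000, limit: 30 },  // typical API routes
  lenient:  { interval: 60_000, limit: 120 }, // cheap, cacheable reads
};
```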
Limitations
Single-Instance Only
The in-memory implementation doesn’t share state across multiple server instances; each instance maintains its own rate limit counters, so clients can exceed the intended limit by spreading requests across instances. Solution: use Redis-based rate limiting (such as Upstash Rate Limit) for distributed systems.
Memory Considerations
The limiter stores up to 10,000 unique keys before triggering cleanup. For applications with millions of users, consider:
- Using shorter time windows
- Implementing more aggressive cleanup
- Switching to Redis-based solutions
Best Practices
- Always include rate limit headers - Even on successful requests
- Use meaningful keys - Combine user ID + IP for better tracking
- Set appropriate limits - Balance UX and resource protection
- Return proper error codes - Use 429 for rate limit errors
- Include Retry-After header - Help clients implement backoff
- Monitor your limits - Track 429 responses in analytics
- Test your limits - Verify behavior under load
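Several of these practices combine into a single rejection helper; a sketch:

```typescript
// Build a 429 response carrying the headers clients need for backoff (sketch).
function tooManyRequests(limit: number, reset: number, now = Date.now()): Response {
  // Retry-After is in whole seconds; never advertise zero.
  const retryAfterSeconds = Math.max(1, Math.ceil((reset - now) / 1000));
  return new Response(JSON.stringify({ error: "Too many requests" }), {
    status: 429, // proper error code for rate limiting
    headers: {
      "Content-Type": "application/json",
      "X-RateLimit-Limit": String(limit),
      "X-RateLimit-Remaining": "0",
      "X-RateLimit-Reset": String(reset),
      "Retry-After": String(retryAfterSeconds),
    },
  });
}
```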