Overview
Connect World implements an in-memory rate limiter to prevent API abuse and protect against spam, DDoS attacks, and excessive usage. The rate limiter tracks requests per IP address and enforces configurable limits across all critical endpoints.
The current implementation uses in-memory storage, suitable for single-instance deployments. For production multi-instance setups, migrate to Redis for distributed rate limiting.
Implementation
Core Rate Limiter
The rate limiter is implemented in src/lib/rateLimiter.ts:30 and uses a simple but effective sliding window algorithm.
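The implementation itself is not reproduced here, but a minimal sketch of a Map-backed sliding-window check might look like the following (function and variable names are illustrative, not the module's actual exports):

```typescript
// Illustrative sketch only: key -> timestamps (ms) of recent requests.
const store = new Map<string, number[]>();

// Returns true if the request identified by `key` is allowed under
// `limit` requests per `windowMs` milliseconds.
function checkRateLimit(
  key: string,
  limit: number,
  windowMs: number,
  now = Date.now(),
): boolean {
  const cutoff = now - windowMs;
  // Keep only timestamps that still fall inside the sliding window.
  const timestamps = (store.get(key) ?? []).filter((t) => t > cutoff);
  if (timestamps.length >= limit) {
    store.set(key, timestamps);
    return false; // limit reached; do not record the rejected request
  }
  timestamps.push(now);
  store.set(key, timestamps);
  return true;
}
```

Each call prunes timestamps older than the window before counting, so the limit slides with time instead of resetting on a fixed boundary.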
IP Address Extraction
The limiter identifies clients by IP address, supporting reverse proxies and CDNs. The function checks X-Forwarded-For first (common with load balancers), then falls back to X-Real-IP, ensuring accurate client identification behind proxies.
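A sketch of that fallback order, written against a framework-agnostic header-lookup function rather than the project's actual request type (the helper name is an assumption):

```typescript
// Hypothetical helper; the real extraction lives in src/lib/rateLimiter.ts.
// Accepts a header lookup so it works with any framework's request object.
function getClientIp(getHeader: (name: string) => string | null): string {
  const forwarded = getHeader("x-forwarded-for");
  if (forwarded) {
    // X-Forwarded-For may contain a chain: "client, proxy1, proxy2".
    // The left-most entry is the original client.
    return forwarded.split(",")[0].trim();
  }
  return getHeader("x-real-ip") ?? "unknown";
}
```

Taking the first entry of X-Forwarded-For matters because each proxy appends its own address to the end of the chain.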
Memory Management
The rate limiter automatically cleans up expired entries to prevent memory leaks.
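A self-contained sketch of such a sweep, assuming the store maps keys to arrays of request timestamps (redeclared here so the example stands alone):

```typescript
// Illustrative store shape: key -> request timestamps (ms).
const store = new Map<string, number[]>();

// Drop timestamps outside the window; delete keys with no recent requests.
function cleanupExpired(windowMs: number, now = Date.now()): void {
  for (const [key, timestamps] of store) {
    const live = timestamps.filter((t) => t > now - windowMs);
    if (live.length === 0) {
      store.delete(key); // entry fully expired
    } else {
      store.set(key, live);
    }
  }
}

// A periodic timer keeps the Map bounded, e.g.:
// setInterval(() => cleanupExpired(15 * 60 * 1000), 60_000);
```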
Rate Limit Configurations
Each API endpoint has tailored rate limits based on its purpose and sensitivity.
Order Creation
Endpoint: /api/orders
Limit: 5 requests per 10 minutes
Purpose: Anti-spam protection for order submissions
Stripe Payment Intents
Endpoint: /api/stripe
Limit: 10 requests per 15 minutes
Purpose: Prevent payment intent spam and potential card testing
PayPal Order Creation
Endpoint: /api/paypal/create-order
Limit: 10 requests per 15 minutes
Purpose: Prevent PayPal API abuse and excessive order creation
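These settings could be gathered into a single config object; the shape below is illustrative (the actual code may declare its limits inline per route):

```typescript
// Hypothetical central config mirroring the limits documented above.
const RATE_LIMITS = {
  orders: { limit: 5, windowMs: 10 * 60 * 1000 },           // 5 req / 10 min
  stripe: { limit: 10, windowMs: 15 * 60 * 1000 },          // 10 req / 15 min
  paypalCreateOrder: { limit: 10, windowMs: 15 * 60 * 1000 }, // 10 req / 15 min
} as const;

// Per-endpoint keys keep limits independent across endpoints.
function rateLimitKey(endpoint: keyof typeof RATE_LIMITS, ip: string): string {
  return `${endpoint}:${ip}`;
}
```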
Best Practices
Choose appropriate limits
Set limits based on expected legitimate usage patterns. Too strict limits frustrate users; too lenient limits fail to prevent abuse. Monitor your endpoint usage to find the right balance.
Use unique rate limit keys
Each endpoint should have its own rate limit key (e.g., orders:${ip}, stripe:${ip}). This prevents legitimate usage on one endpoint from affecting others.
Return meaningful error messages
Always return HTTP 429 (Too Many Requests) with a clear message explaining when the user can retry. This improves user experience and reduces support requests.
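For example, a response helper along these lines (field names are assumptions, not the app's actual API) keeps the payload consistent across endpoints:

```typescript
// Illustrative 429 payload builder with a standard Retry-After header.
function tooManyRequests(retryAfterSeconds: number) {
  return {
    status: 429, // Too Many Requests
    headers: { "Retry-After": String(retryAfterSeconds) },
    body: {
      error: "Too many requests",
      message: `Rate limit exceeded. Try again in ${Math.ceil(retryAfterSeconds / 60)} minute(s).`,
    },
  };
}
```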
Consider upgrading to distributed rate limiting
For production deployments with multiple instances:
- Use Redis with sliding window counters
- Implement the Token Bucket or Leaky Bucket algorithm
- Consider rate limiting by user ID in addition to IP
- Add rate limit headers (X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset)
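A small illustrative helper can build that trio of headers:

```typescript
// Builds the conventional X-RateLimit-* headers from limiter state.
function rateLimitHeaders(limit: number, remaining: number, resetAtMs: number) {
  return {
    "X-RateLimit-Limit": String(limit),
    "X-RateLimit-Remaining": String(Math.max(0, remaining)), // never negative
    "X-RateLimit-Reset": String(Math.ceil(resetAtMs / 1000)), // unix seconds
  };
}
```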
Scaling Considerations
Migration Path
- Set up Redis instance (AWS ElastiCache, Upstash, Redis Cloud)
- Replace the in-memory Map with Redis commands
- Use atomic operations to prevent race conditions
- Add rate limit headers for better client visibility
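As a sketch of the migration, a fixed-window variant needs only two Redis commands; the RedisLike interface below models just the subset used (both ioredis and node-redis provide incr and expire):

```typescript
// Minimal model of the Redis commands this sketch relies on.
interface RedisLike {
  incr(key: string): Promise<number>;
  expire(key: string, seconds: number): Promise<number>;
}

// Fixed-window check: first hit in a window creates the key and sets its TTL;
// subsequent hits increment it until the TTL expires.
async function allowRequest(
  redis: RedisLike,
  key: string,
  limit: number,
  windowSeconds: number,
): Promise<boolean> {
  const count = await redis.incr(key); // atomic on the Redis side
  if (count === 1) {
    await redis.expire(key, windowSeconds); // start the window's expiry clock
  }
  return count <= limit;
}
```

A production version should combine incr and expire in a MULTI block or Lua script so a crash between the two calls cannot leave a key without a TTL; a sorted set of timestamps would preserve the true sliding-window behavior.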
Security Benefits
- Prevents brute force attacks on payment endpoints
- Mitigates DDoS by limiting requests per IP
- Reduces spam and fraudulent order submissions
- Protects external APIs (Stripe, PayPal) from excessive usage
- Saves costs by preventing API quota exhaustion
Related Documentation
- Data Sanitization - Input validation and cleaning
- Validation - Business logic validation rules
