Overview
The e-commerce API implements rate limiting to ensure fair usage, prevent abuse, and maintain service quality for all users. Rate limits are applied at different levels depending on the type of operation.
Rate Limit Configuration
The API uses multiple rate limiting strategies based on the operation type.
Kinesis Stream Rate Limiting
Data streaming to AWS Kinesis is rate-limited to prevent overwhelming the downstream processing pipeline. The relevant settings in settings.py include:
- Order event streaming (KINESIS_ORDERS_STREAM)
- Product update streaming (KINESIS_PRODUCT_STREAM)
- Product update events (KINESIS_PRODUCT_UPDATE_STREAM)
Kinesis rate limits are configured per environment and may vary between staging and production.
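As a rough illustration of how these settings might look, here is a hypothetical settings.py fragment. The setting names match those listed above; the stream names and the per-second limit are illustrative values only, since actual values vary per environment.

```python
# Hypothetical settings.py excerpt -- stream names and limits are
# illustrative; real values are environment-specific.
KINESIS_ORDERS_STREAM = "orders-stream"
KINESIS_PRODUCT_STREAM = "product-stream"
KINESIS_PRODUCT_UPDATE_STREAM = "product-update-stream"

# Cap on records put to each stream per second (assumed name/value).
KINESIS_PUT_RECORDS_PER_SECOND = 500
```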
Background Queue Processing
Celery task queues have visibility timeouts to prevent duplicate processing. These settings (in settings.py) ensure that:
- Tasks are not picked up by multiple workers
- Failed tasks can be retried after timeout
- System resources are used efficiently
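A minimal sketch of how a visibility timeout is typically configured for a Redis-backed Celery broker is shown below. The broker URL and the 3600-second value are assumptions for illustration; the real project's values live in its settings.py.

```python
# Hypothetical Celery configuration sketch. With a Redis broker, the
# visibility timeout controls how long a reserved task stays hidden
# from other workers before it is redelivered for retry.
from celery import Celery

app = Celery("ecommerce", broker="redis://localhost:6379/0")  # illustrative URL
app.conf.broker_transport_options = {
    "visibility_timeout": 3600,  # seconds before an unacked task is re-queued
}
```

Setting the timeout longer than your slowest task prevents a healthy worker's in-progress task from being handed to a second worker.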
Authentication Rate Limits
To prevent brute force attacks and abuse, authentication endpoints have strict limits.
OTP Generation and Verification
The OTP limits are configured in settings.py.
OTP Rate Limit Logic
Verification Attempts:
- Users can verify an OTP up to 3 times
- Exceeding the limit triggers a temporary account block

New OTP Requests:
- Users can request a new OTP 2 times per session
- Additional requests require a waiting period

OTP Resends:
- Users can resend an OTP up to 6 times
- After the limit, users must wait for a cooldown period

Account Blocking:
- Repeated failed attempts result in a 24-hour temporary block
- Error message: “Your Account has been blocked due to multiple fail attempts”
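The counting logic behind the verification limit can be sketched as follows. This is an in-memory illustration only; a real implementation would likely use Redis with a 24-hour TTL on the block. The function name and structure are assumptions, but the limit of 3 attempts matches the rules above.

```python
# Sketch of the OTP verification limit: after 3 failed attempts the
# account is temporarily blocked. In-memory store for illustration;
# production code would use a shared cache with a TTL.
MAX_VERIFY_ATTEMPTS = 3

_attempts: dict[str, int] = {}

def record_failed_verification(user_id: str) -> bool:
    """Record one failed attempt; return True if the account is now blocked."""
    _attempts[user_id] = _attempts.get(user_id, 0) + 1
    return _attempts[user_id] >= MAX_VERIFY_ATTEMPTS
```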
User Account Creation
To prevent duplicate account creation and abuse, a registration lock is configured in settings.py:
- Lock is placed on email/phone during registration
- Lock expires after 5 minutes
- Prevents duplicate account creation race conditions
- Returns friendly message if creation is in progress
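The lock described above can be sketched as an atomic "claim if absent" operation, the semantics Django's cache.add provides. This in-memory version is illustrative only; the function and key names are assumptions, while the 5-minute expiry matches the rule above.

```python
# Sketch of the registration lock: claim an email/phone for 5 minutes
# while account creation runs. In-memory for illustration; a real
# implementation would use a shared cache (e.g. Django cache.add).
import time

LOCK_TTL = 300  # 5 minutes, per the documented expiry
_locks: dict[str, float] = {}  # identifier -> expiry timestamp

def acquire_registration_lock(identifier: str) -> bool:
    """Claim an email/phone for registration; False if already claimed."""
    now = time.monotonic()
    expiry = _locks.get(identifier)
    if expiry is not None and expiry > now:
        return False  # another registration is in progress
    _locks[identifier] = now + LOCK_TTL
    return True
```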
Transaction Rate Limits
Order placement is rate-limited to prevent duplicate transactions. The lock settings are defined in settings.py.
Transaction Lock
Prevents duplicate order submission
- Lock duration: 15 minutes
- Applied per user + cart combination
- Automatically released after completion
User Restrictions
Limits concurrent transactions per user
- Prevents payment processing conflicts
- Protects against accidental double-charges
- Returns clear error message when active
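The transaction lock above can be sketched as a keyed lock with a 15-minute expiry, applied per user + cart combination and released on completion. The function and key names are illustrative assumptions; the duration and keying match the rules above.

```python
# Sketch of the transaction lock: one in-flight order per user + cart,
# held for at most 15 minutes, released when the order completes.
# In-memory for illustration; production would use a shared cache.
import time

ORDER_LOCK_TTL = 15 * 60  # 15 minutes, per the documented duration
_order_locks: dict[str, float] = {}

def try_place_order(user_id: str, cart_id: str) -> bool:
    """Claim the order lock; False means a duplicate submission was blocked."""
    key = f"order_lock:{user_id}:{cart_id}"
    now = time.monotonic()
    if _order_locks.get(key, 0.0) > now:
        return False
    _order_locks[key] = now + ORDER_LOCK_TTL
    return True

def release_order_lock(user_id: str, cart_id: str) -> None:
    """Release the lock once the order has completed."""
    _order_locks.pop(f"order_lock:{user_id}:{cart_id}", None)
```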
Callback Request Limits
Customer support callback requests are rate-limited, with the window configured in settings.py:
- One callback request per 24-hour period
- Additional requests show existing callback time window
- Ensures efficient support resource allocation
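The 24-hour window can be sketched as a simple last-request check. The function name and storage are illustrative assumptions; the window length matches the rule above.

```python
# Sketch of the callback limit: one request per rolling 24-hour window.
# In-memory for illustration; production would persist the timestamp.
from datetime import datetime, timedelta

CALLBACK_WINDOW = timedelta(hours=24)
_last_request: dict[str, datetime] = {}

def can_request_callback(user_id: str, now: datetime) -> bool:
    """Allow a callback request if none was made in the last 24 hours."""
    last = _last_request.get(user_id)
    if last is not None and now - last < CALLBACK_WINDOW:
        return False  # caller should be shown the existing callback window
    _last_request[user_id] = now
    return True
```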
Ozonetel Callback Retries
Retry behavior for failed Ozonetel calls is configured in settings.py:
- Maximum automatic retries for failed calls
- Configurable retry intervals
- Prevents overwhelming support infrastructure
Email and Refund Limits
Automated Refund Emails
Automated refund emails are rate-limited via settings in settings.py.
Rate Limit Headers
The API currently does not expose rate limit information in response headers. Rate limits are enforced server-side and return error responses when exceeded.
When a limit is exceeded, the API returns one of the following error responses:
- Transaction Lock Response
- User Creation Lock Response
- Multiple Callback Request Response
- Account Block Response
Handling 429 Responses
While the API uses HTTP 400/401 for most rate limit scenarios, future versions may implement HTTP 429 (Too Many Requests) responses.
1. Implement Exponential Backoff
2. Display Clear User Messages
3. Prevent Duplicate Submissions
4. Monitor Rate Limit Errors
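Step 1 above can be sketched as a client-side retry wrapper. The `call_api` stand-in and `RateLimitError` are illustrative assumptions, not part of the API; the pattern (exponential delay plus jitter) is the standard backoff technique.

```python
# Client-side exponential backoff with jitter. `call` is any function
# that raises RateLimitError when the server rejects the request.
import random
import time

class RateLimitError(Exception):
    """Raised when the API signals a rate limit was exceeded."""

def with_backoff(call, max_retries=4, base_delay=0.5):
    """Retry `call` with exponentially growing delays: 0.5s, 1s, 2s, ..."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries:
                raise  # exhausted retries; surface the error to the caller
            # Jitter spreads out retries from many clients.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```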
Best Practices
For Client Applications
Cache Responses
Cache GET requests to reduce API calls
- Use appropriate cache TTLs
- Implement stale-while-revalidate
- Respect cache-control headers
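The caching practice above can be sketched with a minimal TTL cache. The class and its 60-second default are illustrative; in practice, match TTLs to how quickly each resource changes.

```python
# Minimal TTL cache for GET responses: entries expire after `ttl`
# seconds, so repeated reads within the window skip the API call.
import time

class TTLCache:
    def __init__(self, ttl=60.0):
        self.ttl = ttl
        self._store = {}  # key -> (value, stored_at)

    def get(self, key):
        hit = self._store.get(key)
        if hit is None:
            return None
        value, stored_at = hit
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]  # stale; force a fresh fetch
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic())
```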
Debounce User Input
Delay API calls for search/filter operations
- 300-500ms debounce for autocomplete
- Prevent typing from triggering too many requests
Batch Operations
Combine multiple requests when possible
- Use batch endpoints if available
- Queue non-critical updates
Handle Errors Gracefully
Provide clear feedback to users
- Show progress indicators
- Display helpful error messages
- Offer retry options
For Server Applications
Implement Request Queuing
- Use a queue system for bulk operations
- Process items in batches
- Respect rate limits in queue workers
Use Service Accounts Wisely
- Don’t share credentials across services
- Implement proper authentication for each integration
- Monitor API usage per service
Implement Circuit Breakers
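A minimal circuit-breaker sketch for the practice named above: after a threshold of consecutive failures, calls are short-circuited until a reset interval passes, protecting both your service and the API. Class name, threshold, and reset interval are illustrative assumptions.

```python
# Circuit breaker: opens after `threshold` consecutive failures and
# rejects calls until `reset_after` seconds pass, then allows one
# probe request through (half-open state).
import time

class CircuitBreaker:
    def __init__(self, threshold=5, reset_after=30.0):
        self.threshold = threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def allow(self) -> bool:
        """Return True if a request may be attempted now."""
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.reset_after:
            self.opened_at = None  # half-open: let one request probe
            self.failures = 0
            return True
        return False

    def record_failure(self) -> None:
        self.failures += 1
        if self.failures >= self.threshold:
            self.opened_at = time.monotonic()  # trip the breaker

    def record_success(self) -> None:
        self.failures = 0
        self.opened_at = None
```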
Monitoring Your Usage
While the API doesn’t expose usage metrics in headers, you should:
- Log all API requests in your application
- Track error rates especially for rate limit errors
- Monitor response times to detect performance degradation
- Set up alerts for repeated rate limit errors
Contact Support
If you’re consistently hitting rate limits:
- Review your implementation for inefficient API usage
- Consider caching strategies to reduce request volume
- Contact API support to discuss higher rate limits for your use case
- Provide usage patterns and business justification for limit increases
Rate limits are designed to ensure fair usage and system stability. If you have legitimate high-volume use cases, reach out to discuss enterprise solutions.