Current Implementation
The Masar Eagle API does not currently enforce global rate limits at the gateway level. However, individual services may implement their own rate limiting, and global limits may be introduced in future releases, so understanding rate-limiting concepts is important for future-proofing your integration.
Service-Level Limits
Some services implement their own rate limiting policies:
OTP Service Limits
The authentication OTP service enforces the following limits to prevent abuse:
OTP Verification
- Max Attempts: 5 verification attempts per OTP
- Reset: on successful verification or OTP expiry

OTP Resend
- Max Resends: 3 within a 30-minute window
- Cooldown: 1 minute between resend requests
Exceeded Limits Response:
{
  "status": 400,
  "title": "OTP limit exceeded",
  "detail": "لقد تجاوزت الحد الأقصى لعدد محاولات التحقق",
  "type": "https://tools.ietf.org/html/rfc7231#section-6.5.1"
}

The Arabic detail message translates to: "You have exceeded the maximum number of verification attempts."
Upload Size Limits
File upload endpoints have the following restrictions:
- Max Request Size: unlimited (controlled by Kestrel configuration)
- Multipart Body Length: configured at application level
- File Type Validation: enforced per endpoint
While there’s no hard limit on request size, network timeouts and proxy configurations may effectively limit very large uploads.
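Because oversized uploads tend to fail late (via a timeout) rather than fast, a client-side size guard is worth adding before starting a transfer. This is a sketch only; the 100 MB threshold is an illustrative assumption, not an API limit:

```javascript
// Pre-upload guard: flag files likely to hit timeouts mid-transfer.
// The 100 MB threshold is an illustrative assumption, not an API limit.
const MAX_UPLOAD_BYTES = 100 * 1024 * 1024;

function validateUploadSize(file) {
  if (file.size > MAX_UPLOAD_BYTES) {
    const mb = (file.size / (1024 * 1024)).toFixed(1);
    return { ok: false, reason: `File is ${mb} MB; very large uploads may time out` };
  }
  return { ok: true };
}
```

Run the check against the `File` object from a file input before calling the upload endpoint, and surface the `reason` to the user instead of letting the request fail minutes later.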
Future Rate Limiting
Planned rate limiting features include:
Global API Limits (Planned)
Per-User Limits
- Authenticated Requests: 1000 requests/minute
- Burst Allowance: 1500 requests/minute
- Anonymous Requests: 60 requests/minute

Per-Endpoint Limits
When rate limiting is implemented, responses will include the following headers:
- X-RateLimit-Limit: maximum number of requests allowed in the current window
- X-RateLimit-Remaining: number of requests remaining in the current window
- X-RateLimit-Reset: Unix timestamp when the rate limit window resets
- Retry-After: seconds to wait before retrying (included in 429 responses)
Example Headers:
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 847
X-RateLimit-Reset: 1710345000
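A small helper can surface these values from a response once the gateway starts serving them; until then, every field simply comes back null. This is a sketch against the planned header names above (the helper name is ours, not part of the API):

```javascript
// Reads the planned rate-limit headers from any object with a
// .get(name) lookup (e.g. a fetch Response's .headers).
// Returns nulls until the gateway actually serves these headers.
function readRateLimitInfo(headers) {
  const num = (name) => {
    const value = headers.get(name);
    return value == null ? null : parseInt(value, 10);
  };
  return {
    limit: num('X-RateLimit-Limit'),         // requests allowed per window
    remaining: num('X-RateLimit-Remaining'), // requests left in this window
    resetAt: num('X-RateLimit-Reset'),       // Unix timestamp (seconds)
    retryAfter: num('Retry-After')           // only present on 429 responses
  };
}
```

Calling this on every response lets you log remaining quota centrally instead of scattering header reads through your client code.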
Rate Limit Exceeded Response (Planned)
When rate limits are exceeded, the API will return HTTP 429:
{
  "type": "https://tools.ietf.org/html/rfc6585#section-4",
  "title": "Too Many Requests",
  "status": 429,
  "detail": "Rate limit exceeded. Please retry after 60 seconds.",
  "instance": "/api/drivers",
  "errorId": "ABC123DEF456",
  "timestamp": "2024-03-10T14:30:00.000Z",
  "extensions": {
    "retryAfter": 60,
    "limit": 1000,
    "windowSeconds": 60
  }
}
Best Practices
Even without enforced rate limits, follow these best practices:
1. Implement Exponential Backoff
Retry failed requests with exponential backoff:
class APIClient {
  async requestWithRetry(url, options, maxRetries = 3) {
    for (let attempt = 0; attempt < maxRetries; attempt++) {
      try {
        const response = await fetch(url, options);
        if (response.ok) {
          return response;
        }

        // Retry on 429 or 5xx errors
        if (response.status === 429 || response.status >= 500) {
          const retryAfter = response.headers.get('Retry-After');
          const delay = retryAfter
            ? parseInt(retryAfter, 10) * 1000
            : Math.pow(2, attempt) * 1000;
          console.log(`Retrying after ${delay}ms (attempt ${attempt + 1})`);
          await this.sleep(delay);
          continue;
        }

        // Don't retry on client errors (4xx except 429)
        return response;
      } catch (error) {
        if (attempt === maxRetries - 1) throw error;
        const delay = Math.pow(2, attempt) * 1000;
        await this.sleep(delay);
      }
    }
    // All retries exhausted on retryable status codes
    throw new Error('Max retries exceeded');
  }

  sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}
2. Cache Responses
Cache responses to reduce API calls:
class CachedAPIClient {
  constructor() {
    this.cache = new Map();
  }

  async get(url, ttl = 60000) {
    const cached = this.cache.get(url);
    if (cached && Date.now() - cached.timestamp < ttl) {
      console.log('Returning cached response');
      return cached.data;
    }

    const response = await fetch(url);
    const data = await response.json();
    this.cache.set(url, {
      data,
      timestamp: Date.now()
    });
    return data;
  }

  invalidate(url) {
    this.cache.delete(url);
  }

  clear() {
    this.cache.clear();
  }
}
3. Batch Requests
Where possible, batch multiple operations into a single request:
Bad Practice ❌

// Making 100 individual requests
for (const id of vehicleIds) {
  await fetch(`/api/vehicles/${id}`);
}
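The batched alternative fetches many vehicles per request. The `ids` query parameter and the chunk size below are assumptions for illustration; confirm which batch or filter endpoints your deployment actually exposes:

```javascript
// Good practice: one request per chunk of IDs instead of one per ID.
// Assumes a hypothetical `?ids=` filter on the collection endpoint.
function buildBatchUrls(basePath, ids, chunkSize = 50) {
  const urls = [];
  // Chunk the ID list so very long lists don't exceed URL length limits
  for (let i = 0; i < ids.length; i += chunkSize) {
    const chunk = ids.slice(i, i + chunkSize);
    urls.push(`${basePath}?ids=${chunk.map(encodeURIComponent).join(',')}`);
  }
  return urls;
}

// 100 vehicle IDs become 2 requests instead of 100:
// const responses = await Promise.all(
//   buildBatchUrls('/api/vehicles', vehicleIds).map(url => fetch(url))
// );
```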
4. Use Webhooks Instead of Polling
For real-time updates, use webhooks or WebSockets instead of polling:
Bad Practice ❌

// Polling every second
setInterval(async () => {
  const status = await fetch('/api/trips/123/status');
  updateUI(status);
}, 1000);
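The push-based alternative subscribes once and lets the server deliver updates. The WebSocket URL scheme and message shape here are assumptions; adapt them to whatever real-time channel your deployment exposes (webhook delivery works similarly on the server side):

```javascript
// Good practice: subscribe once, reconnect with capped exponential backoff.
// Uses the global WebSocket (browsers, Node 21+); the URL path and JSON
// message shape are assumptions, not a documented Masar Eagle channel.
class TripStatusSubscriber {
  constructor(tripId, onStatus, wsBase = 'wss://example.com/ws') {
    this.tripId = tripId;
    this.onStatus = onStatus;
    this.wsBase = wsBase;
    this.attempt = 0;
  }

  // Exponential reconnect delay: 1s, 2s, 4s, ... capped at 30s
  reconnectDelay() {
    return Math.min(1000 * 2 ** this.attempt, 30000);
  }

  connect() {
    const ws = new WebSocket(`${this.wsBase}/trips/${this.tripId}/status`);
    ws.onopen = () => { this.attempt = 0; };
    ws.onmessage = (event) => this.onStatus(JSON.parse(event.data));
    ws.onclose = () => {
      // Reconnect after a backoff instead of hammering the server
      setTimeout(() => this.connect(), this.reconnectDelay());
      this.attempt++;
    };
  }
}
```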
5. Monitor Your Usage
Track your API usage to identify optimization opportunities:
class APIMonitor {
  constructor() {
    this.metrics = {
      requests: 0,
      errors: 0,
      totalTime: 0
    };
  }

  async track(request) {
    const start = Date.now();
    this.metrics.requests++;
    try {
      const response = await request();
      this.metrics.totalTime += Date.now() - start;
      return response;
    } catch (error) {
      this.metrics.errors++;
      throw error;
    }
  }

  getStats() {
    return {
      ...this.metrics,
      averageTime: this.metrics.totalTime / this.metrics.requests,
      errorRate: this.metrics.errors / this.metrics.requests
    };
  }
}
Request Throttling Example
Implement client-side throttling to stay within limits:
JavaScript - Request Queue
class ThrottledAPIClient {
  constructor(requestsPerMinute = 60) {
    this.queue = [];
    this.processing = false;
    this.interval = 60000 / requestsPerMinute;
  }

  async request(url, options) {
    return new Promise((resolve, reject) => {
      this.queue.push({ url, options, resolve, reject });
      this.processQueue();
    });
  }

  async processQueue() {
    if (this.processing || this.queue.length === 0) {
      return;
    }
    this.processing = true;

    const { url, options, resolve, reject } = this.queue.shift();
    try {
      const response = await fetch(url, options);
      resolve(response);
    } catch (error) {
      reject(error);
    }

    setTimeout(() => {
      this.processing = false;
      this.processQueue();
    }, this.interval);
  }
}

// Usage
const client = new ThrottledAPIClient(100); // 100 requests/minute

// These will be automatically throttled
for (let i = 0; i < 200; i++) {
  client.request('/api/vehicles', { method: 'GET' });
}
OTP-Specific Rate Limiting
Handle OTP rate limiting gracefully:
class OTPHandler {
  constructor() {
    this.lastResendTime = null;
    this.resendCount = 0;
    this.resendWindow = 30 * 60 * 1000; // 30 minutes
  }

  canResend() {
    const now = Date.now();

    // Check cooldown (1 minute)
    if (this.lastResendTime && now - this.lastResendTime < 60000) {
      return {
        allowed: false,
        reason: 'Please wait 1 minute before requesting a new code',
        waitTime: 60 - Math.floor((now - this.lastResendTime) / 1000)
      };
    }

    // Check max resends (3 per 30 minutes)
    if (this.resendCount >= 3) {
      const windowExpiry = this.lastResendTime + this.resendWindow;
      if (now < windowExpiry) {
        return {
          allowed: false,
          reason: 'Maximum resend attempts reached',
          waitTime: Math.floor((windowExpiry - now) / 1000)
        };
      }
      // Reset counter after window expires
      this.resendCount = 0;
    }

    return { allowed: true };
  }

  async resendOTP(phoneNumber) {
    const canResend = this.canResend();
    if (!canResend.allowed) {
      throw new Error(`${canResend.reason}. Wait ${canResend.waitTime} seconds.`);
    }

    const response = await fetch('/api/auth/resend-otp', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ phoneNumber })
    });

    if (response.ok) {
      this.lastResendTime = Date.now();
      this.resendCount++;
    }

    return response;
  }
}
Monitoring and Alerts
Set up monitoring to detect when you’re approaching rate limits:
- Request Volume: monitor requests per minute/hour; alert when approaching 80% of limits
- Error Rates: track 429 and 5xx responses; alert on error-rate spikes
- Response Times: monitor average response times; detect performance degradation
- Queue Depth: track request queue length; alert on sustained high queue depth
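The 80% alert above can be expressed as a simple check against the planned X-RateLimit-* header values (a sketch; the alert callback is a placeholder for your own notification channel):

```javascript
// Fires `alert` when usage in the current window crosses `threshold`.
// `limit` and `remaining` come from the planned X-RateLimit-* headers.
function checkUsage(limit, remaining, alert, threshold = 0.8) {
  const used = (limit - remaining) / limit;
  if (used >= threshold) {
    alert(`Rate-limit usage at ${(used * 100).toFixed(0)}% (${remaining}/${limit} remaining)`);
  }
  return used;
}
```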
Getting Help
If you encounter rate limiting issues:
1. Review your integration: ensure you're following best practices
2. Check for inefficiencies: look for unnecessary API calls or polling
3. Implement caching: cache responses where appropriate
4. Contact support: reach out if you have legitimate needs for higher limits
Rate limit increases may be available for production integrations with demonstrated need. Contact your account manager for more information.
Next Steps
- Error Handling: learn how to handle API errors
- Authentication: understand authentication flows