# Rate Limit Configuration

## Default Limits

By default, the API allows 120 requests per minute per authenticated user. This limit applies to all endpoints under the `/api/v1` prefix.
## Customizing Rate Limits

Administrators can adjust the rate limit by setting the `API_THROTTLE_PER_MINUTE` environment variable in your `.env` file. After changing this value, restart your web server for the changes to take effect.
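For example, to raise the limit to 240 requests per minute, the `.env` entry would be:

```ini
# .env — raise the per-user API limit from the default 120 to 240 requests/minute
API_THROTTLE_PER_MINUTE=240
```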
## Rate Limit Headers

Every API response includes headers that inform you about your current rate limit status:

- `X-RateLimit-Limit`: the maximum number of requests allowed per minute (e.g., `120`)
- `X-RateLimit-Remaining`: the number of requests remaining in the current window (e.g., `115`)
- the number of seconds until the rate limit resets (e.g., `45`)
- the Unix timestamp when the rate limit will reset (e.g., `1710507600`)
- `Retry-After`: only present when rate limited; the number of seconds to wait before retrying (e.g., `60`)

### Example Response Headers
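As an illustration of what these headers look like on the wire (the values shown are indicative, not captured from a live server):

```http
HTTP/1.1 200 OK
Content-Type: application/json
X-RateLimit-Limit: 120
X-RateLimit-Remaining: 115
```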
## Rate Limit Exceeded

When you exceed the rate limit, the API returns an HTTP `429 Too Many Requests` status code:
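A rate-limited response carries the `429` status plus a `Retry-After` hint; the JSON body below is illustrative rather than the exact Snipe-IT payload:

```http
HTTP/1.1 429 Too Many Requests
Retry-After: 60
Content-Type: application/json

{
  "status": "error",
  "messages": "Too many requests"
}
```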
## Handling Rate Limit Errors

When you receive a `429` response:

- Check the `Retry-After` header to determine how long to wait
- Pause your requests for the specified duration
- Implement exponential backoff if you continue hitting limits
- Review your request patterns to optimize API usage
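The steps above can be sketched in Python; `call_api` is a hypothetical callable standing in for your HTTP layer, returning `(status_code, headers)` for one request:

```python
import time

def request_with_backoff(call_api, max_retries=5):
    """Retry a rate-limited call, honoring Retry-After and backing off exponentially."""
    for attempt in range(max_retries):
        status, headers = call_api()
        if status != 429:
            return status
        # Prefer the server's Retry-After hint; fall back to exponential backoff.
        wait = int(headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait)
    raise RuntimeError("Still rate limited after %d retries" % max_retries)
```

Injecting `call_api` keeps the pattern independent of any particular HTTP client.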
## Implementation Details

Rate limiting in Snipe-IT uses Laravel’s built-in throttling middleware with custom extensions.
### Middleware Implementation
The API uses the `api-throttle` middleware defined in `/routes/api.php`. This middleware extends Laravel’s `ThrottleRequests` class with additional header support.
### Rate Limit Calculation
- Limits are calculated per authenticated user
- The counter resets every 60 seconds (sliding window)
- Each request increments the counter by 1
- Limits are enforced before the request is processed
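These rules can be illustrated with a minimal per-user sliding-window counter in Python (a conceptual sketch, not Snipe-IT’s actual PHP implementation):

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds, tracked per user."""

    def __init__(self, limit=120, window=60):
        self.limit, self.window = limit, window
        self.hits = {}  # user -> deque of request timestamps

    def allow(self, user, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits.setdefault(user, deque())
        # Drop timestamps that have aged out of the 60-second window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # enforced before the request is processed
        q.append(now)     # each allowed request increments the counter by 1
        return True
```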
### Storage Backend
Rate limit counters are stored using your configured cache driver (file, Redis, Memcached, etc.); configure the driver in your `.env` file.

## Best Practices
### 1. Monitor Rate Limit Headers
Always check the `X-RateLimit-Remaining` header to track your usage:
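For example, a small helper that works on any client’s response-headers mapping (`remaining_budget` and the warning threshold are illustrative, not part of the API):

```python
def remaining_budget(headers, warn_threshold=10):
    """Read X-RateLimit-Remaining from response headers; warn when running low."""
    remaining = int(headers.get("X-RateLimit-Remaining", 0))
    if remaining < warn_threshold:
        print(f"Warning: only {remaining} requests left in this window")
    return remaining
```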
### 2. Implement Backoff Strategies

Handle rate limit errors gracefully with exponential backoff.

### 3. Batch Requests Efficiently
Use pagination and filters to reduce the number of requests. The maximum results per request is controlled by the `MAX_RESULTS` setting (default: 500).

### 4. Optimize Request Frequency
- **Use Webhooks**: Consider using webhooks for event-driven updates instead of polling
- **Cache Responses**: Cache API responses locally when data doesn’t change frequently
- **Schedule Bulk Operations**: Run large batch operations during off-peak hours
- **Parallel Tokens**: Use multiple API tokens for different services if needed
### 5. Handle Edge Cases
Python Complete Example
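A compact end-to-end sketch combining the practices above. The transport is injected as a hypothetical callable `(method, path) -> (status_code, headers_dict, body)` so the pattern works with any HTTP library:

```python
import time

class SnipeITClient:
    """Rate-limit-aware client sketch; `transport` wraps your actual HTTP layer."""

    def __init__(self, transport, max_retries=5):
        self.transport = transport
        self.max_retries = max_retries

    def get(self, path):
        for attempt in range(self.max_retries):
            status, headers, body = self.transport("GET", path)
            if status == 429:
                # Honor the server's Retry-After hint, else back off exponentially.
                time.sleep(int(headers.get("Retry-After", 2 ** attempt)))
                continue
            remaining = headers.get("X-RateLimit-Remaining")
            if remaining is not None and int(remaining) < 10:
                time.sleep(1)  # proactively slow down when the window is nearly spent
            return status, body
        raise RuntimeError("Rate limited: retries exhausted")
```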
## Troubleshooting
### Consistently hitting rate limits
- **Increase the limit**: Adjust `API_THROTTLE_PER_MINUTE` in your `.env` file
- **Optimize queries**: Use filters and pagination to reduce request volume
- **Implement caching**: Cache frequently accessed data
- **Review architecture**: Consider if your integration pattern is optimal
### Rate limit headers missing
- Ensure you’re using `/api/v1` endpoints (rate limiting is only applied to API routes)
- Check that the `api-throttle` middleware is active in `routes/api.php`
- Verify your cache driver is working correctly
### Different users, same limit
Rate limits are per-user, based on the authenticated user’s API token. If you’re seeing shared limits:
- Verify you’re using different tokens for different users
- Check if requests are being proxied through a single account
### Rate limit resets too slowly
Rate limits use a sliding 60-second window. If you:

- Make 120 requests at 0:00
- Wait until 0:30
- Make another request

the request at 0:30 is still rejected: all 120 requests from 0:00 remain inside the 60-second window until 1:00, so your budget does not free up until they age out.
## Rate Limit Monitoring

For production integrations, implement monitoring to track rate limit usage:

### Monitoring Example
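One lightweight approach, sketched in Python: append each response’s rate-limit headers to a log file for later analysis (`log_rate_limit` and the file format are illustrative):

```python
import json
import time

def log_rate_limit(headers, logfile="rate_limit.log"):
    """Append a timestamped snapshot of rate-limit headers as one JSON line."""
    snapshot = {
        "ts": time.time(),
        "limit": headers.get("X-RateLimit-Limit"),
        "remaining": headers.get("X-RateLimit-Remaining"),
    }
    with open(logfile, "a") as fh:
        fh.write(json.dumps(snapshot) + "\n")
    return snapshot
```

The JSON-lines output can then be fed into whatever dashboarding or alerting tool you already use.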
## Next Steps

- **Authentication**: Learn about API token management
- **Pagination**: Efficiently handle large datasets
