Overview
The Midday API implements rate limiting to ensure fair usage and maintain service quality for all users. Rate limits are applied per user or per IP address, depending on the endpoint.

Rate Limit Policies
Protected Endpoints (Authenticated)
All authenticated endpoints (requiring an API key, OAuth token, or JWT) are rate limited:

- Window: 10 minutes
- Limit: 100 requests per window
- Identifier: User ID (from the authenticated session)

Affected endpoints:

- /transactions/*
- /invoices/*
- /customers/*
- /documents/*
- /bank-accounts/*
- /teams/*
- /users/*
- /inbox/*
- /insights/*
- /reports/*
- /tracker-entries/*
- /tracker-projects/*
- /tags/*
- /search/*
- /chat/*
- /notifications/*
- /transcription/*
- /mcp/*
The tRPC API at /trpc/* follows the same rate limits as REST endpoints.

OAuth Endpoints (Public)
OAuth endpoints have stricter rate limits to prevent abuse:

- Window: 15 minutes
- Limit: 20 requests per window
- Identifier: IP address

Affected endpoints:

- /oauth/authorize
- /oauth/token
- /oauth/revoke
Public Endpoints
Some endpoints are not rate limited:

- /health - Health check
- /health/ready - Readiness probe
- /health/dependencies - Dependency health
- /openapi - OpenAPI specification
- / - API documentation
- File upload endpoints
- Webhook endpoints
- Desktop sync endpoints
Rate Limit Headers
When you make a request, the API includes rate limit information in the response headers:

- X-RateLimit-Limit - Maximum number of requests allowed in the current window
- X-RateLimit-Remaining - Number of requests remaining in the current window
- X-RateLimit-Reset - Unix timestamp when the rate limit window resets
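As a sketch, these values can be read from a standard fetch Response; the header names here assume the X-RateLimit-* convention described on this page:

```typescript
// Parse rate-limit headers from a fetch Headers object.
// Header names are assumptions based on the X-RateLimit-* convention.
function readRateLimit(headers: Headers) {
  return {
    limit: Number(headers.get("X-RateLimit-Limit")),
    remaining: Number(headers.get("X-RateLimit-Remaining")),
    // Unix timestamp (seconds) -> JavaScript Date (milliseconds)
    resetAt: new Date(Number(headers.get("X-RateLimit-Reset")) * 1000),
  };
}
```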
These headers are provided by the hono-rate-limiter middleware and may vary based on implementation.

Rate Limit Errors
When you exceed the rate limit, the API returns a 429 Too Many Requests error.
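One illustrative way to react to a 429 is to compute how long to wait from the reset header (the header name and lowercase lookup are assumptions; the actual error body is not shown here):

```typescript
// Returns how many milliseconds to wait before retrying, or null when
// the response was not rate limited. Illustrative only.
function retryDelayMs(
  status: number,
  headers: Record<string, string>,
  nowMs: number,
): number | null {
  if (status !== 429) return null;
  const resetUnixSeconds = Number(headers["x-ratelimit-reset"]);
  return Math.max(0, resetUnixSeconds * 1000 - nowMs);
}
```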
Handling Rate Limits
Exponential Backoff
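A minimal sketch of the pattern (retry counts and delays are illustrative, not prescribed by the API):

```typescript
// Retry a request with exponentially increasing delays, capped at 30s.
// In real code you would only retry on 429s; this sketch retries on any
// error for brevity.
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 5,
  baseMs = 1000,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err;
      const delayMs = Math.min(30_000, 2 ** attempt * baseMs);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```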
Implement exponential backoff when you encounter rate limit errors.

Monitoring Rate Limit Usage
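One simple approach, sketched here, is to warn when the remaining quota (from the X-RateLimit-Remaining header) drops below a threshold of your choosing:

```typescript
// True when fewer than `fraction` of the window's requests remain.
// The 10% default is arbitrary; tune it for your workload.
function nearLimit(remaining: number, limit: number, fraction = 0.1): boolean {
  return remaining <= limit * fraction;
}
```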
Monitor your rate limit consumption to avoid hitting limits.

Best Practices
Batch Requests
Use tRPC’s batch request feature to combine multiple queries into a single request
Cache Responses
Cache API responses when possible to reduce request volume
Use Webhooks
Subscribe to webhooks for real-time updates instead of polling
Implement Backoff
Always implement exponential backoff for retry logic
Batching with tRPC
The tRPC API automatically batches requests made in the same event loop. Batched requests count as multiple requests for rate limiting purposes, but they reduce network overhead.
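The snippet below is not tRPC itself but a self-contained sketch of the same idea: calls queued in the same event-loop tick are coalesced and sent together.

```typescript
// Coalesce calls made in the same tick into one batch, mimicking what
// tRPC's batching link does over HTTP.
function makeBatcher(send: (calls: string[]) => void) {
  let pending: string[] = [];
  let scheduled = false;
  return (call: string) => {
    pending.push(call);
    if (!scheduled) {
      scheduled = true;
      // Flush once the current synchronous work finishes.
      queueMicrotask(() => {
        send(pending);
        pending = [];
        scheduled = false;
      });
    }
  };
}
```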
Caching Strategy
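A minimal TTL-cache sketch of the idea described in this section (all names are illustrative; any cache with expiry works):

```typescript
// Cache values for a fixed time-to-live; expired entries read as misses.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();
  constructor(private ttlMs: number) {}
  get(key: string, nowMs = Date.now()): V | undefined {
    const entry = this.store.get(key);
    if (!entry || nowMs >= entry.expiresAt) return undefined;
    return entry.value;
  }
  set(key: string, value: V, nowMs = Date.now()): void {
    this.store.set(key, { value, expiresAt: nowMs + this.ttlMs });
  }
}
```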
Implement intelligent caching to minimize API calls.

Increasing Rate Limits
Current rate limits are designed to accommodate typical usage patterns. If you have a legitimate need for higher limits, please contact support.
- Document your use case and expected request volume
- Demonstrate you’ve implemented best practices (caching, batching, backoff)
- Contact Midday support at [email protected]
Implementation Details
The Midday API uses the hono-rate-limiter middleware for rate limiting:
- Storage: In-memory storage (resets on server restart)
- Algorithm: Fixed window counter
- Identification: User ID for authenticated requests, IP for public endpoints
- Scope: Per-user or per-IP based on endpoint type
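For intuition, a fixed window counter can be sketched in a few lines (this is an illustration of the algorithm, not Midday's actual implementation):

```typescript
// Fixed-window counter: each key gets `limit` requests per `windowMs`;
// the count resets when a new window begins.
class FixedWindowLimiter {
  private entries = new Map<string, { windowStart: number; count: number }>();
  constructor(private windowMs: number, private limit: number) {}
  allow(key: string, nowMs: number): boolean {
    const entry = this.entries.get(key);
    if (!entry || nowMs - entry.windowStart >= this.windowMs) {
      this.entries.set(key, { windowStart: nowMs, count: 1 });
      return true;
    }
    if (entry.count >= this.limit) return false;
    entry.count += 1;
    return true;
  }
}
```

With the protected-endpoint policy above, this would be instantiated as `new FixedWindowLimiter(10 * 60 * 1000, 100)`, keyed by user ID.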
Troubleshooting
Why am I being rate limited?
Common causes:

- Polling too frequently - Use webhooks or increase polling intervals
- No caching - Cache responses that don’t change frequently
- Sequential requests - Batch requests when possible
- Multiple API keys for same user - Rate limits apply per user, not per API key
How do I know when I can retry?
Check the X-RateLimit-Reset header for the timestamp when your limit resets:
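For example (a sketch; the header carries a Unix timestamp in seconds):

```typescript
// Seconds until the rate-limit window resets; zero if already reset.
function waitSeconds(resetUnixSeconds: number, nowMs: number): number {
  return Math.max(0, Math.ceil(resetUnixSeconds - nowMs / 1000));
}
```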