
Overview

The Midday API implements rate limiting to ensure fair usage and maintain service quality for all users. Rate limits are applied per user or per IP address, depending on the endpoint.

Rate Limit Policies

Protected Endpoints (Authenticated)

All authenticated endpoints (requiring API key, OAuth token, or JWT) are rate limited:
  • Window (duration): 10 minutes
  • Limit (number): 100 requests per window
  • Key (identifier): User ID (from the authenticated session)
Rate limits apply to:
  • /transactions/*
  • /invoices/*
  • /customers/*
  • /documents/*
  • /bank-accounts/*
  • /teams/*
  • /users/*
  • /inbox/*
  • /insights/*
  • /reports/*
  • /tracker-entries/*
  • /tracker-projects/*
  • /tags/*
  • /search/*
  • /chat/*
  • /notifications/*
  • /transcription/*
  • /mcp/*
The tRPC API at /trpc/* follows the same rate limits as REST endpoints.
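At 100 requests per 10-minute window, a client that spaces requests at least 6 seconds apart (600,000 ms / 100) will never hit the limit at a steady rate. A minimal client-side pacing sketch (the helper names here are our own illustration, not part of any Midday SDK):

```typescript
// 100 requests per 10-minute window → at most one request every 6 s
// keeps a steady workload under the limit.
const WINDOW_MS = 10 * 60 * 1000;
const LIMIT = 100;
const MIN_INTERVAL_MS = WINDOW_MS / LIMIT; // 6000 ms

// Given the earliest permitted send time and the current time, return how
// long to wait now and when the request after this one may be sent.
function schedule(
  nextSlot: number,
  now: number,
): { wait: number; nextSlot: number } {
  const wait = Math.max(0, nextSlot - now);
  return { wait, nextSlot: Math.max(now, nextSlot) + MIN_INTERVAL_MS };
}

// Usage sketch: pace each call before fetching.
let nextSlot = 0;
async function pacedFetch(url: string, options: RequestInit): Promise<Response> {
  const s = schedule(nextSlot, Date.now());
  nextSlot = s.nextSlot;
  if (s.wait > 0) await new Promise((resolve) => setTimeout(resolve, s.wait));
  return fetch(url, options);
}
```

Pacing is complementary to the backoff and monitoring techniques shown later: it avoids the 429 in the first place rather than reacting to it.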

OAuth Endpoints (Public)

OAuth endpoints have stricter rate limits to prevent abuse:
  • Window (duration): 15 minutes
  • Limit (number): 20 requests per window
  • Key (identifier): IP address
Rate limits apply to:
  • /oauth/authorize
  • /oauth/token
  • /oauth/revoke
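Because /oauth/token allows only 20 requests per 15 minutes per IP, cache the access token and refresh it shortly before expiry rather than requesting a new one per API call. A hedged sketch, assuming a standard OAuth 2.0 token response shape ({ access_token, expires_in }) and a client_credentials grant (neither is confirmed by these docs; check Midday's OAuth reference):

```typescript
interface CachedToken {
  accessToken: string;
  expiresAt: number; // epoch milliseconds
}

let cached: CachedToken | null = null;

// Treat the token as expired slightly early to absorb clock skew and
// in-flight request time.
function isExpired(
  token: CachedToken | null,
  now: number,
  skewMs = 30_000,
): boolean {
  return token === null || token.expiresAt - skewMs <= now;
}

async function getAccessToken(): Promise<string> {
  const now = Date.now();
  if (!isExpired(cached, now)) return cached!.accessToken;

  const response = await fetch('https://api.midday.ai/oauth/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'client_credentials', // assumption: substitute your flow
      client_id: 'YOUR_CLIENT_ID',       // placeholder credentials
      client_secret: 'YOUR_CLIENT_SECRET',
    }),
  });
  const data = await response.json();
  cached = {
    accessToken: data.access_token,
    expiresAt: now + data.expires_in * 1000,
  };
  return cached.accessToken;
}
```

With this pattern a long-running client touches /oauth/token only once per token lifetime, well under the 20-per-15-minute budget.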

Public Endpoints

Some endpoints are not rate limited:
  • /health - Health check
  • /health/ready - Readiness probe
  • /health/dependencies - Dependency health
  • /openapi - OpenAPI specification
  • / - API documentation
  • File upload endpoints
  • Webhook endpoints
  • Desktop sync endpoints

Rate Limit Headers

When you make a request, the API includes rate limit information in the response headers:
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1234567890
  • X-RateLimit-Limit (number): Maximum number of requests allowed in the current window
  • X-RateLimit-Remaining (number): Number of requests remaining in the current window
  • X-RateLimit-Reset (timestamp): Unix timestamp when the rate limit window resets
These headers are provided by the hono-rate-limiter middleware and may vary based on implementation.

Rate Limit Errors

When you exceed the rate limit, the API returns a 429 Too Many Requests error:
HTTP/1.1 429 Too Many Requests
Content-Type: application/json

{
  "error": "Rate limit exceeded",
  "message": "Rate limit exceeded"
}
When you receive a 429 error, you should wait until the rate limit window resets before making additional requests.

Handling Rate Limits

Exponential Backoff

Implement exponential backoff when you encounter rate limit errors:
async function fetchWithRetry(
  url: string,
  options: RequestInit,
  maxRetries = 3
) {
  for (let i = 0; i < maxRetries; i++) {
    const response = await fetch(url, options);
    
    if (response.status === 429) {
      // Rate limited - wait and retry
      const retryAfter = response.headers.get('Retry-After');
      const waitTime = retryAfter
        ? parseInt(retryAfter, 10) * 1000
        : Math.pow(2, i) * 1000; // Exponential backoff
      
      console.log(`Rate limited. Waiting ${waitTime}ms before retry...`);
      await new Promise(resolve => setTimeout(resolve, waitTime));
      continue;
    }
    
    return response;
  }
  
  throw new Error('Max retries exceeded');
}

// Usage
const response = await fetchWithRetry(
  'https://api.midday.ai/transactions',
  {
    headers: {
      'Authorization': `Bearer ${apiKey}`,
    },
  }
);

Monitoring Rate Limit Usage

Monitor your rate limit consumption to avoid hitting limits:
const response = await fetch('https://api.midday.ai/transactions', {
  headers: {
    'Authorization': `Bearer ${apiKey}`,
  },
});

const limit = parseInt(response.headers.get('X-RateLimit-Limit') || '0');
const remaining = parseInt(response.headers.get('X-RateLimit-Remaining') || '0');
const reset = parseInt(response.headers.get('X-RateLimit-Reset') || '0');

const percentUsed = limit > 0 ? ((limit - remaining) / limit) * 100 : 0;

if (percentUsed > 80) {
  console.warn(`Rate limit usage: ${percentUsed.toFixed(1)}%`);
  console.warn(`Resets at: ${new Date(reset * 1000).toISOString()}`);
}

const data = await response.json();

Best Practices

Batch Requests

Use tRPC’s batch request feature to combine multiple queries into a single request

Cache Responses

Cache API responses when possible to reduce request volume

Use Webhooks

Subscribe to webhooks for real-time updates instead of polling
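The core of a webhook receiver can be kept framework-agnostic and plugged into whatever HTTP server you use. In this sketch the event name "transaction.created" and the payload shape are hypothetical — check Midday's webhook documentation for the real event types:

```typescript
// Hypothetical payload shape; verify against the actual webhook docs.
interface WebhookEvent {
  type: string;  // e.g. "transaction.created" (assumed name)
  data: unknown;
}

// Pure handler: parse the raw body, decide what to do, and report the
// HTTP status to return. Easy to unit test without a running server.
function handleWebhook(rawBody: string): { status: number; action: string } {
  let event: WebhookEvent;
  try {
    event = JSON.parse(rawBody) as WebhookEvent;
  } catch {
    return { status: 400, action: 'reject' }; // malformed payload
  }

  switch (event.type) {
    case 'transaction.created':
      // Instead of polling /transactions, refresh local state on push.
      return { status: 200, action: 'refresh-transactions' };
    default:
      // Acknowledge unknown events so the sender does not retry forever.
      return { status: 200, action: 'ignore' };
  }
}
```

Webhook endpooints themselves are not rate limited (see Public Endpoints above), so reacting to pushes costs you nothing against the 100-requests-per-window budget.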

Implement Backoff

Always implement exponential backoff for retry logic

Batching with tRPC

When configured with httpBatchLink, the tRPC client automatically batches queries issued in the same event-loop tick into a single HTTP request:
import { createTRPCProxyClient, httpBatchLink } from '@trpc/client';

const client = createTRPCProxyClient<AppRouter>({
  links: [
    httpBatchLink({
      url: 'https://api.midday.ai/trpc',
      headers: {
        Authorization: `Bearer ${apiKey}`,
      },
      // Configure batching
      maxURLLength: 2083,
    }),
  ],
});

// These queries will be batched into a single HTTP request
const [transactions, invoices, customers] = await Promise.all([
  client.transactions.get.query(),
  client.invoices.get.query(),
  client.customers.get.query(),
]);
Batched requests count as multiple requests for rate limiting purposes, but they reduce network overhead.

Caching Strategy

Implement intelligent caching to minimize API calls:
class MiddayClient {
  private cache = new Map<string, { data: any; expires: number }>();
  
  async getCached<T>(
    key: string,
    fetcher: () => Promise<T>,
    ttl: number = 60000 // 1 minute default
  ): Promise<T> {
    const cached = this.cache.get(key);
    
    if (cached && cached.expires > Date.now()) {
      return cached.data;
    }
    
    const data = await fetcher();
    
    this.cache.set(key, {
      data,
      expires: Date.now() + ttl,
    });
    
    return data;
  }
  
  async getTransactions() {
    return this.getCached(
      'transactions',
      () => client.transactions.get.query(),
      300000 // 5 minutes
    );
  }
}

Increasing Rate Limits

Current rate limits are designed to accommodate typical usage patterns. If you have a legitimate need for higher limits, please contact support.
To request higher rate limits:
  1. Document your use case and expected request volume
  2. Demonstrate you’ve implemented best practices (caching, batching, backoff)
  3. Contact Midday support at [email protected]

Implementation Details

The Midday API uses hono-rate-limiter middleware for rate limiting:
  • Storage: In-memory storage (resets on server restart)
  • Algorithm: Fixed window counter
  • Identification: User ID for authenticated requests, IP for public endpoints
  • Scope: Per-user or per-IP based on endpoint type
Rate limit counters may reset during deployments or server restarts. This is not guaranteed behavior and should not be relied upon.
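For intuition, the fixed-window counter algorithm described above can be sketched in a few lines. This is illustrative only — the API's actual implementation is the hono-rate-limiter middleware, not this code:

```typescript
interface Window {
  count: number;   // requests seen in the current window
  resetAt: number; // epoch ms when the window ends
}

class FixedWindowLimiter {
  private windows = new Map<string, Window>();

  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the request is allowed, false if rate limited.
  // `key` is the user ID or IP address, per the identification rules above.
  allow(key: string, now: number = Date.now()): boolean {
    const w = this.windows.get(key);
    if (!w || now >= w.resetAt) {
      // Window elapsed (or first request): start a fresh counter.
      this.windows.set(key, { count: 1, resetAt: now + this.windowMs });
      return true;
    }
    if (w.count < this.limit) {
      w.count++;
      return true;
    }
    return false; // over the limit until resetAt
  }
}
```

Because the counter lives in a plain in-memory Map, it disappears on restart — which is exactly why the note below warns against relying on counter resets during deployments.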

Troubleshooting

Why am I being rate limited?

Common causes:
  1. Polling too frequently - Use webhooks or increase polling intervals
  2. No caching - Cache responses that don’t change frequently
  3. Sequential requests - Batch requests when possible
  4. Multiple API keys for same user - Rate limits apply per user, not per API key

How do I know when I can retry?

Check the X-RateLimit-Reset header for the timestamp when your limit resets:
const resetTimestamp = parseInt(
  response.headers.get('X-RateLimit-Reset') || '0'
);
const resetDate = new Date(resetTimestamp * 1000);
const waitMs = resetDate.getTime() - Date.now();

console.log(`Rate limit resets in ${waitMs}ms`);

Are rate limits shared between tRPC and REST?

Yes. Rate limits are applied at the user level, so requests to tRPC and REST endpoints share the same counter.
// These both count toward the same 100 req/10min limit
await fetch('https://api.midday.ai/transactions', { /* ... */ });
await client.transactions.get.query();
