
Overview

To ensure fair usage and maintain service quality, the Postiz API implements rate limiting on all public endpoints.

Default Limits

The default rate limit is:

30 requests per hour

Rate limits are calculated per API key over a rolling one-hour window.

Technical Details

  • Window: 3,600,000 milliseconds (1 hour)
  • Limit: 30 requests per window
  • Scope: Per API key
  • Storage: Redis-backed for distributed systems
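
Given the window and limit above, evenly pacing requests works out to one request every 120 seconds. A quick sanity check:

```javascript
// Derive the minimum safe spacing between requests from the
// documented window and limit.
const WINDOW_MS = 3600000; // 1 hour
const LIMIT = 30;          // requests per window

const spacingMs = WINDOW_MS / LIMIT;
console.log(spacingMs); // 120000 (one request every 120 seconds)
```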

Custom Limits

Self-hosted instances can configure custom rate limits using the API_LIMIT environment variable:
# Set custom rate limit (e.g., 100 requests per hour)
API_LIMIT=100
Cloud customers on higher-tier plans may have increased rate limits. Contact support for custom limits.

Rate Limit Headers

The API includes rate limit information in response headers:
X-RateLimit-Limit: 30
X-RateLimit-Remaining: 25
X-RateLimit-Reset: 1640995200
X-RateLimit-Limit (integer): Maximum number of requests allowed in the current window
X-RateLimit-Remaining (integer): Number of requests remaining in the current window
X-RateLimit-Reset (integer): Unix timestamp (in seconds) when the rate limit resets
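
These headers can be read directly off any response. A minimal sketch (the `parseRateLimit` helper is illustrative, not part of any SDK; a `Map` stands in for a `fetch` `Headers` object here):

```javascript
// Parse the three rate limit headers into a convenient object.
// X-RateLimit-Reset is a Unix timestamp in seconds, so multiply
// by 1000 to get a JavaScript Date.
function parseRateLimit(headers) {
  return {
    limit: parseInt(headers.get('X-RateLimit-Limit'), 10),
    remaining: parseInt(headers.get('X-RateLimit-Remaining'), 10),
    resetAt: new Date(parseInt(headers.get('X-RateLimit-Reset'), 10) * 1000),
  };
}

// Example using the header values shown above:
const info = parseRateLimit(new Map([
  ['X-RateLimit-Limit', '30'],
  ['X-RateLimit-Remaining', '25'],
  ['X-RateLimit-Reset', '1640995200'],
]));
console.log(info.remaining); // 25
```

In real code you would pass `response.headers` from a `fetch` call instead of the `Map`.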

Rate Limit Exceeded

When you exceed the rate limit, you’ll receive a 429 Too Many Requests response:
{
  "statusCode": 429,
  "message": "ThrottlerException: Too Many Requests"
}

Handling Rate Limits

Implement exponential backoff when you hit rate limits:
async function makeRequestWithRetry(url, options, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    const response = await fetch(url, options);

    if (response.status === 429) {
      const resetTime = response.headers.get('X-RateLimit-Reset');
      // Wait until the reset timestamp if provided (clamped so a
      // stale header never yields a negative wait); otherwise fall
      // back to exponential backoff.
      const waitTime = resetTime
        ? Math.max(0, parseInt(resetTime, 10) * 1000 - Date.now())
        : Math.pow(2, i) * 1000;

      console.log(`Rate limited. Waiting ${waitTime}ms...`);
      await new Promise(resolve => setTimeout(resolve, waitTime));
      continue;
    }

    return response;
  }

  throw new Error('Max retries exceeded');
}

Best Practices

  • Cache API responses when possible to reduce the number of requests. For example, integration lists rarely change.
  • Use batch endpoints when available. The post creation endpoint accepts multiple integrations in a single request.
  • Always check rate limit headers and adjust your request rate proactively before hitting limits.
  • For real-time updates, consider using webhooks instead of polling the API repeatedly.
  • Use a queue system to manage API requests and prevent bursts that exceed rate limits.
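
The queue idea in the last point can be sketched as a class that spaces requests evenly across the window (30 requests/hour means one every 120 seconds). This is an illustrative sketch, not part of any Postiz SDK:

```javascript
// A minimal request queue that spaces API calls evenly so a burst
// of enqueued work never exceeds the hourly limit.
class RequestQueue {
  constructor(requestsPerHour = 30) {
    // Spread the allowed requests evenly across the one-hour window.
    this.intervalMs = 3600000 / requestsPerHour;
    this.queue = [];
    this.timer = null;
  }

  // Enqueue a task (any function, sync or async); resolves with its result.
  enqueue(task) {
    return new Promise((resolve, reject) => {
      this.queue.push({ task, resolve, reject });
      if (this.timer === null) this.drain();
    });
  }

  drain() {
    const next = this.queue.shift();
    if (!next) {
      this.timer = null; // queue empty; stop ticking
      return;
    }
    Promise.resolve().then(next.task).then(next.resolve, next.reject);
    this.timer = setTimeout(() => this.drain(), this.intervalMs);
  }
}

// Usage sketch (url and options are placeholders):
// const queue = new RequestQueue(30);
// const response = await queue.enqueue(() => fetch(url, options));
```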

Rate Limit by Endpoint

Rate limits apply globally across all endpoints. Each API call counts toward your hourly limit, regardless of the endpoint.

Request Cost

All endpoints have the same cost:
Endpoint              Cost
GET /integrations     1 request
GET /posts            1 request
POST /posts           1 request
POST /upload          1 request
DELETE /posts/:id     1 request

Monitoring Usage

Track Your Usage

Implement client-side tracking to monitor your API usage:
class RateLimitTracker {
  constructor() {
    this.remaining = null;
    this.limit = null;
    this.resetTime = null;
  }

  updateFromHeaders(headers) {
    this.limit = parseInt(headers.get('X-RateLimit-Limit'), 10);
    this.remaining = parseInt(headers.get('X-RateLimit-Remaining'), 10);
    this.resetTime = parseInt(headers.get('X-RateLimit-Reset'), 10);
  }

  canMakeRequest() {
    // Allow the request if we have never seen rate limit headers.
    if (this.remaining === null || Number.isNaN(this.remaining)) return true;
    return this.remaining > 0;
  }

  timeUntilReset() {
    if (!this.resetTime) return 0;
    return Math.max(0, this.resetTime * 1000 - Date.now());
  }
}
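
The same pre-flight logic can be condensed into a single helper that decides whether (and how long) to wait before the next request. This is a sketch under the header semantics described above (a `Map` stands in for real response headers):

```javascript
// Return how long (in ms) to wait before the next request, based on
// the most recent rate limit headers. 0 means "go ahead".
function waitBeforeNext(headers) {
  const remaining = parseInt(headers.get('X-RateLimit-Remaining'), 10);
  const reset = parseInt(headers.get('X-RateLimit-Reset'), 10);
  // No headers seen, or budget left: no need to wait.
  if (Number.isNaN(remaining) || remaining > 0) return 0;
  // Budget exhausted: wait until the reset timestamp (seconds -> ms).
  return Math.max(0, reset * 1000 - Date.now());
}

// With no requests left and a reset 60 seconds away, we should wait:
const wait = waitBeforeNext(new Map([
  ['X-RateLimit-Remaining', '0'],
  ['X-RateLimit-Reset', String(Math.floor(Date.now() / 1000) + 60)],
]));
console.log(wait > 0); // true
```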

Increasing Limits

Need higher rate limits? Here are your options:
1. Upgrade Your Plan: Cloud customers can upgrade to higher-tier plans with increased rate limits.
2. Self-Host: Self-hosted instances can configure custom rate limits using environment variables.
3. Contact Support: Enterprise customers can request custom rate limits tailored to their needs.

Next Steps

  • Create Post: Start making API requests
  • List Integrations: Fetch your connected accounts
