The Firecrawl API implements rate limiting to ensure fair usage and maintain service quality for all users. This page explains how rate limits work and how to handle them in your application.

Rate Limit Response

When you exceed the rate limit, the API will return a 429 Too Many Requests status code with the following response:
{
  "success": false,
  "error": "Request rate limit exceeded. Please wait and try again later."
}

Rate Limit Strategy

Plan-Based Limits

Rate limits vary based on your subscription plan. Higher-tier plans receive increased rate limits to support larger-scale operations.

Endpoint-Specific Limits

Different endpoints may have different rate limits:
  • Scraping endpoints (/scrape, /batch/scrape) - Limited by requests per minute
  • Crawling endpoints (/crawl) - Limited by concurrent crawls and pages per crawl
  • Search endpoint (/search) - Limited by searches per minute
  • Extract endpoint (/extract) - Limited by extraction requests per minute
  • Research endpoint (/deep-research) - Limited by concurrent research operations
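The per-minute limits above can also be respected proactively with a simple client-side throttle, so you rarely hit a 429 at all. This is a generic sketch, not part of the Firecrawl API; the `max_per_minute` value is an assumption, so check your plan's actual limits:

```python
import time
from collections import deque

class MinuteRateLimiter:
    """Block until a request slot is free within a rolling 60-second window."""

    def __init__(self, max_per_minute=10):  # illustrative default
        self.max_per_minute = max_per_minute
        self.timestamps = deque()

    def wait_for_slot(self):
        while True:
            now = time.monotonic()
            # Drop timestamps older than 60 seconds
            while self.timestamps and now - self.timestamps[0] >= 60:
                self.timestamps.popleft()
            if len(self.timestamps) < self.max_per_minute:
                self.timestamps.append(now)
                return
            # Sleep until the oldest request ages out of the window
            time.sleep(60 - (now - self.timestamps[0]))
```

Call `limiter.wait_for_slot()` before each request to the rate-limited endpoint.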

Handling Rate Limits

Best Practices

When you receive a 429 response, wait before retrying. Use exponential backoff to gradually increase wait times:
import time
import requests

def scrape_with_retry(url, api_key, max_retries=5):
    base_wait = 1  # Start with 1 second

    for attempt in range(max_retries):
        response = requests.post(
            'https://api.firecrawl.dev/v1/scrape',
            headers={
                'Authorization': f'Bearer {api_key}',
                'Content-Type': 'application/json'
            },
            json={'url': url}
        )

        if response.status_code == 429:
            # Double the wait on each attempt: 1s, 2s, 4s, 8s, 16s
            wait_time = base_wait * (2 ** attempt)
            print(f"Rate limited. Waiting {wait_time} seconds...")
            time.sleep(wait_time)
            continue

        response.raise_for_status()  # Surface non-rate-limit errors
        return response.json()

    raise Exception("Max retries exceeded")
Instead of making individual requests, use batch operations when scraping multiple URLs:
curl -X POST 'https://api.firecrawl.dev/v1/batch/scrape' \
  -H 'Authorization: Bearer fc-YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
    "urls": [
      "https://example.com/page1",
      "https://example.com/page2",
      "https://example.com/page3"
    ]
  }'
Batch operations count as a single request against your rate limit while processing multiple URLs.
Track your API usage to stay within limits:
curl -X GET 'https://api.firecrawl.dev/v1/team/credit-usage' \
  -H 'Authorization: Bearer fc-YOUR_API_KEY'
Response:
{
  "success": true,
  "data": {
    "remaining_credits": 1000
  }
}
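With the response shape above, you can gate large jobs on remaining credits before submitting them. A minimal sketch; the `check_credits` helper is ours for illustration, not part of any SDK:

```python
def check_credits(usage_response, credits_needed):
    """Return True if a parsed credit-usage response shows enough
    remaining credits for a planned job. For a crawl, credits_needed
    could be the `limit` you plan to pass (1 credit per page)."""
    remaining = usage_response.get('data', {}).get('remaining_credits', 0)
    return usage_response.get('success', False) and remaining >= credits_needed
```

For example, check credits before launching a 500-page crawl and fail fast instead of hitting a 402 mid-job.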
For crawling and batch operations, use webhooks instead of polling for status:
{
  "url": "https://docs.firecrawl.dev",
  "webhook": {
    "url": "https://your-domain.com/webhook",
    "events": ["completed", "failed"]
  }
}
This reduces the number of status check requests you need to make.
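A receiver for these events can be sketched with the Python standard library. The payload field name (`type`) and event values here are assumptions for illustration; consult the webhook documentation for the exact event schema:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def classify_event(payload):
    """Map a webhook payload to an action (field names are assumed)."""
    event = payload.get('type', '')
    if event.endswith('completed'):
        return 'fetch-results'
    if event.endswith('failed'):
        return 'alert'
    return 'ignore'

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get('Content-Length', 0))
        payload = json.loads(self.rfile.read(length) or b'{}')
        action = classify_event(payload)  # hand off to your own logic
        self.send_response(200)  # Acknowledge promptly; process async
        self.end_headers()

# To run: HTTPServer(('', 8000), WebhookHandler).serve_forever()
```

Acknowledging with a 200 quickly and processing the event asynchronously keeps the sender from treating your endpoint as failed.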

Credit System

In addition to rate limits, Firecrawl uses a credit-based system:

Credit Consumption

  • Scrape: 1 credit per page
  • Crawl: 1 credit per page crawled
  • Batch Scrape: 1 credit per URL
  • Search: Credits vary based on scraping options
  • Extract: Token-based pricing (separate from credits)
  • Map: 1 credit per request
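The fixed costs above make it easy to budget a job before submitting it. A small illustrative helper (ours, not an SDK function); Search and Extract are excluded because their pricing is variable or token-based:

```python
def estimate_credits(operation, pages=1):
    """Rough credit estimate for a planned job, using the fixed
    per-page costs listed above."""
    per_page = {'scrape': 1, 'crawl': 1, 'batch_scrape': 1}
    if operation == 'map':
        return 1  # 1 credit per request, regardless of pages
    if operation in per_page:
        return per_page[operation] * pages
    raise ValueError(f'no fixed per-page cost for {operation!r}')
```

For instance, a crawl with `limit: 100` should cost roughly 100 credits.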

Insufficient Credits

When you run out of credits, you’ll receive a 402 Payment Required response:
{
  "success": false,
  "error": "Payment required to access this resource."
}
To continue using the API, you’ll need to upgrade your plan or purchase additional credits.

Crawl-Specific Limits

Concurrent Crawls

The number of simultaneous crawl operations you can run depends on your plan:
# Check active crawls
curl -X GET 'https://api.firecrawl.dev/v1/crawl/active' \
  -H 'Authorization: Bearer fc-YOUR_API_KEY'

Pages Per Crawl

You can limit the number of pages in a single crawl using the limit parameter:
{
  "url": "https://docs.firecrawl.dev",
  "limit": 100,
  "scrapeOptions": {
    "formats": ["markdown"]
  }
}
The default limit is 10,000 pages per crawl.

Crawl Delays

Respect website rate limits by adding delays between requests:
{
  "url": "https://example.com",
  "delay": 2,
  "scrapeOptions": {
    "formats": ["markdown"]
  }
}
The delay parameter specifies the number of seconds to wait between scraping pages.

Error Handling

Rate Limit Headers

Rate limit headers are not documented for every response, so rely on HTTP status codes to detect rate limiting and related errors:
response = requests.post(
    'https://api.firecrawl.dev/v1/scrape',
    headers=headers,
    json=data
)

if response.status_code == 429:
    # Rate limited - implement backoff
    pass
elif response.status_code == 402:
    # Insufficient credits
    pass
elif response.status_code == 200:
    # Success
    result = response.json()

Server Errors

Occasional 500 Internal Server Error responses may occur. These are different from rate limits and should be retried with exponential backoff:
{
  "success": false,
  "error": "An unexpected error occurred on the server."
}
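A single retry policy can cover both cases: back off on 429 and 5xx, and stop immediately on 402, which retrying will not fix. A sketch with jittered exponential backoff (the `retry_delay` helper is illustrative, not an SDK function):

```python
import random

def retry_delay(status_code, attempt, base=1.0, cap=60.0):
    """Return seconds to wait before retrying, or None if the request
    should not be retried. 429 and 5xx get exponential backoff with
    jitter; 402 (insufficient credits) and other codes are not retried."""
    if status_code == 429 or 500 <= status_code < 600:
        return min(cap, base * (2 ** attempt)) * random.uniform(0.5, 1.0)
    return None
```

The jitter spreads retries out so that many clients rate-limited at the same moment do not all retry in lockstep.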

Optimizing API Usage

1. Use the Right Endpoint

  • Use /map to discover URLs before crawling
  • Use /batch/scrape for known URL lists
  • Use /crawl for comprehensive site scraping
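For example, after mapping a site you can select only the URLs you need and send them to /batch/scrape, rather than crawling everything. A sketch assuming the /map result yields a list of URL strings:

```python
def select_urls(mapped_urls, contains, limit):
    """Filter a /map result down to the pages worth scraping.
    This helper is ours for illustration; the /map response format
    is an assumption here."""
    picked = [u for u in mapped_urls if contains in u]
    return picked[:limit]
```

Feeding the filtered list to /batch/scrape uses fewer credits and fewer requests than a full crawl.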

2. Filter Content Efficiently

Use crawl options to reduce unnecessary requests:
{
  "url": "https://firecrawl.dev",
  "includePaths": ["blog/.*"],
  "excludePaths": ["admin/.*", "login.*"],
  "limit": 50
}

3. Request Only Needed Formats

Specify only the formats you need:
{
  "url": "https://example.com",
  "formats": ["markdown"]
}
Avoid requesting multiple formats (HTML, screenshot, etc.) if you don’t need them.

SDK Rate Limit Handling

Our official SDKs include built-in rate limit handling:

Python SDK

from firecrawl import Firecrawl

app = Firecrawl(api_key="fc-YOUR_API_KEY")

try:
    result = app.scrape("https://firecrawl.dev", formats=["markdown"])
    print(result.markdown)
except Exception as e:
    # SDK handles retries automatically
    print(f"Error: {e}")

Node.js SDK

import Firecrawl from '@mendable/firecrawl-js';

const app = new Firecrawl({ apiKey: 'fc-YOUR_API_KEY' });

try {
  const result = await app.scrape('https://firecrawl.dev', {
    formats: ['markdown']
  });
  console.log(result.markdown);
} catch (error) {
  // SDK handles retries automatically
  console.error('Error:', error);
}

Contact Support

If you’re experiencing consistent rate limiting issues or need higher limits:
  1. Review your usage patterns and optimize requests
  2. Consider upgrading to a higher-tier plan
  3. Contact [email protected] to discuss custom rate limits
For plan details and pricing, visit firecrawl.dev/pricing.
