The Twitter API enforces rate limits to prevent abuse and ensure fair usage. Understanding these limits is crucial for building reliable applications with Twikit.

How rate limits work

Rate limits in the Twitter API:
  • Reset every 15 minutes: All rate limits have a 15-minute window
  • Per endpoint: Each API endpoint has its own limit
  • Per account: Limits are tracked separately for each authenticated account
  • Different for guests: GuestClient has lower limits than authenticated Client
When you hit a rate limit, you must wait until the current 15-minute window expires before making more requests to that endpoint.

Rate limit table

Here are the rate limits for common Twikit functions (limits reset every 15 minutes):

User operations

| Function | Limit | Endpoint |
| --- | --- | --- |
| get_user_by_screen_name | 95 | UserByScreenName |
| get_user_by_id | 500 | UserByRestId |
| get_user_followers | 50 | Followers |
| get_user_following | 500 | Following |
| get_user_verified_followers | 500 | BlueVerifiedFollowers |
| get_user_followers_you_know | 500 | FollowersYouKnow |
| follow_user | 15 | friendships/create.json |
| unfollow_user | 187 | friendships/destroy.json |
| block_user | 187 | blocks/create.json |
| unblock_user | 187 | blocks/destroy.json |
| mute_user | 187 | mutes/users/create.json |
| unmute_user | 187 | mutes/users/destroy.json |

Tweet operations

| Function | Limit | Endpoint |
| --- | --- | --- |
| get_tweet_by_id | 150 | TweetDetail |
| search_tweet | 50 | SearchTimeline |
| get_user_tweets (Tweets) | 50 | UserTweets |
| get_user_tweets (Replies) | 50 | UserTweetsAndReplies |
| get_user_tweets (Media) | 500 | UserMedia |
| get_user_tweets (Likes) | 500 | Likes |
| get_favoriters | 500 | Favoriters |
| get_retweeters | 500 | Retweeters |
| create_tweet | - | CreateTweet |
| delete_tweet | - | DeleteTweet |
| retweet | - | CreateRetweet |
| delete_retweet | - | DeleteRetweet |
| favorite_tweet | - | FavoriteTweet |
| unfavorite_tweet | - | UnfavoriteTweet |

Timeline operations

| Function | Limit | Endpoint |
| --- | --- | --- |
| get_timeline | 500 | HomeTimeline |
| get_latest_timeline | 500 | HomeLatestTimeline |

List operations

| Function | Limit | Endpoint |
| --- | --- | --- |
| get_list | 500 | ListByRestId |
| get_list_tweets | 500 | ListLatestTweetsTimeline |
| get_lists | 500 | ListsManagementPageTimeline |
| get_list_members | 500 | ListMembers |
| get_list_subscribers | 500 | ListSubscribers |
| create_list | - | CreateList |
| add_list_member | - | ListAddMember |
| remove_list_member | - | ListRemoveMember |

Direct messages

| Function | Limit | Endpoint |
| --- | --- | --- |
| get_dm_history | 900 | conversation/.json |
| send_dm | 187 | dm/new2.json |
| delete_dm | - | DMMessageDeleteMutation |

Other operations

| Function | Limit | Endpoint |
| --- | --- | --- |
| get_bookmarks | 500 | Bookmarks |
| get_notifications (All) | 180 | notifications/all.json |
| get_notifications (Mentions) | 180 | notifications/mentions.json |
| get_trends | 20000 | guide.json |
| login | 187 | onboarding/task.json |
| logout | 187 | account/logout.json |
| search_user | 50 | SearchTimeline |
Functions marked with "-" have no documented rate limit or are unlimited.

Handling rate limit errors

When you exceed a rate limit, Twikit raises a TooManyRequests exception:
from twikit import Client
from twikit.errors import TooManyRequests
import asyncio
import time

client = Client('en-US')

async def search_with_retry(query):
    try:
        tweets = await client.search_tweet(query, 'Latest')
        return tweets
    except TooManyRequests as e:
        # Get the reset time from the exception
        reset_time = e.rate_limit_reset
        if reset_time:
            wait_time = max(0, reset_time - int(time.time()))  # guard against clock skew
            print(f'Rate limited. Waiting {wait_time} seconds...')
            await asyncio.sleep(wait_time)
            # Retry after waiting
            return await client.search_tweet(query, 'Latest')
        else:
            print('Rate limited. Waiting 15 minutes...')
            await asyncio.sleep(900)  # Wait 15 minutes
            return await client.search_tweet(query, 'Latest')
The TooManyRequests exception includes a rate_limit_reset attribute with a Unix timestamp indicating when the limit resets. Use this to calculate the exact wait time.

Best practices for staying under limits

1. Use pagination wisely

Don’t request more data than you need at once:
# Good: Request smaller batches
tweets = await client.search_tweet('python', 'Latest', count=20)

# Process the first batch
for tweet in tweets:
    process_tweet(tweet)

# Get more only if needed
if need_more:
    more_tweets = await tweets.next()

2. Implement exponential backoff

Retry failed requests with increasing delays:
import asyncio
import time
from twikit.errors import TooManyRequests

async def fetch_with_backoff(fetch_func, max_retries=3):
    """Fetch data with exponential backoff on rate limits."""
    for attempt in range(max_retries):
        try:
            return await fetch_func()
        except TooManyRequests as e:
            if attempt == max_retries - 1:
                raise
            
            wait_time = 2 ** attempt * 60  # 1m, 2m, 4m
            if e.rate_limit_reset:
                wait_time = min(wait_time, max(0, e.rate_limit_reset - int(time.time())))
            
            print(f'Rate limited. Retrying in {wait_time}s (attempt {attempt + 1}/{max_retries})')
            await asyncio.sleep(wait_time)

# Usage
tweets = await fetch_with_backoff(
    lambda: client.search_tweet('query', 'Latest')
)

3. Cache responses

Store results to avoid repeated requests:
from datetime import datetime, timedelta

class CachedClient:
    def __init__(self, client):
        self.client = client
        self.cache = {}
    
    async def get_user_cached(self, screen_name, cache_duration=300):
        """Get user with 5-minute cache."""
        now = datetime.now()
        
        if screen_name in self.cache:
            user, cached_at = self.cache[screen_name]
            if now - cached_at < timedelta(seconds=cache_duration):
                return user
        
        # Fetch from API
        user = await self.client.get_user_by_screen_name(screen_name)
        self.cache[screen_name] = (user, now)
        return user

4. Batch operations

Group related operations to minimize requests:
# Bad: Multiple separate requests
for user_id in user_ids:
    user = await client.get_user_by_id(user_id)
    await asyncio.sleep(1)  # Still wasteful

# Good: Use pagination features
# Many Twikit methods return Result objects that efficiently handle batching
followers = await user.get_followers(count=100)  # Get 100 at once
for follower in followers:
    process_follower(follower)

5. Monitor your usage

Track how many requests you’re making:
import time

class RateLimitTracker:
    def __init__(self, client):
        self.client = client
        self.request_counts = {}
        self.window_start = time.time()
    
    async def search_tweet(self, query, product, **kwargs):
        """Track search requests."""
        endpoint = 'SearchTimeline'
        
        # Reset counter every 15 minutes
        if time.time() - self.window_start > 900:
            self.request_counts = {}
            self.window_start = time.time()
        
        # Increment counter
        self.request_counts[endpoint] = self.request_counts.get(endpoint, 0) + 1
        
        print(f'{endpoint}: {self.request_counts[endpoint]}/50 requests used')
        
        return await self.client.search_tweet(query, product, **kwargs)

Pagination strategies

Many Twikit methods return Result objects that support pagination:
# Initial request
tweets = await client.search_tweet('python', 'Latest', count=20)

# Process first batch
for tweet in tweets:
    print(tweet.text)

# Get more results (uses cursor internally)
more_tweets = await tweets.next()
for tweet in more_tweets:
    print(tweet.text)

# Continue paginating
even_more = await more_tweets.next()
Each call to .next() counts as a separate API request. Plan your pagination carefully to stay within rate limits.
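
One way to plan pagination is to cap the request count up front. The helper below is a sketch, not a Twikit API; it assumes only that the paginated object is iterable and has an async .next() method, as Twikit's Result objects do:

```python
import asyncio

async def collect_pages(first_page, max_requests: int) -> list:
    """Drain a paginated result without spending more than `max_requests`
    API requests (the call that produced `first_page` counts as one)."""
    items = list(first_page)
    page = first_page
    for _ in range(max_requests - 1):
        page = await page.next()  # one more API request
        batch = list(page)
        if not batch:             # empty page: results exhausted
            break
        items.extend(batch)
    return items
```

For example, collecting search results with max_requests=5 spends at most a tenth of the 50-request SearchTimeline budget, no matter how many pages exist.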

Rate limits for AsyncGenerator methods

Some methods return AsyncGenerator for streaming results:
# This method yields tweets as they're fetched
async for tweet in client.get_user_tweets_by_id(user_id, 'Tweets'):
    print(tweet.text)
    
    # Each iteration may trigger an API request when fetching the next batch
    # Be mindful of rate limits (50 requests/15min for UserTweets)
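
If you want a hard ceiling on how fast such a loop can consume the underlying request budget, you can space out the iterations. This wrapper is a sketch of that idea (not part of Twikit):

```python
import asyncio
import time

async def throttled(agen, min_interval: float):
    """Re-yield items from an async generator, enforcing a minimum delay
    between items so the hidden batch fetches can't burn through the
    15-minute request budget in a burst."""
    last = 0.0
    async for item in agen:
        wait = min_interval - (time.monotonic() - last)
        if wait > 0:
            await asyncio.sleep(wait)
        last = time.monotonic()
        yield item
```

At 50 requests per 15 minutes (one request every 18 seconds) and roughly 20 tweets per batch, spacing items by about one second keeps the loop comfortably under the UserTweets limit.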

What happens when you hit the limit

  1. API returns 429 status: Twitter’s API responds with HTTP 429 (Too Many Requests)
  2. Twikit raises exception: TooManyRequests is raised with error details
  3. Wait required: You must wait until the rate limit resets (up to 15 minutes)
  4. Headers provide info: The exception includes rate_limit_reset timestamp
try:
    tweets = await client.search_tweet('query', 'Latest')
except TooManyRequests as e:
    print('Rate limit hit!')
    print(f'Reset at: {e.rate_limit_reset}')
    print(f'Message: {e}')
    print(f'Headers: {e.headers}')

Error handling

Learn to handle TooManyRequests and other errors

Async usage

Use async/await for efficient request handling
