Overview
Twikit provides powerful search capabilities to find tweets based on keywords, hashtags, or phrases. You can filter results by type and paginate through large result sets efficiently.
Basic search
Search for tweets using a keyword or phrase:
import asyncio
from twikit import Client

client = Client('en-US')

async def main():
    await client.login(
        auth_info_1='USERNAME',
        auth_info_2='EMAIL',
        password='PASSWORD'
    )

    # Search for tweets
    tweets = await client.search_tweet('python programming', 'Latest')

    # Process results
    for tweet in tweets:
        print(f'{tweet.user.name}: {tweet.text}')

asyncio.run(main())
Search filter types
Twikit supports multiple filter types to refine your search results:
Latest
Top
People
Photos
Videos
# Search for the most recent tweets
tweets = await client.search_tweet('twikit', 'Latest')
Available filter types: 'Latest', 'Top', 'People', 'Photos', 'Videos'
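The filter name is passed as a plain string, so a typo only surfaces as an error at request time. A small convenience helper can catch mistakes earlier; `validate_filter` below is our own sketch, not part of twikit:

```python
# Filter ("product") names accepted by search_tweet, as listed above.
VALID_FILTERS = {'Latest', 'Top', 'People', 'Photos', 'Videos'}

def validate_filter(product: str) -> str:
    """Normalize casing and reject unknown filter names before the request."""
    normalized = product.capitalize()
    if normalized not in VALID_FILTERS:
        raise ValueError(
            f'Unknown filter {product!r}; expected one of {sorted(VALID_FILTERS)}'
        )
    return normalized
```

For example, `validate_filter('latest')` returns `'Latest'`, so `await client.search_tweet('twikit', validate_filter('latest'))` works even with inconsistent casing.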
Pagination
Search results are paginated. Use the next() method to fetch additional tweets:
import asyncio
from twikit import Client

client = Client('en-US')

async def main():
    await client.login(
        auth_info_1='USERNAME',
        auth_info_2='EMAIL',
        password='PASSWORD'
    )

    # Get first page of results
    tweets = await client.search_tweet('machine learning', 'Latest')
    for tweet in tweets:
        print(tweet.text)

    # Get next page of results
    more_tweets = await tweets.next()
    for tweet in more_tweets:
        print(tweet.text)

asyncio.run(main())
Processing search results
Access various attributes of tweets in your search results:
import asyncio
from twikit import Client

client = Client('en-US')

async def main():
    await client.login(
        auth_info_1='USERNAME',
        auth_info_2='EMAIL',
        password='PASSWORD'
    )

    tweets = await client.search_tweet('data science', 'Top')

    for tweet in tweets:
        print(f'Tweet ID: {tweet.id}')
        print(f'Author: {tweet.user.name} (@{tweet.user.screen_name})')
        print(f'Text: {tweet.text}')
        print(f'Likes: {tweet.favorite_count}')
        print(f'Retweets: {tweet.retweet_count}')
        print(f'Created: {tweet.created_at}')
        print(f'Has media: {bool(tweet.media)}')
        print('-' * 50)

asyncio.run(main())
Complete working example
Here’s a full example that searches for tweets and paginates through results:
import asyncio
from twikit import Client

# Enter your account information
USERNAME = '...'
EMAIL = '...'
PASSWORD = '...'

client = Client('en-US')

async def main():
    # Login
    await client.login(
        auth_info_1=USERNAME,
        auth_info_2=EMAIL,
        password=PASSWORD
    )

    # Search latest tweets
    tweets = await client.search_tweet('artificial intelligence', 'Latest')
    print(f'Found {len(tweets)} tweets')
    for tweet in tweets:
        print(f'@{tweet.user.screen_name}: {tweet.text[:100]}...')

    # Search more tweets (pagination)
    more_tweets = await tweets.next()
    print(f'\nFound {len(more_tweets)} more tweets')
    for tweet in more_tweets:
        print(f'@{tweet.user.screen_name}: {tweet.text[:100]}...')

asyncio.run(main())
Advanced search patterns
You can use advanced search operators in your queries:
import asyncio
from twikit import Client

client = Client('en-US')

async def main():
    await client.login(
        auth_info_1='USERNAME',
        auth_info_2='EMAIL',
        password='PASSWORD'
    )

    # Search with hashtag
    tweets = await client.search_tweet('#python', 'Latest')

    # Search from specific user
    tweets = await client.search_tweet('from:elonmusk', 'Latest')

    # Search with multiple keywords
    tweets = await client.search_tweet('python OR javascript', 'Latest')

    # Search excluding certain words
    tweets = await client.search_tweet('programming -java', 'Latest')

asyncio.run(main())
Use Twitter’s search operators like from:, to:, #hashtag, OR, and - to create more specific queries.
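Because these operators are just text in the query string, they can be composed programmatically. A minimal sketch of such a query builder (`build_query` is our own helper, not a twikit function):

```python
def build_query(keywords=None, from_user=None, hashtags=None, exclude=None):
    """Compose a search query string from Twitter's standard operators."""
    parts = []
    if keywords:
        parts.append(' OR '.join(keywords))       # match any of these words
    if from_user:
        parts.append(f'from:{from_user}')         # tweets by this account
    if hashtags:
        parts.extend(f'#{tag}' for tag in hashtags)
    if exclude:
        parts.extend(f'-{word}' for word in exclude)  # excluded words
    return ' '.join(parts)
```

For example, `build_query(keywords=['python', 'javascript'], exclude=['java'])` produces `'python OR javascript -java'`, which can be passed directly to `client.search_tweet`.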
Collecting large datasets
For collecting many tweets, use a loop to paginate through results:
import asyncio
from twikit import Client

client = Client('en-US')

async def main():
    await client.login(
        auth_info_1='USERNAME',
        auth_info_2='EMAIL',
        password='PASSWORD'
    )

    all_tweets = []
    tweets = await client.search_tweet('climate change', 'Latest')

    # Collect tweets from multiple pages
    for _ in range(5):  # Get 5 pages
        all_tweets.extend(tweets)
        print(f'Collected {len(all_tweets)} tweets so far...')
        try:
            tweets = await tweets.next()
            await asyncio.sleep(2)  # Rate limiting
        except Exception:
            break
        if not tweets:  # No further pages
            break

    print(f'Total tweets collected: {len(all_tweets)}')

asyncio.run(main())
Be mindful of rate limits when searching and paginating through results. Add delays between requests to avoid being temporarily restricted.
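One way to apply such delays systematically is a small retry wrapper with exponential backoff. This is a generic asyncio sketch of our own, not part of twikit; narrow the `except Exception` clause to whichever rate-limit error your twikit version raises:

```python
import asyncio

async def with_backoff(fetch, retries=3, base_delay=2.0):
    """Await the async callable `fetch`, retrying with exponentially
    growing delays (base_delay, 2x, 4x, ...) when it raises."""
    for attempt in range(retries + 1):
        try:
            return await fetch()
        except Exception:
            if attempt == retries:
                raise  # out of retries; let the caller handle it
            await asyncio.sleep(base_delay * (2 ** attempt))
```

Used in the pagination loop above, `more_tweets = await with_backoff(tweets.next)` retries a failed page fetch a few times before giving up.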
Key points
Use appropriate filters
Choose the right filter type (Latest, Top, Photos, etc.) based on your needs
Paginate efficiently
Use the next() method to fetch additional results beyond the first page
Access tweet attributes
Extract relevant information like text, author, engagement metrics from results
Add rate limiting
Include delays between requests when collecting large datasets