Overview

Twitter/X employs sophisticated anti-bot detection mechanisms. Following these best practices helps avoid account restrictions, rate limits, and IP bans.

Anti-Detection Strategies

twitter-cli automatically extracts all cookies from your browser, not just auth_token and ct0. This provides a complete browser fingerprint that makes requests indistinguishable from real browser traffic.
# Automatic browser extraction (recommended)
twitter feed

# Manual environment variables (less safe)
export TWITTER_AUTH_TOKEN="your_token"
export TWITTER_CT0="your_ct0"
Why full cookie extraction is better:
  • Includes session cookies that validate browser state
  • Preserves tracking cookies that Twitter expects
  • Matches the exact cookie set from your browser session
  • Reduces detection risk compared to partial auth
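The difference between partial and full auth can be sketched as follows. This is an illustration only, not twitter-cli's actual code; cookie names other than auth_token and ct0 (guest_id, twid) are examples of the kind of session/tracking cookies a real browser carries.

```python
def build_cookie_header(cookies: dict[str, str], full: bool = True) -> str:
    """Join cookies into a Cookie header; optionally keep only the auth pair."""
    if not full:
        cookies = {k: v for k, v in cookies.items() if k in ("auth_token", "ct0")}
    return "; ".join(f"{k}={v}" for k, v in cookies.items())

# A browser session carries far more than the two auth cookies:
browser_cookies = {
    "auth_token": "abc123",
    "ct0": "def456",
    "guest_id": "v1%3A171234",  # example tracking cookie
    "twid": "u%3D987",          # example session cookie
}

partial = build_cookie_header(browser_cookies, full=False)   # env-var style
complete = build_cookie_header(browser_cookies, full=True)   # browser extraction
```

A server comparing the incoming cookie set against what it issued will notice the partial version is missing cookies it expects to see together.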

TLS Fingerprint Matching

twitter-cli uses curl_cffi to impersonate real Chrome TLS fingerprints. The client automatically:
  • Matches the Chrome version installed on your system
  • Generates authentic TLS handshakes
  • Sends correct cipher suites and extensions
  • Mimics browser HTTP/2 settings
No configuration is required; this happens automatically.

Request Timing Jitter

The client adds randomized delays between requests to avoid pattern detection:
# config.yaml
rateLimit:
  requestDelay: 2.5     # Base delay in seconds
  # Actual delay is randomized: 2.5 × (0.7 to 1.5) = 1.75s to 3.75s
This prevents the consistent timing patterns that bot detectors look for.
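The jitter formula above can be sketched in a few lines. This is an illustrative reimplementation of the documented ×0.7–1.5 randomization, not twitter-cli's internal code:

```python
import random

def jittered_delay(base: float, low: float = 0.7, high: float = 1.5) -> float:
    """Scale a base delay by a uniform random factor to break timing patterns."""
    return base * random.uniform(low, high)

# With requestDelay = 2.5, every delay lands in [1.75, 3.75] seconds:
delays = [jittered_delay(2.5) for _ in range(1000)]
```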

Transaction ID Generation

Every request includes a dynamically generated x-client-transaction-id header that mimics real browser behavior. This is handled automatically.

Proxy Configuration

Why Use a Proxy?

  1. IP Protection: Avoid exposing your real IP to Twitter’s rate limiting
  2. Geographic Distribution: Rotate IPs to simulate natural access patterns
  3. Ban Mitigation: If one IP gets rate limited, switch to another
  4. Residential Safety: Residential IPs are far less likely to be flagged than datacenter IPs

Setting Up a Proxy

# HTTP proxy
export TWITTER_PROXY=http://127.0.0.1:7890

# SOCKS5 proxy (recommended for privacy)
export TWITTER_PROXY=socks5://127.0.0.1:1080

# With authentication
export TWITTER_PROXY=http://user:pass@proxy.example.com:8080
All twitter-cli requests will automatically route through the configured proxy.

Proxy Types Comparison

Proxy Type  | Detection Risk | Cost | Speed  | Recommended
----------- | -------------- | ---- | ------ | -----------
Residential | Very Low       | High | Medium | ✅ Yes
Mobile      | Very Low       | High | Medium | ✅ Yes
Datacenter  | High           | Low  | Fast   | ❌ No
Free Public | Very High      | Free | Slow   | ❌ Never

Proxy Rotation Strategy

#!/bin/bash
# Rotate proxy every 50 requests
PROXIES=("socks5://proxy1:1080" "socks5://proxy2:1080" "socks5://proxy3:1080")

for i in {1..150}; do
  # Rotate proxy every 50 iterations
  PROXY_INDEX=$(( (i / 50) % ${#PROXIES[@]} ))
  export TWITTER_PROXY=${PROXIES[$PROXY_INDEX]}
  
  twitter search "topic" --max 20 --json > results_${i}.json
  sleep $(( 30 + RANDOM % 61 ))  # Random delay of 30-90 seconds
done

Rate Limit Avoidance

Keep Request Volumes Low

# ❌ Bad: Aggressive scraping
twitter feed --max 500
twitter search "keyword" --max 1000

# ✅ Good: Conservative limits
twitter feed --max 20
twitter search "keyword" --max 50
Recommended limits:
  • Feed: 20-50 tweets per request
  • Search: 30-100 tweets per request
  • User posts: 20-50 tweets per request
  • Frequency: Maximum 2-3 requests per minute

Configure Rate Limit Settings

# config.yaml
rateLimit:
  requestDelay: 2.5        # Base delay between requests
  maxRetries: 3             # Retry attempts on rate limit (429)
  retryBaseDelay: 5.0       # Base delay for exponential backoff
  maxCount: 200             # Hard cap on fetched items
The client will automatically:
  • Add jitter to requestDelay (×0.7-1.5)
  • Exponentially back off on 429 errors
  • Stop at maxCount even if more data is available
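The backoff behavior can be sketched with the standard doubling formula. The exact formula twitter-cli uses is not specified here; this version is consistent with the config comments in this document (retryBaseDelay 10.0 giving 10s, 20s, 40s):

```python
def backoff_schedule(base: float, retries: int) -> list[float]:
    """Delay before each retry attempt: the base delay doubles per attempt."""
    return [base * (2 ** attempt) for attempt in range(retries)]

# With retryBaseDelay: 5.0 and maxRetries: 3 from the config above:
schedule = backoff_schedule(5.0, 3)  # [5.0, 10.0, 20.0]
```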

Write Operation Delays

Write operations (post, like, retweet, etc.) automatically include random delays:
# Built into the client
import random
import time

time.sleep(random.uniform(1.5, 4.0))  # Sleep 1.5-4 seconds before each write
This mimics human behavior and reduces the risk of write rate limiting.

Avoid Frequent Startups

Each CLI startup fetches x.com to initialize anti-detection headers. Frequent startups can trigger rate limits.
# ❌ Bad: Separate CLI calls
for user in alice bob charlie; do
  twitter user-posts $user --json > ${user}.json
done

# ✅ Good: Batch processing in one session
twitter user-posts alice --json > alice.json
sleep 5
twitter user-posts bob --json > bob.json
sleep 5
twitter user-posts charlie --json > charlie.json
Or write a script that keeps one client instance alive.

Account Safety

Don’t Use Your Main Account

If you’re doing aggressive scraping or automation:
  • Create a dedicated throwaway account
  • Use it exclusively for API/CLI access
  • Never post personal content from it
  • If it gets banned, your main account is safe

Avoid Datacenter IPs

Twitter flags datacenter IP ranges aggressively:
# ❌ High risk: AWS, GCP, Azure, DigitalOcean
ssh cloud-server 'twitter feed'

# ✅ Lower risk: residential proxy or home IP
twitter feed   # run from your local machine, routed through a residential proxy
If you must use a cloud server, always route through a residential proxy.

Login Hygiene

  1. Always log in from your real browser first before using twitter-cli
  2. Complete any security challenges (CAPTCHA, email verification) in the browser
  3. Wait 5-10 minutes after login before running CLI commands
  4. Don’t switch IPs rapidly between browser and CLI usage

Common Pitfalls

Your cookies have expired or your account is temporarily restricted.
Solution:
  1. Re-login to x.com in your browser
  2. Complete any security challenges
  3. Wait 5 minutes
  4. Retry your twitter-cli command
If errors persist, your IP may be flagged. Try using a residential proxy.
You’ve hit Twitter’s rate limit for your IP or account.
Solution:
  1. Reduce --max values (use 20-50 instead of 100+)
  2. Increase delays between requests in config.yaml
  3. Use a proxy to rotate IPs
  4. Wait 15-30 minutes before retrying
The client automatically retries with exponential backoff, but persistent 429s mean you need to slow down significantly.
Twitter’s GraphQL query IDs rotate periodically, causing the client’s hardcoded IDs to become invalid.
Solution:
  1. Retry the command; the client attempts a live queryId fallback
  2. Update twitter-cli to the latest version: uv tool upgrade twitter-cli
  3. If errors persist, file an issue at https://github.com/jackwener/twitter-cli/issues
This is usually temporary and resolves within hours as the client’s fallback mechanisms activate.
No. Never share raw cookie values.
Cookies are authentication credentials that grant full access to your Twitter account. If you need help:
  • Share error messages (redact any tokens)
  • Share command syntax questions
  • Never paste auth_token or ct0 values in public forums
The browser extraction feature means you should never need to manually copy cookies anyway.
Use with caution. Twitter’s Terms of Service prohibit automated access without API keys.
Safer use cases:
  • Personal research and data analysis
  • One-time data exports
  • Local development and testing
  • Accessing your own account data
Risky use cases:
  • High-volume scraping (1000+ tweets/hour)
  • Commercial data reselling
  • Automated posting at scale
  • Bypassing Twitter’s official API
If you need production-grade access, use Twitter’s official API.
Twitter may:
  • Temporarily lock your account (requires password reset)
  • Permanently suspend for repeated violations
  • Rate limit your IP (temporary, usually 15 minutes to 24 hours)
Prevention:
  1. Use a throwaway account for CLI access
  2. Never exceed 100-200 requests per hour
  3. Always use residential proxies for bulk operations
  4. Add 5-10 second delays between operations
Recovery:
  • For temporary locks: Reset password via email
  • For IP rate limits: Switch proxy or wait 24 hours
  • For permanent bans: Create new account, follow best practices

Configuration Example

Complete config.yaml for safe, production-grade usage:
fetch:
  count: 30                # Conservative default

filter:
  mode: "topN"
  topN: 20
  minScore: 50
  excludeRetweets: true     # Reduce noise
  weights:
    likes: 1.0
    retweets: 3.0
    replies: 2.0
    bookmarks: 5.0
    views_log: 0.5

rateLimit:
  requestDelay: 3.0         # 3 second base delay
  maxRetries: 3
  retryBaseDelay: 10.0      # 10s, 20s, 40s backoff
  maxCount: 150             # Hard cap to avoid runaway requests
Combine with proxy:
export TWITTER_PROXY=socks5://residential-proxy:1080
twitter feed --filter

Summary Checklist

  • ✅ Use browser cookie extraction (not manual env vars)
  • ✅ Configure a residential or mobile proxy
  • ✅ Keep --max values under 50 for most operations
  • ✅ Add 2-5 second delays between requests
  • ✅ Use a throwaway account for heavy automation
  • ✅ Avoid datacenter IPs (AWS, GCP, Azure, etc.)
  • ✅ Wait 5-10 minutes after browser login before CLI usage
  • ✅ Monitor for 429 errors and slow down immediately
  • ✅ Update twitter-cli regularly for latest anti-detection
  • ❌ Never share raw cookie values
  • ❌ Never scrape 500+ tweets in a single session
  • ❌ Never run CLI commands every few seconds in a loop
