Using proxies with Twikit helps you distribute requests across multiple IP addresses, mitigate rate limits, and add a layer of anonymity. This guide covers how to configure and use proxies effectively.

Basic proxy setup

You can configure a proxy when initializing the Client:
from twikit import Client

client = Client(
    language='en-US',
    proxy='http://proxy-server:8080'
)
The proxy parameter accepts a single proxy URL string. All HTTP requests made by the client will be routed through this proxy.

Supported proxy types

Twikit supports multiple proxy protocols through the underlying httpx library:

HTTP/HTTPS proxies

# HTTP proxy
client = Client(proxy='http://proxy.example.com:8080')

# HTTPS proxy
client = Client(proxy='https://proxy.example.com:8443')

SOCKS5 proxies

# SOCKS5 proxy
client = Client(proxy='socks5://proxy.example.com:1080')
To use SOCKS proxies, install the httpx[socks] extra: pip install "httpx[socks]" (the quotes prevent your shell from expanding the square brackets).

Proxy authentication

If your proxy requires authentication, include the credentials in the proxy URL:
from twikit import Client

client = Client(
    proxy='http://username:[email protected]:8080'
)
For SOCKS5 proxies with authentication:
client = Client(
    proxy='socks5://username:[email protected]:1080'
)
Be careful when hardcoding credentials. Use environment variables or a secrets management system for production applications.

Changing proxy at runtime

You can dynamically change the proxy after client initialization:
client = Client()

# Set initial proxy
client.proxy = 'http://proxy1.example.com:8080'

# Change to a different proxy
client.proxy = 'http://proxy2.example.com:8080'

# Check current proxy
print(client.proxy)  # http://proxy2.example.com:8080

# Remove proxy (direct connection)
client.proxy = None
The proxy property allows you to get or set the current proxy URL:
# Get current proxy
current_proxy = client.proxy

# Set new proxy
client.proxy = 'socks5://new-proxy:1080'

Rotating proxies

To avoid rate limits and distribute load, you can rotate through multiple proxies:
import asyncio
from itertools import cycle
from twikit import Client

PROXY_LIST = [
    'http://proxy1.example.com:8080',
    'http://proxy2.example.com:8080',
    'http://proxy3.example.com:8080',
]

proxy_pool = cycle(PROXY_LIST)

async def search_with_rotation(query):
    client = Client()
    
    for i in range(10):
        # Rotate to next proxy
        client.proxy = next(proxy_pool)
        print(f'Using proxy: {client.proxy}')
        
        try:
            tweets = await client.search_tweet(query, 'Latest', count=20)
            for tweet in tweets:
                print(tweet.text)
        except Exception as e:
            print(f'Error with proxy {client.proxy}: {e}')
            continue
        
        await asyncio.sleep(5)

asyncio.run(search_with_rotation('python'))
Consider implementing a proxy health check system to automatically skip failed or slow proxies.
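A health check can be as simple as tracking the recent success rate of each proxy and skipping any that fall below a threshold. This is a minimal sketch: the class name, window size, and threshold are assumptions, not part of Twikit.

```python
from collections import deque

class ProxyHealth:
    """Track recent request outcomes per proxy; drop proxies whose success rate is too low."""

    def __init__(self, proxies, window=20, min_success_rate=0.5):
        # Keep only the last `window` outcomes (True/False) per proxy
        self.history = {p: deque(maxlen=window) for p in proxies}
        self.min_success_rate = min_success_rate

    def record(self, proxy, success):
        self.history[proxy].append(success)

    def success_rate(self, proxy):
        outcomes = self.history[proxy]
        # Untested proxies count as healthy until proven otherwise
        return sum(outcomes) / len(outcomes) if outcomes else 1.0

    def healthy(self):
        return [p for p in self.history
                if self.success_rate(p) >= self.min_success_rate]
```

To use it with the rotation loop above, wrap each request in try/except, call record(proxy, True) on success and record(proxy, False) on failure, and rotate only through healthy() proxies.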

Testing proxy configuration

You can verify your proxy configuration by checking the client’s proxy property:
import asyncio
from twikit import Client

async def test_proxy():
    client = Client(
        proxy='http://proxy.example.com:8080'
    )
    
    # Verify proxy is set
    print(f'Configured proxy: {client.proxy}')
    
    try:
        # Test with a simple request
        await client.login(
            auth_info_1='username',
            password='password'
        )
        print('Proxy connection successful!')
    except Exception as e:
        print(f'Proxy connection failed: {e}')

asyncio.run(test_proxy())

Advanced proxy patterns

Proxy pool with fallback

Implement a proxy pool with automatic fallback:
import asyncio
from twikit import Client

class ProxyPool:
    def __init__(self, proxies):
        self.proxies = proxies
        self.current_index = 0
        self.failed_proxies = set()
    
    def get_next_proxy(self):
        # Skip failed proxies
        attempts = 0
        while attempts < len(self.proxies):
            proxy = self.proxies[self.current_index]
            self.current_index = (self.current_index + 1) % len(self.proxies)
            
            if proxy not in self.failed_proxies:
                return proxy
            
            attempts += 1
        
        return None  # All proxies failed
    
    def mark_failed(self, proxy):
        self.failed_proxies.add(proxy)
    
    def reset_failures(self):
        self.failed_proxies.clear()

async def robust_search(query, proxy_pool):
    client = Client()
    max_retries = 3
    
    for attempt in range(max_retries):
        proxy = proxy_pool.get_next_proxy()
        if proxy is None:
            print('All proxies exhausted')
            proxy_pool.reset_failures()  # Reset and try again
            proxy = proxy_pool.get_next_proxy()
        
        client.proxy = proxy
        print(f'Attempt {attempt + 1} with proxy: {proxy}')
        
        try:
            result = await client.search_tweet(query, 'Latest')
            return result
        except Exception as e:
            print(f'Failed with proxy {proxy}: {e}')
            proxy_pool.mark_failed(proxy)
    
    raise Exception('Failed after all retry attempts')

# Usage
proxies = [
    'http://proxy1.example.com:8080',
    'http://proxy2.example.com:8080',
    'http://proxy3.example.com:8080',
]

pool = ProxyPool(proxies)
asyncio.run(robust_search('twikit', pool))

Per-request proxy configuration

While Twikit doesn’t support per-request proxy settings directly, you can change the proxy before each request:
async def multi_account_scraping(accounts, proxies):
    client = Client()
    results = {}
    
    for account, proxy in zip(accounts, proxies):
        # Assign proxy for this account
        client.proxy = proxy
        
        try:
            user = await client.get_user_by_screen_name(account)
            results[account] = user
        except Exception as e:
            results[account] = f'Error: {e}'
    
    return results

Combining proxies with CAPTCHA solving

When using both proxies and CAPTCHA solvers, Capsolver needs to know about your proxy:
from twikit import Capsolver, Client

proxy = 'http://proxy.example.com:8080'

solver = Capsolver(
    api_key='your_api_key',
    max_attempts=5
)

client = Client(
    proxy=proxy,
    captcha_solver=solver
)
The Capsolver integration automatically uses your configured proxy when solving CAPTCHAs. It selects the appropriate CAPTCHA task type:
  • FunCaptchaTaskProxyLess: When no proxy is configured
  • FunCaptchaTask: When a proxy is configured (more reliable)

Troubleshooting

Proxy connection timeout

If requests time out, increase the request timeout (Twikit passes it through to the underlying httpx transport):
from twikit import Client

client = Client(
    proxy='http://slow-proxy.example.com:8080',
    timeout=30.0  # 30 second timeout
)

Proxy authentication failed

Ensure your credentials are properly URL-encoded:
from urllib.parse import quote

username = quote('[email protected]', safe='')  # safe='' also escapes ':' and '/'
password = quote('p@ssw0rd!', safe='')
proxy = f'http://{username}:{password}@proxy.example.com:8080'

client = Client(proxy=proxy)

Proxy returns 407 error

This indicates proxy authentication is required but not provided or incorrect:
# Ensure format is correct
client = Client(proxy='http://username:password@proxy:8080')

Best practices

  1. Use residential proxies: They’re less likely to be flagged by Twitter
  2. Rotate proxies regularly: Don’t make too many requests from a single IP
  3. Monitor proxy health: Track success rates and response times
  4. Handle failures gracefully: Implement fallback mechanisms
  5. Respect rate limits: Proxies don’t eliminate rate limits, just distribute them
  6. Keep proxy lists updated: Remove dead or slow proxies regularly
Using proxies doesn’t make you completely anonymous. Twitter can still track you through cookies, authentication tokens, and behavior patterns.
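Practices 2 and 5 above can be combined into a simple per-proxy request budget: give each proxy a cap of requests per time window and rotate to the next one when the cap is reached. The sketch below is illustrative only; the 50-requests-per-15-minutes budget is an assumption, not Twitter's actual limit.

```python
import time

class BudgetedRotator:
    """Rotate proxies while capping requests per proxy per time window."""

    def __init__(self, proxies, max_requests=50, window=900):
        self.proxies = proxies
        self.max_requests = max_requests
        self.window = window  # seconds
        self._used = {p: [] for p in proxies}  # timestamps of recent requests

    def acquire(self, now=None):
        """Return a proxy with remaining budget, or None if all are exhausted."""
        now = time.monotonic() if now is None else now
        for proxy in self.proxies:
            # Drop timestamps that have aged out of the window
            recent = [t for t in self._used[proxy] if now - t < self.window]
            self._used[proxy] = recent
            if len(recent) < self.max_requests:
                recent.append(now)
                return proxy
        return None  # every proxy is at its budget; the caller should wait
```

Before each request, call acquire() and assign the result to client.proxy; when it returns None, sleep until a window rolls over instead of hammering exhausted IPs.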
