Overview
The SocialAnalyzer class is the core Python API for Social Analyzer. It provides programmatic access to all username search and analysis functionality.
Installation
pip install social-analyzer
Class Import
from social_analyzer import SocialAnalyzer
Class Initialization
Constructor
SocialAnalyzer(silent=False)
silent
boolean, default: False
Suppress all console output when True. Useful for library integration and background tasks.
Basic Initialization
# Default initialization
sa = SocialAnalyzer()
# Silent mode (no console output)
sa = SocialAnalyzer(silent=True)
Instance Attributes
After initialization, the class has the following attributes:
Core Attributes
sa.websites_entries
List of all available website definitions loaded from sites.json. Each entry contains URL patterns, detection rules, and metadata.
# [{'url': 'https://youtube.com/{username}', 'selected': 'true', ...}, ...]

sa.shared_detections
Shared detection patterns used across multiple websites.
# [{'pattern': '...', 'type': '...', ...}, ...]

sa.generic_detection
Generic detection rules applied when site-specific rules aren't available.
# [{'pattern': '...', 'confidence': '...', ...}, ...]
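Since each entry is a plain dict, standard list and dict operations apply. A minimal sketch of filtering entries by the selected flag, using sample data that mimics the documented shape (keys beyond url and selected are omitted here):

```python
# Sample entries shaped like sa.websites_entries (documented keys: 'url', 'selected')
websites_entries = [
    {'url': 'https://youtube.com/{username}', 'selected': 'true'},
    {'url': 'https://example.com/{username}', 'selected': 'false'},
]

# Note that 'selected' is the string 'true'/'false', not a Python bool
active = [e['url'] for e in websites_entries if e['selected'] == 'true']
print(active)
# → ['https://youtube.com/{username}']
```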
Configuration Attributes
sa.silent
boolean, default: False
Whether to suppress console output.
sa.silent = True  # Enable silent mode

sa.workers
integer, default: 15
Number of concurrent worker threads for parallel website checking.
sa.workers = 30  # Increase parallelism

sa.timeout
integer/None, default: None
Delay in seconds between requests. None means a random delay between 0.01 and 0.99 seconds.
sa.timeout = 2  # 2 second delay between requests

sa.waf
Web Application Firewall detection/bypass mode.
sa.waf = False  # Disable WAF handling

sa.headers
HTTP headers sent with each request.
sa.headers = {
    "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:86.0) Gecko/20100101 Firefox/86.0"
}
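The random-delay behavior described for timeout can be pictured with a small sketch. polite_sleep is a hypothetical helper, not part of the library; the 0.01-0.99 s bounds follow the documented range:

```python
import random
import time

def polite_sleep(timeout=None):
    """Sleep a fixed number of seconds, or a random 0.01-0.99 s delay when timeout is None."""
    delay = random.uniform(0.01, 0.99) if timeout is None else timeout
    time.sleep(delay)
    return delay

print(polite_sleep(0.05))  # fixed delay, returns 0.05
```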
Logging Attributes
sa.log
Python logging instance for the class.
sa.log.info("Custom log message")

sa.logs_dir
Directory for log files (when logging is enabled).
sa.logs_dir = "/path/to/logs"
Screenshots Attributes
sa.screenshots
boolean/None, default: None
Whether to capture screenshots of detected profiles.
sa.screenshots = True

sa.screenshots_location
string/None, default: None
Directory to save captured screenshots.
sa.screenshots_location = "./screenshots"
Internal Attributes
Path to the sites.json data file (automatically set to <package>/data/sites.json)
Path to the languages.json data file (automatically set to <package>/data/languages.json)
Loaded language definitions
Raw sites data loaded from JSON
Regex Pattern Attributes
Regex for detecting captcha and error pages (matches 'captcha-info', 'Please enable cookies', 'Completing the CAPTCHA')
Regex for detecting error page titles (matches 'not found', 'blocked', 'attention required', 'cloudflare')
Regex for filtering out meta tags (matches 'regionsAllowed', 'width', 'height', 'color', 'charset', etc.)
Regex for parsing top website numbers (matches 'top10', 'top50', 'top100', etc.)
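These patterns behave like ordinary compiled re objects. A sketch of how a captcha/error check of this kind works; the pattern strings below echo the matches listed above but are illustrative, not the library's exact regexes:

```python
import re

# Illustrative patterns echoing the documented matches
captcha_re = re.compile(r'captcha-info|Please enable cookies|Completing the CAPTCHA', re.IGNORECASE)
error_title_re = re.compile(r'not found|blocked|attention required|cloudflare', re.IGNORECASE)

def looks_blocked(html, title):
    """Return True when the response body or page title suggests a captcha/error page."""
    return bool(captcha_re.search(html) or error_title_re.search(title))

print(looks_blocked('<p>Completing the CAPTCHA proves you are human.</p>', 'Just a moment'))
# → True
print(looks_blocked('<p>profile bio</p>', 'John Doe'))
# → False
```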
Usage Patterns
Basic Setup
from social_analyzer import SocialAnalyzer
# Create instance
sa = SocialAnalyzer()
# Run analysis
results = sa.run_as_object(
    username="johndoe",
    websites="all"
)
print(results)
Silent Mode for Libraries
# No console output, perfect for integration
sa = SocialAnalyzer(silent=True)
results = sa.run_as_object(
    username="johndoe",
    output="json",
    silent=True
)
Custom Configuration
sa = SocialAnalyzer(silent=False)
# Customize settings
sa.workers = 25  # More concurrent requests
sa.timeout = 1   # 1 second delay between requests
sa.headers = {
    "User-Agent": "CustomBot/1.0",
    "Accept-Language": "en-US"
}
results = sa.run_as_object(username="johndoe")
With Screenshots
import os
sa = SocialAnalyzer()
sa.screenshots = True
sa.screenshots_location = "./screenshots"
# Create screenshots directory
os.makedirs(sa.screenshots_location, exist_ok=True)
results = sa.run_as_object(
    username="johndoe",
    logs=True,         # Required for screenshots
    screenshots=True
)
List All Websites
sa = SocialAnalyzer()
# Initialize detection data
sa.init_logic()
# List all available websites
sa.list_all_websites()
Method Categories
The SocialAnalyzer class provides several categories of methods:
Execution Methods
run_as_object() - Main method for programmatic usage
run_as_cli() - Parse command-line arguments and execute
check_user_cli() - Core CLI logic execution
Search Methods
find_username_normal() - Main username search using ThreadPoolExecutor
fetch_url() - Check individual website for username
Initialization Methods
init_logic() - Load detection data and initialize
init_detections() - Initialize detection rules
load_file() - Load JSON data files
Utility Methods
list_all_websites() - Display all available websites
get_website() - Extract domain from URL
search_and_change() - Update website entry
top_websites() - Select top N websites
delete_keys() - Remove specified keys from object
clean_up_item() - Filter profile fields
get_language_by_guessing() - Detect language from text
get_language_by_parsing() - Detect language from HTML
check_errors() - Error checking decorator
Logging Methods
setup_logger() - Configure logging
See Python Methods for detailed documentation of each method.
Advanced Configuration
Custom Detection Rules
sa = SocialAnalyzer()
sa.init_logic()
# Modify detection rules
for site in sa.websites_entries:
    if 'youtube' in site['url']:
        site['selected'] = 'true'
    else:
        site['selected'] = 'false'
# Now only YouTube will be checked
results = sa.find_username_normal({
    "body": {
        "uuid": "custom-task-id",
        "string": "johndoe",
        "options": "FindUserProfilesFast"
    }
})
Error Handling
try:
    sa = SocialAnalyzer(silent=True)
    results = sa.run_as_object(
        username="johndoe",
        websites="all",
        output="json"
    )
    if 'detected' in results:
        print(f"Found {len(results['detected'])} profiles")
except Exception as e:
    print(f"Error: {e}")
Thread Safety
The SocialAnalyzer class uses ThreadPoolExecutor internally but is not guaranteed to be thread-safe for external concurrent access. Create separate instances for each thread if needed.
import threading

results = {}

def search_user(username):
    # Use a separate instance per thread; store the result instead of
    # returning it, since a thread target's return value is discarded
    sa = SocialAnalyzer(silent=True)
    results[username] = sa.run_as_object(username=username)

thread1 = threading.Thread(target=search_user, args=("user1",))
thread2 = threading.Thread(target=search_user, args=("user2",))
thread1.start()
thread2.start()
thread1.join()
thread2.join()
Worker Count
The default worker count is 15. Adjust based on your system and network:
# Conservative (slower, but stable)
sa.workers = 10
# Aggressive (faster, but may trigger rate limits)
sa.workers = 50
# Recommended for most use cases
sa.workers = 25
Timeout Settings
# Random delay (default, most polite)
sa.timeout = None
# Fixed delay (predictable timing)
sa.timeout = 1 # 1 second between requests
# No delay (fastest, may trigger blocks)
sa.timeout = 0
Next Steps
Python Methods Detailed documentation of all class methods
CLI Reference Command-line interface documentation
Output Formats Understanding result structures
Web Endpoints Express web API reference