
Overview

twitter-cli can export the data from any read command as JSON for scripting, data analysis, and offline workflows. You can also load previously exported JSON files back into the CLI for filtering and display.

Exporting to JSON

Using the --json Flag

Add --json to any read command to output structured JSON instead of the default terminal format:
# Export feed to stdout
twitter feed --json

# Export search results
twitter search "AI agents" --json

# Export user posts
twitter user-posts elonmusk --json

# Export bookmarks
twitter favorites --json

# Export user profile
twitter user elonmusk --json

Saving to File with --output / -o

Use the --output (or -o) flag to save JSON directly to a file:
# Save search results
twitter search "machine learning" -o results.json

# Save user tweets
twitter user-posts elonmusk -o elonmusk_tweets.json

# Save likes
twitter likes elonmusk -o likes.json

# Save feed
twitter feed --max 100 -o feed.json

Redirecting Output

You can also use shell redirection with --json:
twitter feed --json > tweets.json
twitter search "keyword" --json > search_results.json
twitter user-posts username --json > user_tweets.json

Importing from JSON

Using the --input Flag

Load previously exported JSON files to re-display or re-filter them:
# Load and display tweets from JSON
twitter feed --input tweets.json

# Load and apply ranking filter
twitter feed --input tweets.json --filter

# Load and limit output
twitter feed --input tweets.json --max 10
This is useful for:
  • Offline browsing of previously fetched data
  • Re-applying filters without making new API calls
  • Sharing datasets with teammates
  • Testing filter configurations
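If you want the same re-filter step outside the CLI, it takes only a few lines of Python. This is an illustrative sketch, not twitter-cli's own code: the refilter helper name and the 50-like threshold are assumptions, and the file layout follows the tweet schema documented in the next section.

```python
import json

def refilter(in_path, out_path, min_likes=50):
    """Re-apply a likes threshold to a previously exported JSON file,
    mirroring a --input + --filter pass without new API calls.
    (Illustrative helper; not part of twitter-cli.)"""
    with open(in_path, encoding="utf-8") as f:
        tweets = json.load(f)
    kept = [t for t in tweets if t["metrics"]["likes"] >= min_likes]
    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(kept, f, ensure_ascii=False, indent=2)
    return len(kept)
```

The output file keeps the same structure as the export, so it can be loaded back with --input.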

JSON Schema

Tweet Object

Tweets are serialized using the tweet_to_dict() function in serialization.py:
{
  "id": "1234567890",
  "text": "This is a tweet",
  "author": {
    "id": "123456",
    "name": "John Doe",
    "screenName": "johndoe",
    "profileImageUrl": "https://...",
    "verified": false
  },
  "metrics": {
    "likes": 42,
    "retweets": 10,
    "replies": 5,
    "quotes": 2,
    "views": 1000,
    "bookmarks": 3
  },
  "createdAt": "2024-01-15T10:30:00.000Z",
  "media": [
    {
      "type": "photo",
      "url": "https://...",
      "width": 1200,
      "height": 675
    }
  ],
  "urls": ["https://example.com"],
  "isRetweet": false,
  "retweetedBy": null,
  "lang": "en",
  "score": 127.5,
  "quotedTweet": {
    "id": "9876543210",
    "text": "Original tweet",
    "author": {
      "screenName": "originaluser",
      "name": "Original User"
    }
  }
}
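Because these field names are stable, downstream scripts can rely on them directly. For example, a small helper (illustrative, not part of twitter-cli) that sums the interaction counts:

```python
def engagement(tweet: dict) -> int:
    """Total interactions for a tweet dict following the schema above:
    likes + retweets + replies + quotes."""
    m = tweet["metrics"]
    return m["likes"] + m["retweets"] + m["replies"] + m["quotes"]
```

For the sample tweet above this returns 42 + 10 + 5 + 2 = 59.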

User Profile Object

User profiles are serialized using user_profile_to_dict():
{
  "id": "123456",
  "name": "John Doe",
  "screenName": "johndoe",
  "bio": "Software engineer and AI enthusiast",
  "location": "San Francisco, CA",
  "url": "https://example.com",
  "followers": 1000,
  "following": 500,
  "tweets": 2500,
  "likes": 10000,
  "verified": false,
  "profileImageUrl": "https://...",
  "createdAt": "2015-03-15T00:00:00.000Z"
}
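Profile exports compose the same way. A hypothetical helper that derives a followers-per-following ratio from this structure:

```python
def follower_ratio(profile: dict):
    """Followers divided by following for a profile dict following the
    schema above; returns None when the account follows nobody.
    (Illustrative helper; not part of twitter-cli.)"""
    following = profile["following"]
    return profile["followers"] / following if following else None
```

For the sample profile above this returns 1000 / 500 = 2.0.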

Scripting Examples

Using jq for Processing

# Extract just the tweet text
twitter feed --json | jq '.[].text'

# Count total tweets
twitter search "keyword" --json | jq 'length'

# Get tweets with more than 100 likes
twitter feed --json | jq '[.[] | select(.metrics.likes > 100)]'

# Extract author screen names
twitter search "AI" --json | jq '.[].author.screenName' | sort -u

# Get average likes
twitter feed --json | jq '[.[].metrics.likes] | add / length'

Python Processing

import json
import subprocess

# Fetch tweets as JSON
result = subprocess.run(
    ["twitter", "feed", "--json", "--max", "50"],
    capture_output=True,
    text=True
)

tweets = json.loads(result.stdout)

# Analyze engagement
for tweet in tweets:
    metrics = tweet["metrics"]
    engagement = metrics["likes"] + metrics["retweets"] + metrics["replies"]
    print(f"{tweet['author']['screenName']}: {engagement} total engagement")

Data Pipeline

#!/bin/bash
# Daily backup workflow
DATE=$(date +%Y-%m-%d)

# Backup feed
twitter feed --max 200 --json > backups/feed_${DATE}.json

# Backup bookmarks
twitter favorites --json > backups/bookmarks_${DATE}.json

# Extract high-engagement tweets
cat backups/feed_${DATE}.json | \
  jq '[.[] | select(.metrics.likes > 50)]' > \
  backups/feed_${DATE}_filtered.json

Implementation Details

The JSON serialization is handled by twitter_cli/serialization.py:
  • tweets_to_json() - Serializes list of Tweet objects to pretty JSON
  • tweets_from_json() - Deserializes JSON string back to Tweet objects
  • tweet_to_dict() - Converts single Tweet to dictionary
  • tweet_from_dict() - Converts dictionary to Tweet object
  • users_to_json() - Serializes user profiles to JSON
  • user_profile_to_dict() - Converts UserProfile to dictionary
All JSON output uses:
  • ensure_ascii=False to preserve Unicode characters
  • indent=2 for readable formatting
  • Consistent camelCase field names for JavaScript interoperability
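The serializers themselves live in serialization.py; the following is only a sketch of the output conventions listed above, with illustrative helper names that are not the module's actual code:

```python
import json
import re

def to_camel(name: str) -> str:
    # snake_case -> camelCase, e.g. "screen_name" -> "screenName".
    # (Illustrative; the real field-name mapping lives in serialization.py.)
    return re.sub(r"_([a-z])", lambda m: m.group(1).upper(), name)

def dump(objs) -> str:
    # Mirrors the documented output settings: Unicode preserved
    # (ensure_ascii=False), two-space indentation.
    return json.dumps(objs, ensure_ascii=False, indent=2)
```

With these settings, non-ASCII text such as "café" is written verbatim rather than as \u escapes.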

Best Practices

  1. Use -o for large exports - More reliable than shell redirection for large datasets
  2. Save raw data first - Export to JSON before applying filters so you can re-filter later
  3. Version your exports - Include timestamps in filenames for tracking
  4. Validate JSON - Use jq or Python to validate exported files before processing
  5. Compress archives - Use gzip for long-term storage: gzip feed_2024-01-15.json
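Practice 4 can be automated with a few lines of Python. A minimal sanity check (illustrative; validate_export is not a twitter-cli function, and this is not a full schema validation):

```python
import json

def validate_export(path: str) -> bool:
    """Light sanity check for an exported tweet file: it must parse as
    JSON and be a list of objects carrying the expected top-level keys.
    Raises ValueError on malformed JSON."""
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    return isinstance(data, list) and all(
        isinstance(t, dict) and "id" in t and "text" in t for t in data
    )
```

Run it against each export before feeding the file into downstream pipelines.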
