
Overview

The Mention API returns large datasets in pages to optimize performance and reduce response sizes. This guide covers all pagination strategies, from basic cursor-based pagination to automatic iteration with iter_mentions().

Pagination Methods

The SDK provides two approaches to pagination:
  1. Manual Pagination: Explicit control using cursor parameter
  2. Automatic Iteration: Use iter_mentions() for seamless pagination

Understanding Pagination Responses

Paginated responses include metadata for navigating pages:
from mention import MentionClient

client = MentionClient(access_token="token")
response = client.get_mentions("account-id", "alert-id", limit=100)

# MentionsResponse structure
print(response.mentions)      # List of Mention objects (up to 100)
print(response.has_more)      # Boolean: more pages available?
print(response.links)         # Links object with pagination URLs
print(response.links.more)    # Cursor for next page
Response attributes:
  • mentions (list[Mention]): Current page of results
  • has_more (bool): Whether more results exist
  • links (Links): Navigation links for pagination
  • links.more (str | None): Cursor token for next page

Automatic Iteration with iter_mentions()

The iter_mentions() method handles pagination automatically, yielding mentions one at a time:
from mention import MentionClient

client = MentionClient(access_token="token")

# Automatically iterate through all mentions
for mention in client.iter_mentions(
    "account-id",
    "alert-id",
    limit=100  # Page size (fetches 100 per request)
):
    print(f"{mention.title} - {mention.published_at}")
    # Process mention...

# The method handles:
# - Fetching pages automatically
# - Following pagination cursors
# - Stopping when no more results
iter_mentions() is available for both MentionClient (sync) and AsyncMentionClient (async). Note that the async version uses async for instead of regular for.
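For the async client the loop body is identical; only the iteration keyword changes. The sketch below uses a minimal stub in place of AsyncMentionClient (the stub class and its canned data are assumptions, not the real SDK) so the async for pattern is runnable on its own:

```python
import asyncio

class StubAsyncClient:
    """Stand-in for AsyncMentionClient, yielding canned mentions."""
    async def iter_mentions(self, account_id, alert_id, limit=100):
        for title in ["first mention", "second mention"]:
            yield {"title": title}

async def main():
    client = StubAsyncClient()
    titles = []
    # Same pattern as the sync client, but with `async for`
    async for mention in client.iter_mentions("account-id", "alert-id", limit=100):
        titles.append(mention["title"])
    return titles

print(asyncio.run(main()))  # → ['first mention', 'second mention']
```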

Benefits of iter_mentions()

  • Automatic pagination: No manual cursor management
  • Memory efficient: Yields items one at a time
  • Clean code: Simple iteration without boilerplate
  • Handles edge cases: Automatically stops when no more data

Manual Pagination

For fine-grained control, manually manage pagination with cursors:
from mention import MentionClient

client = MentionClient(access_token="token")

cursor = None
all_mentions = []

while True:
    # Fetch page with optional cursor
    response = client.get_mentions(
        "account-id",
        "alert-id",
        limit=100,
        cursor=cursor
    )
    
    # Process current page
    all_mentions.extend(response.mentions)
    print(f"Fetched {len(response.mentions)} mentions")
    
    # Check if more pages exist
    if not response.has_more or not response.links or not response.links.more:
        break
    
    # Update cursor for next page
    cursor = response.links.more

print(f"Total: {len(all_mentions)} mentions")
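The manual loop above can also be wrapped in a generator so callers iterate lazily, which is roughly what iter_mentions() does internally. This is a minimal sketch: the fetch_page function and its page data are stand-ins (not part of the SDK) so the cursor-following pattern is runnable on its own:

```python
def paginate(fetch_page, limit=100):
    """Yield items lazily, following cursors until exhausted.

    fetch_page(cursor, limit) must return an (items, next_cursor) pair;
    it stands in for client.get_mentions() here.
    """
    cursor = None
    while True:
        items, next_cursor = fetch_page(cursor, limit)
        yield from items
        if next_cursor is None:  # no more pages
            break
        cursor = next_cursor

# Stand-in data source: three pages of fake mentions keyed by cursor
_pages = {
    None: (["m1", "m2"], "c1"),
    "c1": (["m3", "m4"], "c2"),
    "c2": (["m5"], None),
}

def fetch_page(cursor, limit):
    return _pages[cursor]

print(list(paginate(fetch_page)))  # → ['m1', 'm2', 'm3', 'm4', 'm5']
```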

Pagination with Filters

Combine pagination with filters to narrow down results:
from mention import MentionClient
from datetime import datetime, timedelta

client = MentionClient(access_token="token")

# Filter mentions from last 7 days
since = datetime.now() - timedelta(days=7)

for mention in client.iter_mentions(
    "account-id",
    "alert-id",
    limit=100,
    not_before_date=since,     # Date filter
    tone="positive",           # Sentiment filter
    read=False                 # Unread only
):
    print(f"Positive unread mention: {mention.title}")
Available filters:
  • before_date (datetime | str): Mentions before this date
  • not_before_date (datetime | str): Mentions on or after this date
  • since_id (str): Mentions since this mention ID
  • source (str): Filter by source (twitter, facebook, etc.)
  • read (bool): Read/unread status
  • favorite (bool): Favorited mentions
  • tone (str): Sentiment (positive, negative, neutral)

Page Size Optimization

Choose page size based on your use case:
# Small pages (faster initial response, more requests)
for mention in client.iter_mentions(account_id, alert_id, limit=50):
    pass

# Large pages (fewer requests, slower per request)
for mention in client.iter_mentions(account_id, alert_id, limit=1000):
    pass

# Balanced approach (recommended)
for mention in client.iter_mentions(account_id, alert_id, limit=100):
    pass
Recommendations:
  • Interactive UI: 20-50 items per page
  • Background processing: 100-500 items per page
  • Batch exports: 500-1000 items per page (max: 1000)
The API enforces a maximum limit of 1000 mentions per request. Larger values are automatically capped.
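Page size directly determines request count: paging through N mentions with a page size of limit takes ceil(N / limit) requests. A quick check of the trade-off (the helper function here is illustrative, not part of the SDK):

```python
import math

def request_count(total_mentions, limit):
    # Number of API calls needed to page through the full result set,
    # with limit capped at the API maximum of 1000
    return math.ceil(total_mentions / min(limit, 1000))

print(request_count(5000, 50))    # → 100 requests
print(request_count(5000, 100))   # → 50 requests
print(request_count(5000, 1000))  # → 5 requests
print(request_count(5000, 2000))  # capped to 1000, so still 5 requests
```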

Real-World Examples

Example 1: Export All Mentions to JSON

import json
from mention import MentionClient
from datetime import datetime

def export_mentions_to_json(account_id, alert_id, output_file):
    """Export all mentions to a JSON file"""
    client = MentionClient(access_token="token")
    
    mentions_data = []
    count = 0
    
    print("Fetching mentions...")
    for mention in client.iter_mentions(account_id, alert_id, limit=500):
        mentions_data.append({
            "id": mention.id,
            "title": mention.title,
            "description": mention.description,
            "url": mention.original_url,
            "published_at": mention.published_at.isoformat() if mention.published_at else None,
            "tone": mention.tone,
            "source": mention.source_name
        })
        count += 1
        if count % 100 == 0:
            print(f"Processed {count} mentions...")
    
    # Save to file
    with open(output_file, 'w', encoding='utf-8') as f:
        json.dump(mentions_data, f, indent=2, ensure_ascii=False)
    
    print(f"Exported {count} mentions to {output_file}")

# Usage
export_mentions_to_json("account-id", "alert-id", "mentions.json")

Example 2: Batch Processing with Progress

from mention import MentionClient
from tqdm import tqdm

def process_mentions_with_progress(account_id, alert_id):
    """Process mentions with progress bar"""
    client = MentionClient(access_token="token")
    
    # The API does not return a total count, so the progress bar
    # runs without a known total
    
    processed = 0
    unread_count = 0
    
    # Use tqdm for progress bar
    with tqdm(desc="Processing mentions", unit=" mentions") as pbar:
        for mention in client.iter_mentions(account_id, alert_id, limit=100):
            # Process mention
            if not mention.read:
                unread_count += 1
            
            processed += 1
            pbar.update(1)
    
    print(f"\nProcessed {processed} mentions ({unread_count} unread)")

process_mentions_with_progress("account-id", "alert-id")

Example 3: Parallel Pagination (Async)

import asyncio
from mention import AsyncMentionClient

async def process_alert_mentions(client, account_id, alert):
    """Process mentions for a single alert"""
    count = 0
    async for mention in client.iter_mentions(
        account_id,
        alert.id,
        limit=100
    ):
        # Process mention
        count += 1
    
    return alert.name, count

async def process_all_alerts_parallel(account_id):
    """Process mentions for all alerts in parallel"""
    async with AsyncMentionClient(access_token="token") as client:
        # Fetch all alerts
        alerts_response = await client.get_alerts(account_id)
        
        # Process all alerts concurrently
        tasks = [
            process_alert_mentions(client, account_id, alert)
            for alert in alerts_response.alerts
        ]
        
        results = await asyncio.gather(*tasks)
        
        # Display results
        for alert_name, count in results:
            print(f"{alert_name}: {count} mentions")

# Run
asyncio.run(process_all_alerts_parallel("account-id"))

Example 4: Stream Processing with Early Exit

from mention import MentionClient
from datetime import datetime

def find_first_negative_mention(account_id, alert_id):
    """Find the first negative mention and stop"""
    client = MentionClient(access_token="token")
    
    for mention in client.iter_mentions(
        account_id,
        alert_id,
        limit=100,
        tone="negative"  # Filter for negative only
    ):
        # Found first negative mention
        print(f"Found negative mention: {mention.title}")
        print(f"URL: {mention.original_url}")
        print(f"Published: {mention.published_at}")
        return mention
    
    print("No negative mentions found")
    return None

negative = find_first_negative_mention("account-id", "alert-id")

Example 5: Resumable Pagination

import json
from mention import MentionClient

CHECKPOINT_FILE = "pagination_checkpoint.json"

def save_checkpoint(cursor, processed_count):
    """Save pagination state"""
    with open(CHECKPOINT_FILE, 'w') as f:
        json.dump({"cursor": cursor, "processed": processed_count}, f)

def load_checkpoint():
    """Load pagination state"""
    try:
        with open(CHECKPOINT_FILE, 'r') as f:
            return json.load(f)
    except FileNotFoundError:
        return {"cursor": None, "processed": 0}

def resumable_processing(account_id, alert_id):
    """Process mentions with ability to resume"""
    client = MentionClient(access_token="token")
    
    # Load previous state
    checkpoint = load_checkpoint()
    cursor = checkpoint["cursor"]
    processed_count = checkpoint["processed"]
    
    print(f"Resuming from {processed_count} processed mentions")
    
    try:
        while True:
            response = client.get_mentions(
                account_id,
                alert_id,
                limit=100,
                cursor=cursor
            )
            
            for mention in response.mentions:
                # Process mention
                print(f"Processing: {mention.title}")
                processed_count += 1
                
                # Save checkpoint every 10 mentions; on resume the current
                # page is re-fetched, so processing should be idempotent
                if processed_count % 10 == 0:
                    save_checkpoint(cursor, processed_count)
            
            if not response.has_more or not response.links or not response.links.more:
                break
            
            cursor = response.links.more
    
    except KeyboardInterrupt:
        print(f"\nInterrupted. Progress saved at {processed_count} mentions.")
        save_checkpoint(cursor, processed_count)
    
    print(f"\nCompleted processing {processed_count} mentions")

resumable_processing("account-id", "alert-id")

Best Practices

Use automatic iteration unless you need fine control:
# ✅ Good: Simple and clean
for mention in client.iter_mentions(account_id, alert_id):
    process(mention)

# ❌ Unnecessary: Manual pagination for simple case
cursor = None
while True:
    response = client.get_mentions(account_id, alert_id, cursor=cursor)
    for mention in response.mentions:
        process(mention)
    if not response.has_more:
        break
    cursor = response.links.more
Apply filters to fetch only what you need:
# ✅ Good: Filter server-side
for mention in client.iter_mentions(
    account_id, alert_id,
    read=False,        # Only unread
    tone="negative"    # Only negative
):
    process(mention)

# ❌ Bad: Fetch everything then filter client-side
for mention in client.iter_mentions(account_id, alert_id):
    if not mention.read and mention.tone == "negative":
        process(mention)
Balance request frequency vs response size:
# For real-time UI: smaller pages
for mention in client.iter_mentions(account_id, alert_id, limit=20):
    display(mention)  # Update UI frequently

# For batch processing: larger pages
for mention in client.iter_mentions(account_id, alert_id, limit=1000):
    process(mention)  # Fewer requests; each page carries more items
Save progress for resumable processing:
# Implement checkpointing for long-running jobs
checkpoint = load_checkpoint()

i = 0
try:
    for i, mention in enumerate(client.iter_mentions(...)):
        process(mention)
        if i % 100 == 0:
            save_checkpoint(i)
except KeyboardInterrupt:
    save_checkpoint(i)  # i is defined even if interrupted before the loop
    print("Progress saved, can resume later")

Performance Tips

Parallel Processing

Process multiple alerts concurrently with async:
import asyncio
from mention import AsyncMentionClient

async def process_mentions_concurrent(account_id, alert_ids):
    """Process multiple alerts in parallel"""
    async with AsyncMentionClient(access_token="token") as client:
        async def process_alert(alert_id):
            count = 0
            async for mention in client.iter_mentions(account_id, alert_id):
                count += 1
            return count
        
        # Process all alerts concurrently
        results = await asyncio.gather(*[
            process_alert(alert_id) for alert_id in alert_ids
        ])
        
        return dict(zip(alert_ids, results))

Memory-Efficient Processing

Process mentions without loading all into memory:
# ✅ Good: Memory-efficient streaming
for mention in client.iter_mentions(account_id, alert_id, limit=1000):
    process(mention)  # Process one at a time
    # Previous mentions are garbage collected

# ❌ Bad: Loads everything into memory
all_mentions = []
for mention in client.iter_mentions(account_id, alert_id):
    all_mentions.append(mention)  # Memory grows unbounded

Next Steps

  • API Reference: Complete API documentation
  • Mentions API: Learn more about the mentions API
