Overview
The Mention API SDK provides both synchronous and asynchronous clients to accommodate different use cases and programming paradigms. This guide helps you understand when to use each and how to work with them effectively.
Quick Comparison
| Feature | MentionClient (Sync) | AsyncMentionClient (Async) |
| --- | --- | --- |
| Use Case | Simple scripts, CLI tools | High-concurrency applications |
| Performance | One request at a time | Multiple concurrent requests |
| Complexity | Simple, straightforward | Requires async/await knowledge |
| Best For | Quick tasks, sequential operations | Web servers, batch processing |
| Python Version | 3.8+ | 3.8+ with asyncio support |
When to Use Sync Client
The MentionClient is ideal for:
- CLI Tools: Command-line scripts and utilities
- Simple Scripts: One-off data retrieval or processing tasks
- Sequential Operations: When operations must happen one after another
- Learning: When getting started with the API
- Low Volume: Processing small numbers of requests
Sync Client Example
```python
from mention import MentionClient, MentionConfig

def process_alerts():
    config = MentionConfig.from_env()
    with MentionClient.from_config(config) as client:
        # Get account information
        account = client.get_account_me()
        print(f"Processing alerts for: {account.name}")

        # Fetch alerts
        alerts_response = client.get_alerts(account.id)

        # Process each alert sequentially
        for alert in alerts_response.alerts:
            print(f"\nAlert: {alert.name}")

            # Get mentions for this alert
            mentions = client.get_mentions(
                account.id,
                alert.id,
                limit=10,
            )
            print(f"Found {len(mentions.mentions)} mentions")

if __name__ == "__main__":
    process_alerts()
```
When to Use Async Client
The AsyncMentionClient excels at:
- Web Applications: FastAPI, Sanic, or other async web frameworks
- High Concurrency: Processing many requests simultaneously
- I/O-Bound Operations: When waiting for API responses is the bottleneck
- Real-Time Processing: Streaming or event-driven applications
- Batch Operations: Processing multiple accounts/alerts in parallel
Async Client Example
```python
import asyncio
from mention import AsyncMentionClient, MentionConfig

async def process_alerts_async():
    config = MentionConfig.from_env()
    async with AsyncMentionClient.from_config(config) as client:
        # Get account information
        account = await client.get_account_me()
        print(f"Processing alerts for: {account.name}")

        # Fetch alerts
        alerts_response = await client.get_alerts(account.id)

        # Process alerts concurrently
        tasks = [
            process_alert(client, account.id, alert)
            for alert in alerts_response.alerts
        ]
        await asyncio.gather(*tasks)

async def process_alert(client, account_id, alert):
    """Process a single alert"""
    print(f"\nAlert: {alert.name}")
    mentions = await client.get_mentions(
        account_id,
        alert.id,
        limit=10,
    )
    print(f"Found {len(mentions.mentions)} mentions")

if __name__ == "__main__":
    asyncio.run(process_alerts_async())
```
API Compatibility
Both clients provide identical method signatures with the same parameters and return types. The only difference is that async methods must be awaited.
```python
# Sync: direct method calls
from mention import MentionClient

client = MentionClient(access_token="token")

account = client.get_account_me()
alerts = client.get_alerts(account.id)
mentions = client.get_mentions(account.id, alerts.alerts[0].id)
```

```python
# Async: await all method calls
import asyncio
from mention import AsyncMentionClient

async def main():
    client = AsyncMentionClient(access_token="token")

    account = await client.get_account_me()
    alerts = await client.get_alerts(account.id)
    mentions = await client.get_mentions(account.id, alerts.alerts[0].id)

    await client.close()

asyncio.run(main())
```
Sequential Processing (Sync)
Processing 10 alerts sequentially takes approximately 10 seconds (1 second per request):
```python
from mention import MentionClient
import time

def sync_processing():
    client = MentionClient(access_token="token")
    account = client.get_account_me()

    start = time.time()
    for i in range(10):
        # Each request takes ~1 second
        alerts = client.get_alerts(account.id)
    elapsed = time.time() - start

    print(f"Sync: {elapsed:.2f} seconds")  # ~10 seconds
```
Concurrent Processing (Async)
Processing 10 alerts concurrently takes approximately 1 second (all requests in parallel):
```python
import asyncio
from mention import AsyncMentionClient
import time

async def async_processing():
    client = AsyncMentionClient(access_token="token")
    account = await client.get_account_me()

    start = time.time()
    # Execute all 10 requests concurrently
    tasks = [client.get_alerts(account.id) for _ in range(10)]
    await asyncio.gather(*tasks)
    elapsed = time.time() - start

    print(f"Async: {elapsed:.2f} seconds")  # ~1 second
    await client.close()

asyncio.run(async_processing())
```
With 10 requests in flight, the async client finishes roughly 10x faster than the sync client; in general, the speedup scales with how many I/O-bound requests you can overlap.
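The timing difference is easy to reproduce without the SDK. This self-contained sketch stubs each API call with asyncio.sleep to show how asyncio.gather overlaps the waits:

```python
import asyncio
import time

async def fake_request():
    # Stands in for one I/O-bound API call (~0.1 s of network wait)
    await asyncio.sleep(0.1)

async def sequential():
    start = time.monotonic()
    for _ in range(10):
        await fake_request()
    return time.monotonic() - start

async def concurrent():
    start = time.monotonic()
    await asyncio.gather(*[fake_request() for _ in range(10)])
    return time.monotonic() - start

seq = asyncio.run(sequential())
conc = asyncio.run(concurrent())
print(f"sequential: {seq:.2f}s, concurrent: {conc:.2f}s")
```

Sequential awaiting pays for each wait in turn (~1 s total here); gather starts all ten coroutines before any of them finishes, so the total is roughly one request's latency.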
Real-World Use Cases
Use Case 1: CLI Tool (Sync)
A command-line tool for exporting mentions:
```python
import json

import click
from mention import MentionClient, MentionConfig

@click.command()
@click.option('--alert-id', required=True, help='Alert ID to export')
@click.option('--output', default='mentions.json', help='Output file')
def export_mentions(alert_id, output):
    """Export mentions to JSON file"""
    config = MentionConfig.from_env()
    with MentionClient.from_config(config) as client:
        account = client.get_account_me()

        # Collect all mentions using pagination
        all_mentions = []
        for mention in client.iter_mentions(account.id, alert_id, limit=100):
            all_mentions.append(mention.model_dump())

        # Save to file
        with open(output, 'w') as f:
            json.dump(all_mentions, f, indent=2)

        print(f"Exported {len(all_mentions)} mentions to {output}")

if __name__ == '__main__':
    export_mentions()
```
Use Case 2: FastAPI Integration (Async)
A web API endpoint using async client:
```python
from fastapi import FastAPI, Depends
from mention import AsyncMentionClient, MentionConfig

app = FastAPI()
config = MentionConfig.from_env()

async def get_client():
    """Dependency injection for the client"""
    client = AsyncMentionClient.from_config(config)
    try:
        yield client
    finally:
        await client.close()

@app.get("/alerts/{account_id}")
async def list_alerts(account_id: str, client: AsyncMentionClient = Depends(get_client)):
    """List all alerts for an account"""
    alerts = await client.get_alerts(account_id)
    return {"alerts": [alert.model_dump() for alert in alerts.alerts]}

@app.get("/mentions/{account_id}/{alert_id}")
async def list_mentions(account_id: str, alert_id: str, client: AsyncMentionClient = Depends(get_client)):
    """List mentions for an alert"""
    mentions = await client.get_mentions(account_id, alert_id, limit=50)
    return {"mentions": [m.model_dump() for m in mentions.mentions]}
```
Use Case 3: Batch Processing (Async)
Process multiple accounts concurrently:
```python
import asyncio
from mention import AsyncMentionClient, MentionConfig

async def process_account(client, account_id):
    """Process all alerts for a single account"""
    alerts = await client.get_alerts(account_id)
    total_mentions = 0
    for alert in alerts.alerts:
        mentions = await client.get_mentions(account_id, alert.id, limit=10)
        total_mentions += len(mentions.mentions)
    return account_id, total_mentions

async def batch_process_accounts(account_ids):
    """Process multiple accounts concurrently"""
    config = MentionConfig.from_env()
    async with AsyncMentionClient.from_config(config) as client:
        # Process all accounts in parallel
        tasks = [process_account(client, aid) for aid in account_ids]
        results = await asyncio.gather(*tasks)

    # Display results
    for account_id, count in results:
        print(f"Account {account_id}: {count} mentions")

# Process 5 accounts concurrently
account_ids = ["acc1", "acc2", "acc3", "acc4", "acc5"]
asyncio.run(batch_process_accounts(account_ids))
```
Migration Guide
Converting from sync to async is straightforward:
Change Client Import
Replace `MentionClient` with `AsyncMentionClient`:

```python
# Before
from mention import MentionClient

# After
from mention import AsyncMentionClient
```
Add async/await Keywords
Make your function `async` and `await` all client calls:

```python
# Before
def get_alerts():
    client = MentionClient(access_token="token")
    return client.get_alerts("account-id")

# After
async def get_alerts():
    client = AsyncMentionClient(access_token="token")
    return await client.get_alerts("account-id")
```
Use async Context Managers
Replace `with` blocks with `async with`:

```python
# Before
with MentionClient(access_token="token") as client:
    alerts = client.get_alerts("account-id")

# After
async with AsyncMentionClient(access_token="token") as client:
    alerts = await client.get_alerts("account-id")
```
Run with asyncio
Execute async functions with asyncio.run():

```python
import asyncio

asyncio.run(get_alerts())
```
Best Practices
Avoid calling sync methods from async code or vice versa. Choose one paradigm and stick with it.

```python
# ❌ Bad: Mixing sync and async
async def bad_example():
    sync_client = MentionClient(access_token="token")
    result = sync_client.get_alerts("account-id")  # Blocks the async event loop

# ✅ Good: Use async consistently
async def good_example():
    async_client = AsyncMentionClient(access_token="token")
    result = await async_client.get_alerts("account-id")
```
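If you are stuck with a synchronous helper inside async code (for example, a legacy function you cannot rewrite), asyncio.to_thread (Python 3.9+) offloads it to a worker thread so the event loop stays responsive. A minimal sketch with a stubbed blocking call:

```python
import asyncio
import time

def blocking_fetch(i):
    # Stands in for a synchronous call that blocks for ~0.2 s
    time.sleep(0.2)
    return i

async def main():
    start = time.monotonic()
    # Run the blocking calls in worker threads so they overlap
    results = await asyncio.gather(
        *[asyncio.to_thread(blocking_fetch, i) for i in range(5)]
    )
    elapsed = time.monotonic() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
print(f"{len(results)} calls in {elapsed:.2f}s")
```

Calling blocking_fetch directly inside a coroutine would serialize the five calls and freeze the loop for a full second; wrapped in to_thread they complete in roughly one call's duration.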
Both clients maintain HTTP connection pools. Reuse client instances instead of creating a new one for each request:

```python
# ❌ Bad: Creating a new client per request
async def bad_example():
    for account_id in account_ids:
        client = AsyncMentionClient(access_token="token")
        await client.get_alerts(account_id)
        await client.close()

# ✅ Good: Reuse a single client
async def good_example():
    async with AsyncMentionClient(access_token="token") as client:
        for account_id in account_ids:
            await client.get_alerts(account_id)
```
When processing many items concurrently, limit parallelism to avoid overwhelming the API:

```python
import asyncio

async def process_with_limit(client, items, max_concurrent=5):
    # Allow at most max_concurrent requests in flight at once
    semaphore = asyncio.Semaphore(max_concurrent)

    async def bounded_task(item):
        async with semaphore:
            # process_item is your own coroutine that handles one item
            return await process_item(client, item)

    return await asyncio.gather(*[bounded_task(item) for item in items])
```
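The bounding effect can be verified without the SDK. In this self-contained sketch each stub task records how many tasks are in flight, and the peak never exceeds the semaphore limit:

```python
import asyncio

async def demo(n_items=20, max_concurrent=5):
    semaphore = asyncio.Semaphore(max_concurrent)
    in_flight = 0
    peak = 0

    async def bounded_task(item):
        nonlocal in_flight, peak
        async with semaphore:
            in_flight += 1
            peak = max(peak, in_flight)
            await asyncio.sleep(0.01)  # stands in for an API call
            in_flight -= 1
            return item

    await asyncio.gather(*[bounded_task(i) for i in range(n_items)])
    return peak

peak = asyncio.run(demo())
print(f"peak concurrency: {peak}")
```

All 20 tasks are still submitted to gather at once; the semaphore simply parks each one at `async with semaphore` until a slot frees up.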
Next Steps
- Error Handling: Learn how to handle exceptions in both sync and async clients
- Pagination: Work with paginated responses efficiently
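As a preview of the error-handling guide, the usual pattern is a try/except around client calls with a branch on the HTTP status. The exception class and stub below are hypothetical stand-ins, not the SDK's real error hierarchy; see the error-handling page for the actual names:

```python
class MentionAPIError(Exception):
    """Hypothetical stand-in for the SDK's API error type."""
    def __init__(self, status_code, message):
        super().__init__(message)
        self.status_code = status_code

def fetch_alerts(simulate_rate_limit=False):
    # Stub standing in for client.get_alerts(...)
    if simulate_rate_limit:
        raise MentionAPIError(429, "rate limited")
    return ["alert-1", "alert-2"]

def safe_fetch():
    try:
        return fetch_alerts(simulate_rate_limit=True)
    except MentionAPIError as exc:
        if exc.status_code == 429:
            return []  # in real code: back off and retry
        raise

print(safe_fetch())
```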