Event Batching & Queuing
MentiQ Analytics uses an intelligent event queuing and batching system to optimize network requests, handle failures gracefully, and ensure reliable event delivery even in poor network conditions.
Overview
Instead of sending every event immediately, MentiQ queues events locally and sends them in batches. This approach:
Reduces network overhead - Fewer HTTP requests
Improves performance - Non-blocking event tracking
Handles offline scenarios - Events queued until connection restored
Provides retry logic - Automatic retry with exponential backoff
Prevents data loss - Events stored until successfully sent
How It Works
Event Creation
When you call track(), page(), or other tracking methods, events are created and added to an in-memory queue:

analytics.track("button_clicked", { button_id: "cta" });
// Event added to queue, function returns immediately
Queue Storage
Events are stored as QueuedEvent objects with metadata:

interface QueuedEvent {
  event: AnalyticsEvent;  // The actual event data
  retries: number;        // Number of retry attempts
  timestamp: number;      // When the event was created
}
Automatic Flushing
Events are automatically sent when either:
Queue reaches batchSize (default: 20 events)
flushInterval timer expires (default: 10 seconds)
// Configuration
const analytics = new Analytics({
  apiKey: "your-api-key",
  projectId: "your-project-id",
  batchSize: 25,       // Send after 25 events
  flushInterval: 5000, // Or every 5 seconds
});
Batch Processing
Events are split into batches and sent to the backend endpoint:

POST /api/v1/events/batch
Authorization: ApiKey your-api-key
X-Project-ID: your-project-id

[
  { event_type: "page_view", timestamp: "...", ... },
  { event_type: "click", timestamp: "...", ... },
  ...
]
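A helper that assembles this request might look like the following. This is a hedged sketch: the endpoint path and header names come from the example above, while `baseUrl` and the helper itself are hypothetical additions for illustration:

```typescript
interface BatchRequest {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
}

// Build the fetch() arguments for a batch send, per the request shape above
function buildBatchRequest(
  baseUrl: string,
  apiKey: string,
  projectId: string,
  events: object[],
): BatchRequest {
  return {
    url: `${baseUrl}/api/v1/events/batch`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `ApiKey ${apiKey}`,
        "X-Project-ID": projectId,
      },
      body: JSON.stringify(events), // the payload is a JSON array of events
    },
  };
}
```

The result can be passed straight to `fetch(req.url, req.init)`.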
Retry on Failure
If a batch fails to send, its events are re-queued with exponential backoff:
1st retry: Wait 1 second (retryDelay)
2nd retry: Wait 2 seconds (retryDelay × 2)
3rd retry: Wait 4 seconds (retryDelay × 4)
After 3 failed attempts: Events are discarded (configurable via retryAttempts)
Configuration Options
Batch Size
Number of events to accumulate before sending:
const analytics = new Analytics({
  apiKey: "your-api-key",
  projectId: "your-project-id",
  batchSize: 20, // Default: 20 events
});
When to adjust batch size
Increase batch size (30-50) for:
High-traffic applications
Reducing network requests
Cost optimization
Decrease batch size (5-10) for:
Real-time analytics requirements
Low-traffic applications
Testing and development
Flush Interval
Maximum time to wait before sending queued events:
const analytics = new Analytics({
  apiKey: "your-api-key",
  projectId: "your-project-id",
  flushInterval: 10000, // Default: 10 seconds (in milliseconds)
});
Events are sent when either batch size is reached or flush interval expires, whichever comes first.
Max Queue Size
Maximum number of events to keep in queue:
const analytics = new Analytics({
  apiKey: "your-api-key",
  projectId: "your-project-id",
  maxQueueSize: 1000, // Default: 1000 events
});
When the queue is full, the oldest events are removed to make room for new ones. Increase this value for offline-first applications.
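The drop-oldest policy can be illustrated with a small helper (an illustration of the behavior described above, not the SDK's code):

```typescript
// Append an event, discarding the oldest entries once maxQueueSize is exceeded
function enqueueBounded<T>(queue: T[], event: T, maxQueueSize: number): T[] {
  queue.push(event);
  while (queue.length > maxQueueSize) {
    queue.shift(); // drop the oldest event first
  }
  return queue;
}
```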
Retry Configuration
Control retry behavior for failed requests:
const analytics = new Analytics({
  apiKey: "your-api-key",
  projectId: "your-project-id",
  retryAttempts: 3, // Default: 3 attempts
  retryDelay: 1000, // Default: 1000ms initial delay
});
Retry delays use exponential backoff:
Attempt 1: retryDelay × 2^0 = 1000ms
Attempt 2: retryDelay × 2^1 = 2000ms
Attempt 3: retryDelay × 2^2 = 4000ms
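In other words, the delay before attempt n is retryDelay × 2^(n − 1):

```typescript
// Delay in ms before retry attempt `attempt` (1-based), matching the table above
function retryDelayMs(retryDelay: number, attempt: number): number {
  return retryDelay * 2 ** (attempt - 1);
}
```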
Manual Flushing
You can manually trigger event sending:
import { useAnalytics } from "mentiq-sdk";

function CheckoutButton() {
  const { track, flush } = useAnalytics();

  const handleCheckout = async () => {
    track("checkout_started", { cart_total: 99.99 });
    // Ensure event is sent before navigation
    await flush();
    router.push("/checkout");
  };

  return <button onClick={handleCheckout}>Checkout</button>;
}
Always flush before:
Page navigation that might unload the analytics instance
Critical events that must be sent immediately
Application shutdown or cleanup
Flush on Unload
Automatically flush when user leaves the page:
import { useEffect } from "react";
import { useAnalytics } from "mentiq-sdk";

function AnalyticsFlushProvider({ children }) {
  const { flush } = useAnalytics();

  useEffect(() => {
    const handleBeforeUnload = () => {
      flush().catch(console.error);
    };
    window.addEventListener("beforeunload", handleBeforeUnload);
    return () => {
      window.removeEventListener("beforeunload", handleBeforeUnload);
      flush(); // Flush on component unmount
    };
  }, [flush]);

  return <>{children}</>;
}
Queue Management
Check Queue Size
Monitor how many events are queued:
import { useAnalytics } from "mentiq-sdk";

function QueueMonitor() {
  const { getQueueSize } = useAnalytics();

  const checkQueue = () => {
    const size = getQueueSize();
    console.log(`${size} events in queue`);
    if (size > 500) {
      console.warn("Queue is getting large, consider flushing");
    }
  };

  return <button onClick={checkQueue}>Check Queue</button>;
}
Clear Queue
Remove all queued events (use with caution):
const { clearQueue } = useAnalytics();

// Clear all pending events
clearQueue();
clearQueue() permanently discards all queued events. Only use this for:
User logout/reset scenarios
Testing and development
Error recovery after configuration changes
Backend Batch Endpoint
Events are sent to the batch endpoint in this format:
POST /api/v1/events/batch
Content-Type: application/json
Authorization: ApiKey pk_live_1234567890
X-Project-ID: proj_abc123

[
  {
    "event_id": "evt_abc123",
    "event_type": "page_view",
    "user_id": "user_123",
    "session_id": "sess_456",
    "timestamp": "2024-03-15T10:30:00.000Z",
    "properties": {
      "page": "{\"url\":\"/products\",\"title\":\"Products\"}",
      "screen": "{\"width\":1920,\"height\":1080}"
    }
  },
  {
    "event_id": "evt_def456",
    "event_type": "click",
    "user_id": "user_123",
    "session_id": "sess_456",
    "timestamp": "2024-03-15T10:30:05.000Z",
    "properties": {
      "element": "button",
      "button_id": "add-to-cart"
    }
  }
]
Success (200 OK):

{
  "success": true,
  "processed": 2
}

Error (4xx/5xx):

{
  "error": "Invalid API key",
  "code": "AUTH_ERROR"
}
Before sending, events are transformed from the internal format to backend-compatible format:
interface AnalyticsEvent {
  id: string;
  timestamp: number; // Unix timestamp in ms
  type: "track" | "page" | "identify" | ...;
  event?: string;    // Event name for track events
  properties?: EventProperties;
  userId?: string;
  anonymousId: string;
  sessionId: string;
  context: {
    page?: PageProperties;
    userAgent?: string;
    screen?: { width: number; height: number };
    // ... more context
  };
}

interface BackendEvent {
  event_id?: string;
  event_type: string;           // Mapped from type + event
  user_id?: string;
  session_id?: string;
  timestamp?: string;           // ISO 8601 format
  properties?: EventProperties; // Flattened with context
  user_agent?: string;
  ip_address?: string;          // Auto-extracted by backend
}
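Based on the two interfaces, the mapping might look roughly like this. It is a simplified sketch using trimmed-down types; the SDK's real transformation, including context flattening, may differ:

```typescript
// Trimmed-down versions of the interfaces above, for a self-contained sketch
interface AnalyticsEventLite {
  id: string;
  timestamp: number; // Unix ms
  type: string;
  event?: string;    // Event name for track events
  properties?: Record<string, unknown>;
  userId?: string;
  sessionId: string;
}

interface BackendEventLite {
  event_id?: string;
  event_type: string;
  user_id?: string;
  session_id?: string;
  timestamp?: string; // ISO 8601
  properties?: Record<string, unknown>;
}

function toBackendEvent(e: AnalyticsEventLite): BackendEventLite {
  return {
    event_id: e.id,
    // "track" events use their event name; other types fall back to the type itself
    event_type: e.type === "track" && e.event ? e.event : e.type,
    user_id: e.userId,
    session_id: e.sessionId,
    timestamp: new Date(e.timestamp).toISOString(), // Unix ms -> ISO 8601
    properties: e.properties,
  };
}
```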
Event Type Mapping
The SDK automatically maps event types:
Track Events
Page Events
Identify Events
// SDK call
analytics.track("button_clicked", { button_id: "cta" });

// Sent as
{
  event_type: "button_clicked",
  properties: { button_id: "cta" }
}
Recommended Settings by Traffic Volume
High Traffic (>10k events/day)

const analytics = new Analytics({
  apiKey: "your-api-key",
  projectId: "your-project-id",
  batchSize: 50,       // Larger batches
  flushInterval: 5000, // 5 seconds
  maxQueueSize: 2000,  // Higher queue limit
  retryAttempts: 5,    // More retries
});

Medium Traffic (1k-10k events/day)

const analytics = new Analytics({
  apiKey: "your-api-key",
  projectId: "your-project-id",
  batchSize: 20,        // Default batch size
  flushInterval: 10000, // 10 seconds
  maxQueueSize: 1000,   // Default queue size
  retryAttempts: 3,     // Standard retries
});

Low Traffic (<1k events/day)

const analytics = new Analytics({
  apiKey: "your-api-key",
  projectId: "your-project-id",
  batchSize: 10,        // Smaller batches
  flushInterval: 15000, // 15 seconds
  maxQueueSize: 500,    // Lower queue size
  retryAttempts: 3,     // Standard retries
});

Real-Time Analytics

const analytics = new Analytics({
  apiKey: "your-api-key",
  projectId: "your-project-id",
  batchSize: 1,        // Send immediately
  flushInterval: 1000, // 1 second backup
  maxQueueSize: 100,   // Small queue
  retryAttempts: 2,    // Quick retries
});
Flush on Critical Events
For important events that must be sent immediately:
function CriticalEventTracker() {
  const { track, flush } = useAnalytics();

  const trackPurchase = async (orderId: string, amount: number) => {
    // Track purchase
    track("purchase_completed", {
      order_id: orderId,
      amount,
    });
    // Flush immediately for critical revenue events
    await flush();
  };

  return null;
}
Debugging
Enable debug mode to see batching behavior:
const analytics = new Analytics({
  apiKey: "your-api-key",
  projectId: "your-project-id",
  debug: true, // Enable console logging
});
You’ll see logs like:
MentiQ Analytics event queued: { type: "track", event: "button_clicked", ... }
MentiQ Analytics flushed 20 events
MentiQ Analytics batch send error: Network request failed
Error Handling
The SDK handles errors gracefully:
Network Failures
// Events are automatically retried with exponential backoff
analytics.track("event", { data: "value" });
// If the network fails, the event is re-queued and retried
API Errors
// 4xx errors (client errors) are logged but not retried
// 5xx errors (server errors) are retried
analytics.flush().catch((error) => {
  console.error("Failed to flush events:", error);
  // Events remain in queue for next flush attempt
});
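The retry decision described above can be expressed as a small predicate. This is an illustration of the stated policy, not the SDK's internals; network failures, which produce no status code, are also retried:

```typescript
// Decide whether a failed batch send should be retried, per the rules above
function shouldRetry(status: number | null): boolean {
  if (status === null) return true; // network failure: retry
  if (status >= 500) return true;   // 5xx server error: retry
  return false;                     // 4xx client error: log, don't retry
}
```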
Best Practices
Configure appropriate batch sizes based on your traffic volume
Flush before navigation to prevent event loss
Monitor queue size in high-traffic applications
Use manual flush sparingly to avoid performance impact
Enable debug mode during development to understand behavior
Handle flush promises for critical events
Don’t set batchSize: 1 unless you need real-time analytics. This creates one HTTP request per event and can impact performance.
Next Steps
TypeScript Types: Learn about QueuedEvent and batching types
Privacy Compliance: Implement privacy-safe event tracking