Overview
The analytics system automatically:
- Analyzes conversations after agent runs complete
- Extracts sentiment and frustration signals
- Classifies use cases to understand what users are doing
- Detects feature requests from user feedback
- Calculates engagement metrics using RFM analysis
How Analytics Works
Analytics processing happens asynchronously in the background.
Conversation Analysis
The AI analyzes conversations across multiple dimensions.
Sentiment Analysis
Classifies overall conversation sentiment:
- Positive: User satisfied, task successful
- Neutral: Informational, no strong emotion
- Negative: User frustrated or disappointed
- Mixed: Combination of positive and negative
Frustration Detection
Identifies and scores user frustration (0.0 to 1.0).
Intent Classification
Categorizes the user’s primary intent:
- Question: User asking for information
- Task: User requesting the agent to do something
- Complaint: User expressing dissatisfaction
- Feature request: User suggesting improvements
- Chat: Casual conversation
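Taken together, the dimensions above form a single structured result per conversation. A minimal sketch of validating such a result before storing it, assuming the model returns JSON; the field names and defaults here are illustrative, not the actual Kortix schema:

```python
import json
from dataclasses import dataclass

SENTIMENTS = {"positive", "neutral", "negative", "mixed"}
INTENTS = {"question", "task", "complaint", "feature_request", "chat"}

@dataclass
class ConversationAnalysis:
    sentiment: str            # one of SENTIMENTS
    frustration_score: float  # 0.0 (calm) to 1.0 (highly frustrated)
    intent: str               # one of INTENTS
    use_case: str             # free-text label, e.g. "invoice extraction"
    is_feature_request: bool

def parse_analysis(raw: str) -> ConversationAnalysis:
    """Validate the model's JSON output before storing it."""
    data = json.loads(raw)
    sentiment = data.get("sentiment", "neutral")
    if sentiment not in SENTIMENTS:
        sentiment = "neutral"
    intent = data.get("intent", "chat")
    if intent not in INTENTS:
        intent = "chat"
    # Clamp the frustration score into the documented 0.0-1.0 range.
    frustration = min(1.0, max(0.0, float(data.get("frustration_score", 0.0))))
    return ConversationAnalysis(
        sentiment=sentiment,
        frustration_score=frustration,
        intent=intent,
        use_case=str(data.get("use_case", "")),
        is_feature_request=bool(data.get("is_feature_request", False)),
    )
```

Validating model output this way keeps out-of-range scores and unknown labels from ever reaching the analytics tables.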
Use Case Detection
Identifies what the user is trying to accomplish.
Feature Request Detection
Identifies when users are requesting new features or improvements.
Analysis Prompt
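The exact prompt is not reproduced in this page; a hedged sketch of how such an analysis prompt might be assembled from a transcript (the instruction wording and JSON field names are illustrative assumptions):

```python
def build_analysis_prompt(messages: list) -> str:
    """Assemble an analysis prompt from a conversation transcript.

    `messages` is a list of {"role": ..., "content": ...} dicts.
    """
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
    return (
        "Analyze the following conversation between a user and an AI agent.\n"
        "Return JSON with keys: sentiment (positive|neutral|negative|mixed), "
        "frustration_score (0.0-1.0), intent (question|task|complaint|"
        "feature_request|chat), use_case, is_feature_request.\n\n"
        f"Conversation:\n{transcript}"
    )
```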
The AI receives context and instructions for analysis.
Message Context
The analyzer includes context from previous messages.
RFM Engagement Scoring
Kortix uses RFM (Recency, Frequency, Monetary) analysis to measure user engagement.
RFM Dimensions
Recency: Days since last agent run (1-5 score)
- Score 5: Last activity ≤ 1 day ago
- Score 4: 1-3 days ago
- Score 3: 3-7 days ago
- Score 2: 7-14 days ago
- Score 1: > 14 days ago
Frequency: Total agent runs (1-5 score)
- Score 5: ≥ 20 runs
- Score 4: 10-19 runs
- Score 3: 5-9 runs
- Score 2: 2-4 runs
- Score 1: 0-1 runs
Monetary: Total conversations, used as a value proxy (1-5 score)
- Score 5: ≥ 100 conversations
- Score 4: 50-99 conversations
- Score 3: 20-49 conversations
- Score 2: 5-19 conversations
- Score 1: < 5 conversations
User Segments
Based on RFM scores, users are categorized:
- Champion: High recency + high frequency (R≥4, F≥4)
- Loyal: High overall RFM (sum ≥ 12)
- At Risk: Low recency, high frequency (R≤2, F≥4)
- Hibernating: Low recency + low frequency (R≤2, F≤2)
- New User: High recency, low frequency (R≥4, F≤2)
- Potential: Moderate engagement (sum 9-11)
- Needs Attention: Below average (sum 6-8)
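The segmentation rules above can be sketched as a simple rule chain. Since a user can match several conditions, evaluation order matters; the precedence used here is an assumption:

```python
def classify_segment(r: int, f: int, m: int) -> str:
    """Map 1-5 RFM scores to a user segment using the rules above."""
    total = r + f + m
    if r >= 4 and f >= 4:
        return "champion"
    if total >= 12:
        return "loyal"
    if r <= 2 and f >= 4:
        return "at_risk"
    if r <= 2 and f <= 2:
        return "hibernating"
    if r >= 4 and f <= 2:
        return "new_user"
    if 9 <= total <= 11:
        return "potential"
    # Sums of 6-8, plus any remaining low-score combinations, fall here.
    return "needs_attention"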
Calculate RFM Score
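A minimal sketch of the score calculation implied by the thresholds above, assuming frequency counts agent runs and the monetary proxy counts conversations; function names are illustrative, not the actual Kortix API:

```python
def recency_score(days_since_last_run: float) -> int:
    """1-5 score from days since the last agent run."""
    if days_since_last_run <= 1:
        return 5
    if days_since_last_run <= 3:
        return 4
    if days_since_last_run <= 7:
        return 3
    if days_since_last_run <= 14:
        return 2
    return 1

def frequency_score(agent_runs: int) -> int:
    """1-5 score from the total number of agent runs."""
    if agent_runs >= 20:
        return 5
    if agent_runs >= 10:
        return 4
    if agent_runs >= 5:
        return 3
    if agent_runs >= 2:
        return 2
    return 1

def monetary_score(conversations: int) -> int:
    """1-5 score from the total number of conversations."""
    if conversations >= 100:
        return 5
    if conversations >= 50:
        return 4
    if conversations >= 20:
        return 3
    if conversations >= 5:
        return 2
    return 1

def rfm_scores(days_since_last_run: float, agent_runs: int, conversations: int):
    """Return the (R, F, M) score tuple for one user."""
    return (
        recency_score(days_since_last_run),
        frequency_score(agent_runs),
        monetary_score(conversations),
    )
```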
Analysis Results
After analysis, results are stored in the database.
Queuing System
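A minimal sketch of non-blocking enqueueing using Python's standard-library queue; the real system presumably uses a persistent job queue, and the names here are illustrative:

```python
import queue

# Conversation ids waiting for analysis. A production system would use a
# durable queue (database table, Redis, etc.) rather than an in-process one.
analysis_queue = queue.Queue()

def on_agent_run_complete(thread_id: str) -> None:
    """Called at the end of an agent run; returns immediately.

    Only the conversation id is queued, so finishing the agent run is
    never blocked waiting for analysis to complete.
    """
    analysis_queue.put(thread_id)
```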
Conversations are queued for analysis to avoid blocking agent execution.
Background Worker
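A hedged sketch of a worker loop draining a job queue of conversation ids; the actual worker likely polls a database-backed queue, and `stop_after` exists here only to bound the loop for testing:

```python
import queue

def run_worker(jobs, analyze, stop_after=None):
    """Drain queued conversation ids and analyze each one.

    `jobs` is a queue.Queue of conversation ids, `analyze` is the
    analysis callable. Returns the number of jobs processed.
    """
    processed = 0
    while stop_after is None or processed < stop_after:
        try:
            thread_id = jobs.get(timeout=1.0)
        except queue.Empty:
            break
        try:
            analyze(thread_id)
        except Exception:
            # Log and continue: one failing conversation must not
            # stop the whole worker.
            pass
        finally:
            jobs.task_done()
        processed += 1
    return processed
```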
A separate worker process handles queued analysis jobs.
Use Case Clustering
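The clustering method is not specified here; one common approach is grouping use-case labels by similarity. A toy sketch using token overlap as a stand-in for real embedding similarity:

```python
def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity, a stand-in for embedding distance."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def cluster_use_cases(labels, threshold=0.5):
    """Greedy clustering: each label joins the first cluster whose
    representative (first member) is similar enough, else starts a
    new cluster."""
    clusters = []
    for label in labels:
        for cluster in clusters:
            if jaccard(label, cluster[0]) >= threshold:
                cluster.append(label)
                break
        else:
            clusters.append([label])
    return clusters
```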
The system can cluster similar use cases to identify patterns.
Best Practices
Monitor frustration trends
Track frustration scores over time to identify issues with agent performance. High frustration may indicate:
- Poorly configured agent instructions
- Missing tools or capabilities
- Complex tasks that need simplification
Review feature requests
Regularly check conversations flagged as feature requests. User feedback is valuable for improving your agents and platform.
Analyze use case distribution
Understanding what users actually do with your agents helps prioritize development and optimization efforts.
Track engagement metrics
Use RFM scores to identify:
- Champions worth engaging with for feedback
- At-risk users who may need help
- Hibernating users to re-engage
Privacy Considerations
- Analytics extracts insights, not raw conversation content
- Conversation text is not stored in analytics tables
- Users can opt out of analytics at the account level
- All data is encrypted at rest and in transit
Troubleshooting
Analytics Not Generated
- Check queue: Verify conversations are being queued
- Check worker: Ensure the analytics worker is running
- Review logs: Look for errors in the worker logs
- Verify settings: Ensure analytics is enabled globally
Inaccurate Sentiment
- Context matters: The AI needs enough conversation context
- Check message count: Very short conversations may not generate meaningful sentiment
- Review frustration signals: Compare AI assessment with actual conversation
Missing Use Cases
- Verify categorization: Check if the AI chose an existing category or created a new one
- Check “is_useful” flag: Casual conversations are marked as not useful
- Review categories: Ensure default categories match your use cases
API Reference
While analytics are primarily internal, you can access insights through:
- Database queries: Query the conversation_analytics table
- Custom endpoints: Build reporting endpoints as needed
- RFM API: Use the RFM calculation function in your code
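For example, a direct query over the analytics table might look like the following. This uses SQLite purely for illustration, and the column names are assumptions based on the fields described above, not the actual schema:

```python
import sqlite3

# Illustrative stand-in for the real database and schema.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE conversation_analytics ("
    "thread_id TEXT, sentiment TEXT, frustration_score REAL, intent TEXT)"
)
conn.execute(
    "INSERT INTO conversation_analytics "
    "VALUES ('t1', 'negative', 0.8, 'complaint')"
)
conn.execute(
    "INSERT INTO conversation_analytics "
    "VALUES ('t2', 'positive', 0.1, 'task')"
)

# Find highly frustrated conversations worth reviewing.
rows = conn.execute(
    "SELECT thread_id, frustration_score FROM conversation_analytics "
    "WHERE frustration_score >= 0.7 ORDER BY frustration_score DESC"
).fetchall()
```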