Overview
Interview sessions are the core of the platform’s practice experience. Each session is tracked in the `InterviewSession` model, which records questions, answers, scores, feedback, and performance metrics.
Session Types
The platform supports multiple session types, identified by the `session_type` field:
| Session Type | Description | Use Case |
|---|---|---|
| `mock` | General mock interviews | Behavioral and situational questions |
| `mock_resume` | Resume-based mock interviews | Questions derived from your specific experience |
| `technical` | Technical knowledge interviews | Deep-dive into technical concepts |
| `coding` | Data science coding challenges | Coding problems and algorithm questions |
| `hr` | HR screening interviews | Culture fit and soft skills |
| `agentic` | AI-powered adaptive interviews | Dynamic question generation based on performance |
| `debugging` | Debugging challenges | Code debugging and problem-solving |
Interview Session Model
Each session stores the questions asked, your answers, per-answer scores and feedback, and speech performance metrics.

Starting a Session

Choose session type

Select the type of interview you want to practice:

- Mock Interview (Resume-based): requires an uploaded resume; generates questions based on your experience.
- Technical Interview: deep-dives into technical concepts.
- Coding Interview: data science coding problems and algorithm questions.

Each endpoint returns a set of questions tailored to the session type.
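As a sketch of the behavior above, each session type can be thought of as dispatching to its own question generator. The generator functions and sample questions below are illustrative stubs, not the platform's actual endpoints:

```python
# Sketch: each session type maps to a generator that returns tailored
# questions. The type names come from the table above; the questions
# themselves are made-up placeholders.
QUESTION_GENERATORS = {
    "mock": lambda: ["Tell me about a time you handled conflict on a team."],
    "mock_resume": lambda: ["Walk me through a project listed on your resume."],
    "technical": lambda: ["Explain the bias-variance tradeoff."],
    "coding": lambda: ["Write a function that deduplicates a list in O(n)."],
}

def start_session(session_type: str) -> list[str]:
    """Return a set of questions tailored to the session type."""
    try:
        generate = QUESTION_GENERATORS[session_type]
    except KeyError:
        raise ValueError(f"unsupported session_type: {session_type!r}")
    return generate()
```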
Initialize session
When you start a live interview (with voice), the system:
- Creates a new InterviewSession record
- Initializes speech tracking metrics
- Starts AssemblyAI streaming (or mock streamer)
- Begins recording session start time
Session initialization details
The system creates session tracking objects:
- Session Key: `{user_id}_{session_type}`
- Speech Metrics: tracks speaking time, pauses, fluency
- Running Statistics: Research-grade metrics for analysis
- Silence Detection: Monitors long pauses (>5 seconds)
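The tracking objects above could be initialized roughly as follows. This is a minimal sketch of the same idea, not the actual structure in app.py:1218-1301:

```python
import time

SILENCE_THRESHOLD_S = 5.0  # long pauses are flagged after 5 seconds (see above)

def init_session_tracking(user_id: str, session_type: str) -> dict:
    """Sketch: create the per-session tracking objects described above."""
    return {
        "session_key": f"{user_id}_{session_type}",
        "started_at": time.time(),          # session start time
        "speech_metrics": {"speaking_time": 0.0, "pauses": [], "fluency": None},
        "running_stats": [],                # research-grade metrics accumulate here
        "silence_threshold": SILENCE_THRESHOLD_S,
    }
```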
app.py:1218-1301

During the Session
Answering Questions
The interview flow depends on the session type:

- Live Voice Interview
- Text-based Interview
- Coding Session
Real-time Speech Processing:

- Speak your answer: audio is streamed to AssemblyAI
- Transcription: your speech is converted to text in real time
- Metrics tracking: speaking time, pauses, and fluency are monitored
- Question completion: the system detects when you finish answering

The system tracks:

- Speech start/end times
- Silence periods
- Total words spoken
- Response latency
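Silence periods and long-pause counts can be derived from the per-utterance start/end times above. A minimal sketch, assuming speech segments arrive as `(start, end)` timestamps in seconds:

```python
LONG_PAUSE_S = 5.0  # the >5 second threshold mentioned earlier

def silence_periods(segments: list[tuple[float, float]]) -> list[float]:
    """Gaps between consecutive (start, end) speech segments, in seconds."""
    return [nxt_start - prev_end
            for (_, prev_end), (nxt_start, _) in zip(segments, segments[1:])]

def long_pause_count(segments: list[tuple[float, float]]) -> int:
    """How many silence periods exceed the long-pause threshold."""
    return sum(1 for gap in silence_periods(segments) if gap > LONG_PAUSE_S)
```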
Real-time Feedback
During the session, you receive:

Immediate Evaluation:

- Semantic similarity score (0-1)
- Keyword coverage percentage
- Specific improvement suggestions
- Concept mastery updates

Speech Metrics:

- Speaking ratio (speaking time / total time)
- Words per minute (WPM)
- Long pause count
- Fluency score
- Average response latency
app.py:1000-1179
Running Statistics
The `RunningStatistics` class tracks research-grade metrics incrementally over the session.
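The actual fields of `RunningStatistics` are not shown here; a class like this typically follows the standard Welford pattern for incremental mean and variance, sketched below as an assumption:

```python
class RunningStatistics:
    """Sketch of an incremental (Welford) mean/variance tracker."""

    def __init__(self) -> None:
        self.n = 0
        self.mean = 0.0
        self._m2 = 0.0  # running sum of squared deviations

    def update(self, x: float) -> None:
        """Fold one new observation into the running statistics."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self._m2 += delta * (x - self.mean)

    @property
    def variance(self) -> float:
        """Sample variance; 0.0 until at least two observations exist."""
        return self._m2 / (self.n - 1) if self.n > 1 else 0.0
```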
Completing the Session
Compute final metrics
The system performs comprehensive analysis:

Research-grade Metrics:
- Session duration (total time)
- Effective duration (excluding forced silence)
- Speaking vs. silence ratio
- Words per minute (WPM)
- Average semantic similarity
- Fluency score (0-1)
- Long pause analysis
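The metrics above can be computed from the session totals. A minimal sketch; the fluency formula here (penalizing long pauses) is an assumption, and the real `compute_research_metrics()` may weight things differently:

```python
def research_metrics(duration_s: float, speaking_s: float,
                     words: int, long_pauses: int) -> dict:
    """Sketch of the session-level metrics listed above."""
    minutes = speaking_s / 60 if speaking_s else 1e-9
    return {
        "speaking_ratio": speaking_s / duration_s,   # speaking vs. total time
        "wpm": words / minutes,                      # words per spoken minute
        "fluency": max(0.0, 1.0 - 0.1 * long_pauses),  # assumed penalty rule
    }
```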
compute_research_metrics() in app.py:1053-1055

Reviewing Sessions
View Session History
Retrieve your past interview sessions:

- `type`: session type (mock, technical, coding, etc.)
- Results are automatically sorted by most recent first
app.py:3170-3186
Session Details
Get full details of a specific session, including:

- Complete list of questions
- Your answers and transcripts
- Detailed feedback
- All speech metrics
- Concept mastery updates
Performance Analytics
Session data feeds into `UserMastery` updates:

- Mastery level per topic
- Weak vs. strong concepts
- Learning velocity
- Difficulty progression
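One common way such per-topic mastery updates work is an exponential moving average over new scores; the update rule, `alpha`, and the weakness threshold below are assumptions for illustration, not the platform's actual `UserMastery` logic:

```python
def update_mastery(mastery: dict, topic: str, score: float,
                   alpha: float = 0.3) -> dict:
    """Sketch: fold a new 0-1 score for a topic into the mastery map (EMA)."""
    prev = mastery.get(topic, 0.0)
    mastery[topic] = (1 - alpha) * prev + alpha * score
    return mastery

def weak_concepts(mastery: dict, threshold: float = 0.5) -> list[str]:
    """Topics whose mastery falls below the (assumed) weakness threshold."""
    return sorted(t for t, v in mastery.items() if v < threshold)
```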
Session Best Practices
Next Steps
Track Your Progress
View mastery levels and learning analytics
Interview Endpoints
Explore all interview endpoints
Resume Analysis
Generate personalized questions from your resume
Technical Q&A
Deep-dive into technical concept practice