
Overview

Interview sessions are the core of the platform’s practice experience. Each session is tracked in the InterviewSession model, which records questions, answers, scores, feedback, and performance metrics.

Session Types

The platform supports multiple session types identified by the session_type field:
| Session Type | Description | Use Case |
| --- | --- | --- |
| mock | General mock interviews | Behavioral and situational questions |
| mock_resume | Resume-based mock interviews | Questions derived from your specific experience |
| technical | Technical knowledge interviews | Deep-dive into technical concepts |
| coding | Data science coding challenges | Coding problems and algorithm questions |
| hr | HR screening interviews | Culture fit and soft skills |
| agentic | AI-powered adaptive interviews | Dynamic question generation based on performance |
| debugging | Debugging challenges | Code debugging and problem-solving |
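As a sketch, the documented values could be validated with a small helper before creating a session (the `VALID_SESSION_TYPES` name and the helper are illustrative, not part of the codebase; only the values come from the table above):

```python
# The session_type values documented above.
VALID_SESSION_TYPES = {
    "mock", "mock_resume", "technical", "coding", "hr", "agentic", "debugging",
}

def is_valid_session_type(session_type: str) -> bool:
    """Return True if the value is one of the documented session types."""
    return session_type in VALID_SESSION_TYPES
```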

Interview Session Model

Each session stores the following data:
class InterviewSession(db.Model):
    id                # Unique session identifier
    user_id           # Foreign key to User
    session_type      # Type of interview (see table above)
    questions         # JSON array of questions asked
    score             # Overall score (0-100)
    feedback          # Detailed feedback text
    duration          # Session duration in seconds
    created_at        # Session start timestamp
    completed_at      # Session completion timestamp
    speech_metrics    # JSON with research-grade speech analysis
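To illustrate how these fields are consumed, a plain dataclass stand-in (not the actual ORM model) can shape a stored row into the summary form returned by the session-history endpoint later on this page:

```python
from dataclasses import dataclass
from datetime import datetime
import json

@dataclass
class SessionRecord:
    """Plain stand-in for InterviewSession, for illustration only
    (feedback, user_id, and speech_metrics are omitted here)."""
    id: int
    session_type: str
    questions: str          # JSON-encoded list of questions
    score: float
    duration: int
    created_at: datetime
    completed_at: datetime

    def to_summary(self) -> dict:
        """Shape a session the way the history endpoint lists it."""
        return {
            "id": self.id,
            "session_type": self.session_type,
            "score": self.score,
            "duration": self.duration,
            "created_at": self.created_at.isoformat(),
            "completed_at": self.completed_at.isoformat(),
            "questions_count": len(json.loads(self.questions)),
        }
```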

Starting a Session

1

Choose session type

Select the type of interview you want to practice.

Mock Interview (Resume-based):
POST /api/mock_interview/questions
Authorization: Bearer <your-token>
{
  "question_count": 5,
  "difficulty": "medium"
}
Requires an uploaded resume; questions are generated from your experience.

Technical Interview:
POST /api/technical/questions
Authorization: Bearer <your-token>
{
  "topic": "machine_learning",
  "count": 8
}
Coding Interview:
POST /api/coding/questions
{
  "count": 3,
  "difficulty": "medium"
}
Each endpoint returns a set of questions tailored to the session type.
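The endpoint and request body for each session type can be assembled by a small client-side helper. This is an illustrative sketch; the function name and default values are assumptions, while the paths and fields come from the examples above:

```python
def build_question_request(session_type: str, **options) -> tuple[str, dict]:
    """Map a session type to its questions endpoint and JSON body."""
    if session_type == "mock_resume":
        return "/api/mock_interview/questions", {
            "question_count": options.get("question_count", 5),
            "difficulty": options.get("difficulty", "medium"),
        }
    if session_type == "technical":
        return "/api/technical/questions", {
            "topic": options["topic"],
            "count": options.get("count", 8),
        }
    if session_type == "coding":
        return "/api/coding/questions", {
            "count": options.get("count", 3),
            "difficulty": options.get("difficulty", "medium"),
        }
    raise ValueError(f"No question endpoint documented for {session_type!r}")
```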
2

Initialize session

When you start a live interview (with voice), the system:
  1. Creates a new InterviewSession record
  2. Initializes speech tracking metrics
  3. Starts AssemblyAI streaming (or mock streamer)
  4. Begins recording session start time
WebSocket Event:
socket.emit('start_interview', {
  user_id: <your-user-id>,
  session_type: 'technical',
  topic: 'python'
});
Response:
{
  "status": "started",
  "message": "Interview started successfully",
  "use_mock": false,
  "session_id": 123
}
3

Session initialization details

The system creates session tracking objects:
  • Session Key: {user_id}_{session_type}
  • Speech Metrics: Tracks speaking time, pauses, fluency
  • Running Statistics: Research-grade metrics for analysis
  • Silence Detection: Monitors long pauses (>5 seconds)
Located in app.py:1218-1301
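The session key format can be sketched as a one-line helper (the helper and constant names are illustrative; the `{user_id}_{session_type}` format and the 5-second threshold come from the list above):

```python
# Pauses longer than this are flagged by silence detection.
LONG_PAUSE_THRESHOLD_S = 5.0

def session_key(user_id: int, session_type: str) -> str:
    """Build the tracking key described above: "{user_id}_{session_type}"."""
    return f"{user_id}_{session_type}"
```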

During the Session

Answering Questions

The interview flow depends on the session type:
Real-time Speech Processing:
  1. Speak your answer: Audio is streamed to AssemblyAI
  2. Transcription: Your speech is converted to text in real-time
  3. Metrics tracking: Speaking time, pauses, and fluency monitored
  4. Question completion: System detects when you finish answering
WebSocket Events:
// Receive transcription
socket.on('transcription', (data) => {
  console.log('Partial:', data.text);
});

// Final transcript
socket.on('final_transcription', (data) => {
  console.log('Final:', data.text);
});
The system tracks:
  • Speech start/end times
  • Silence periods
  • Total words spoken
  • Response latency
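The tracked quantities above can be derived from timestamped speech segments. A minimal sketch, assuming segments arrive as `(start, end)` pairs in seconds (the function name and input shape are illustrative):

```python
def analyze_segments(segments, long_pause_threshold=5.0):
    """Given (start, end) speech segments in seconds, derive speaking time,
    silence between segments, and the number of long pauses."""
    speaking = sum(end - start for start, end in segments)
    gaps = [nxt[0] - cur[1] for cur, nxt in zip(segments, segments[1:])]
    silence = sum(g for g in gaps if g > 0)
    long_pauses = sum(1 for g in gaps if g > long_pause_threshold)
    return {
        "speaking_time": speaking,
        "silence_time": silence,
        "long_pause_count": long_pauses,
    }
```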

Real-time Feedback

During the session, you receive:

Immediate Evaluation:
  • Semantic similarity score (0-1)
  • Keyword coverage percentage
  • Specific improvement suggestions
  • Concept mastery updates
Speech Metrics (live interviews):
  • Speaking ratio (speaking time / total time)
  • Words per minute (WPM)
  • Long pause count
  • Fluency score
  • Average response latency
Located in app.py:1000-1179

Running Statistics

The RunningStatistics class tracks:
class RunningStatistics:
    session_start           # Session start timestamp
    questions_answered      # Number of questions completed
    total_words            # Total words spoken
    speaking_time          # Cumulative speaking time
    silence_time           # Cumulative silence time
    long_pause_count       # Pauses longer than 5 seconds
    forced_silence_time    # Time waiting for next question
    response_latencies     # List of per-answer response latencies
    semantic_similarities  # List of similarity scores per answer
These metrics are computed at the end for the final analysis.
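A runnable stand-in for those counters might look like this (the class and method names beyond the fields listed above are illustrative):

```python
import time

class RunningStats:
    """Minimal sketch of the running counters described above."""
    def __init__(self):
        self.session_start = time.time()
        self.questions_answered = 0
        self.total_words = 0
        self.speaking_time = 0.0
        self.silence_time = 0.0
        self.long_pause_count = 0
        self.semantic_similarities = []

    def record_answer(self, words: int, speaking_s: float, similarity: float):
        """Fold one completed answer into the running totals."""
        self.questions_answered += 1
        self.total_words += words
        self.speaking_time += speaking_s
        self.semantic_similarities.append(similarity)
```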

Completing the Session

1

Stop the interview

End your session when you’ve completed all questions.

WebSocket:
socket.emit('stop_interview', {
  user_id: <your-user-id>
});
HTTP Fallback:
POST /api/stop_interview
Authorization: Bearer <your-token>
{
  "user_id": <your-user-id>
}
2

Compute final metrics

The system performs comprehensive analysis.

Research-grade Metrics:
  • Session duration (total time)
  • Effective duration (excluding forced silence)
  • Speaking vs. silence ratio
  • Words per minute (WPM)
  • Average semantic similarity
  • Fluency score (0-1)
  • Long pause analysis
Function: compute_research_metrics() in app.py:1053-1055

Example Output:
{
  "session_duration": 600.0,
  "effective_duration": 580.0,
  "speaking_time": 420.0,
  "silence_time": 160.0,
  "speaking_ratio": 0.724,
  "total_words": 850,
  "wpm": 121.4,
  "long_pause_count": 3,
  "avg_response_latency": 2.3,
  "fluency_score": 0.867,
  "questions_answered": 8,
  "avg_semantic_similarity": 0.78
}
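The duration-derived fields of the example output can be reproduced from the raw counters. A sketch, assuming speaking ratio is computed against effective duration and WPM against speaking time (both assumptions are consistent with the numbers above); the fluency formula is internal to the platform and not reproduced here:

```python
def derive_metrics(speaking_time, forced_silence_time, session_duration,
                   total_words, similarities):
    """Recompute the duration-derived fields of the final metrics."""
    effective = session_duration - forced_silence_time
    return {
        "effective_duration": effective,
        "speaking_ratio": round(speaking_time / effective, 3),
        "wpm": round(total_words / (speaking_time / 60.0), 1),
        "avg_semantic_similarity": round(sum(similarities) / len(similarities), 2),
    }
```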
3

Save to database

The completed session is saved with:
session = InterviewSession(
    user_id=user_id,
    session_type='technical',
    questions=json.dumps(questions_list),
    score=avg_semantic_similarity * 100,
    duration=int(session_duration),
    completed_at=datetime.utcnow(),
    speech_metrics=json.dumps(metrics)
)
Located in app.py:1158-1165
4

Generate session summary

For mock interviews, request an overall summary:
POST /api/mock_interview/session_summary
Authorization: Bearer <your-token>
{
  "answers": [
    {
      "question": "...",
      "answer": "...",
      "evaluation": {...}
    }
  ]
}
Returns comprehensive feedback on:
  • Overall performance
  • Strongest areas
  • Areas for improvement
  • Specific recommendations
  • Weakest concepts to practice
The summary is saved to the session’s feedback field.
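The answers array can be assembled from per-question results collected during the session. A hypothetical helper (the function name and input shape are assumptions; the payload fields come from the request example above):

```python
def build_summary_payload(results):
    """Assemble the request body for /api/mock_interview/session_summary."""
    return {"answers": [
        {
            "question": r["question"],
            "answer": r["answer"],
            "evaluation": r.get("evaluation", {}),
        }
        for r in results
    ]}
```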

Reviewing Sessions

View Session History

Retrieve your past interview sessions:
GET /api/sessions?type=technical
Authorization: Bearer <your-token>
Response:
{
  "sessions": [
    {
      "id": 123,
      "session_type": "technical",
      "score": 78.5,
      "duration": 600,
      "created_at": "2026-03-03T14:30:00",
      "completed_at": "2026-03-03T14:40:00",
      "questions_count": 8
    }
  ]
}
Filter by:
  • type: Session type (mock, technical, coding, etc.)
Results are automatically sorted with the most recent session first.
Located in app.py:3170-3186
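Client-side, the same filtering and ordering can be mirrored over a fetched session list (the helper name is illustrative; ISO-8601 timestamps like those in the response sort correctly as plain strings):

```python
def filter_sessions(sessions, session_type=None):
    """Filter a session list by type and sort newest first,
    mirroring the /api/sessions endpoint behavior."""
    if session_type:
        sessions = [s for s in sessions if s["session_type"] == session_type]
    return sorted(sessions, key=lambda s: s["created_at"], reverse=True)
```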

Session Details

Get full details of a specific session including:
  • Complete list of questions
  • Your answers and transcripts
  • Detailed feedback
  • All speech metrics
  • Concept mastery updates

Performance Analytics

Session data feeds into: UserMastery Updates:
  • Mastery level per topic
  • Weak vs. strong concepts
  • Learning velocity
  • Difficulty progression
See Performance Tracking for detailed analytics.

Session Best Practices

For best results:
  • Upload your resume before starting resume-based sessions
  • Ensure quiet environment for voice interviews
  • Answer thoughtfully rather than rushing
  • Review feedback immediately after completing
  • Practice weak areas identified in session summaries
Sessions must have at least one answered question to generate valid metrics. The system validates with metrics.get('questions_answered', 0) > 0 before analysis.
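That guard can be expressed as a one-line predicate (the wrapper function is illustrative; the check itself is quoted from the text above):

```python
def has_valid_metrics(metrics: dict) -> bool:
    """True when at least one question was answered, per the validation above."""
    return metrics.get('questions_answered', 0) > 0
```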

Next Steps

Track Your Progress

View mastery levels and learning analytics

Interview Endpoints

Explore all interview endpoints

Resume Analysis

Generate personalized questions from your resume

Technical Q&A

Deep-dive into technical concept practice
