Overview
Mock Interviews simulate real technical interview experiences with AI-generated questions tailored to your resume and target job description. Get personalized questions, real-time evaluation, and comprehensive feedback to improve your interview performance.

Starting an Interview Session
Select Interview Type
Choose from technical, behavioral, project-based, or mixed interview formats. Each format focuses on different aspects of the interview process.
Configure Session Parameters
- Number of Questions: Typically 5-10 questions per session
- Difficulty Mix: System automatically balances easy, medium, and hard questions
- Focus Areas: Optional targeting (e.g., system design, coding, behavioral)
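As a rough sketch, these parameters might map to a configuration object like the following; the names are illustrative, not the platform's actual API:

```ts
// Hypothetical shape of the session parameters above; names are assumptions.
interface SessionConfig {
  interviewType: "technical" | "behavioral" | "project-based" | "mixed";
  numQuestions: number;    // typically 5-10
  focusAreas?: string[];   // optional, e.g. ["system design", "coding"]
  // the difficulty mix is balanced automatically, so it is not set here
}

const config: SessionConfig = {
  interviewType: "mixed",
  numQuestions: 8,
  focusAreas: ["system design"],
};
```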
Start Recording
Click “Start Interview” to begin. For audio sessions, grant microphone permissions for real-time speech-to-text.
Answer Questions
Respond to each question either via text input or voice recording. Take your time to provide thoughtful, structured answers.
Question Generation
Personalized Question Creation
The platform uses advanced AI (Mistral Large) to generate questions specifically for you.

What Makes Questions Personalized
Question Generation Inputs:
- Resume Context: Your top skills, projects, and experience (up to 2,500 chars)
- Skills List: Technologies extracted from your resume (up to 25 skills)
- Experience Level: Years of experience to calibrate difficulty
- Job Description: Target role requirements (up to 2,500 chars)
- Variation Seed: Ensures unique questions across multiple sessions
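To make the caps above concrete, here is a minimal sketch of how the inputs might be assembled; the interface and function names are assumptions, not the platform's real request shape:

```ts
// Illustrative assembly of the generation inputs, with the limits listed above.
interface QuestionGenInputs {
  resumeContext: string;   // top skills, projects, experience
  skills: string[];        // technologies extracted from the resume
  yearsExperience: number; // calibrates difficulty
  jobDescription: string;  // target role requirements
  variationSeed: number;   // keeps repeat sessions unique
}

function buildGenInputs(
  resume: string, skills: string[], years: number, jd: string
): QuestionGenInputs {
  return {
    resumeContext: resume.slice(0, 2500),  // capped at 2,500 chars
    skills: skills.slice(0, 25),           // capped at 25 skills
    yearsExperience: years,
    jobDescription: jd.slice(0, 2500),     // capped at 2,500 chars
    variationSeed: Math.floor(Math.random() * 1_000_000_000),
  };
}
```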
Question Types
Each question is categorized by type:
- Technical: Deep-dive into technologies and concepts
  - Example: “Explain your approach to database query optimization in your current stack”
- Behavioral: STAR method scenarios and soft skills
  - Example: “Tell me about a time you had to resolve a critical production issue under pressure”
- Project-Based: Deep dives into your actual work
  - Example: “Walk me through the architecture decisions you made in your [specific project]”
- Situational: Hypothetical problem-solving
  - Example: “How would you handle a disagreement with a senior engineer about technical approach?”
- System Design: Large-scale architecture challenges
  - Example: “Design a URL shortening service handling 1 billion requests/day”
Difficulty Progression
Questions follow a strategic difficulty curve:
- First 1/3: Easy questions to build confidence
- Middle 1/3: Medium difficulty for core assessment
- Final 1/3: Hard questions to test advanced knowledge
The system generates 4-6 expected keywords for each question: specific technical and conceptual terms that a strong answer should include.
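A sketch of how the thirds-based curve could be computed (illustrative only, not the platform's actual code):

```ts
type Difficulty = "easy" | "medium" | "hard";

// Assigns difficulties so the first third of questions is easy, the middle
// third medium, and the final third hard, matching the curve described above.
function difficultyCurve(totalQuestions: number): Difficulty[] {
  return Array.from({ length: totalQuestions }, (_, i) => {
    const position = i / totalQuestions;
    if (position < 1 / 3) return "easy";
    if (position < 2 / 3) return "medium";
    return "hard";
  });
}

// difficultyCurve(6) -> ["easy", "easy", "medium", "medium", "hard", "hard"]
```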
Answer Evaluation
Scoring Dimensions
Every answer is evaluated across four dimensions:

| Dimension | Points | What It Measures |
|---|---|---|
| Relevance & Completeness | 0-30 | Does it directly answer the question? |
| Technical Accuracy | 0-40 | Are concepts correct and detailed? |
| Structure & Clarity | 0-20 | Is it well-organized (e.g., STAR method)? |
| Keyword Coverage | 0-10 | Which expected concepts were mentioned? |
Grading Scale
- A (85-100): Excellent - Interview-ready answer
- B (70-84): Good - Minor improvements needed
- C (55-69): Average - Significant gaps to address
- D (40-54): Below Average - Major rework required
- F (<40): Poor - Fundamental concepts missing
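The four dimension scores sum to a 0-100 total that maps directly onto this scale; a minimal sketch of that arithmetic:

```ts
interface DimensionScores {
  relevance: number; // 0-30
  accuracy: number;  // 0-40
  structure: number; // 0-20
  keywords: number;  // 0-10
}

// Total is the sum of the four dimensions (max 100), then mapped to a grade.
function totalAndGrade(s: DimensionScores): { total: number; grade: string } {
  const total = s.relevance + s.accuracy + s.structure + s.keywords;
  const grade =
    total >= 85 ? "A" :
    total >= 70 ? "B" :
    total >= 55 ? "C" :
    total >= 40 ? "D" : "F";
  return { total, grade };
}

// totalAndGrade({ relevance: 27, accuracy: 34, structure: 18, keywords: 9 })
// -> { total: 88, grade: "A" }
```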
Detailed Feedback
For each answer, you receive:
1. Strengths
Specific things you said well, with direct quotes from your answer:
- “You correctly identified the use of mutex locks and explained the difference from semaphores”
- “Strong use of quantified metrics: ‘40% reduction in query time’”
2. Improvements
Specific missing concepts with explanations:
- “Missing mention of database indexing strategies. You should have discussed B-tree vs Hash indexes”
- “Answer lacked the ‘Result’ component of STAR. Always quantify outcomes”
3. Model Answer
A complete 200-250 word ideal answer covering all key concepts:
- Written as if by an expert candidate
- Includes concrete examples and metrics
- Demonstrates proper structure (STAR for behavioral, requirements→design→trade-offs for system design)
4. Keyword Analysis
- Covered Keywords: Concepts you successfully mentioned
- Missing Keywords: Expected terms you should have included
- Helps identify knowledge gaps
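As a simplified illustration of the covered/missing split (the real evaluation is LLM-based and tolerant of paraphrasing, unlike this exact-match sketch):

```ts
// Naive keyword matcher: case-insensitive substring check of each expected
// keyword against the answer text. Only a sketch of the idea.
function keywordCoverage(answer: string, expected: string[]) {
  const text = answer.toLowerCase();
  const covered = expected.filter(k => text.includes(k.toLowerCase()));
  const missing = expected.filter(k => !text.includes(k.toLowerCase()));
  return { covered, missing };
}

// keywordCoverage("We used Redis caching", ["caching", "sharding"])
// -> { covered: ["caching"], missing: ["sharding"] }
```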
Session Summary
After completing all questions, you receive comprehensive session analytics.

Aggregate Metrics
- Average Score: Overall performance across all questions
- Overall Grade: A-F grade based on average score
- Completion Rate: Percentage of questions answered
- Grade Distribution: Count of A, B, C, D, F responses
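A minimal sketch of how these aggregates could be computed from per-answer results (illustrative, not the platform's actual code):

```ts
interface EvaluatedAnswer { score: number; grade: "A" | "B" | "C" | "D" | "F"; }

// Session-level rollup of the metrics listed above.
function sessionSummary(answers: EvaluatedAnswer[], totalQuestions: number) {
  const gradeDistribution = { A: 0, B: 0, C: 0, D: 0, F: 0 };
  let sum = 0;
  for (const a of answers) {
    sum += a.score;
    gradeDistribution[a.grade]++;
  }
  return {
    averageScore: answers.length ? sum / answers.length : 0,
    completionRate: (answers.length / totalQuestions) * 100, // % answered
    gradeDistribution,
  };
}
```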
Performance by Question Type
Your average score is also broken down by question type (technical, behavioral, project-based, situational, system design), so you can see where you are strongest and weakest.
AI-Powered Insights
The system generates personalized narrative feedback:
1. Performance Narrative (3-4 sentences)
- References specific question types and scores
- Honest assessment: “Your behavioral questions scored 85% but system design answers averaged only 45%, indicating strong soft skills but gaps in architectural thinking.”
- Not generic advice - tied to YOUR answers
2. Identified Knowledge Gaps
- Example: “Lacks depth in distributed systems concepts like CAP theorem and eventual consistency”
3. Study Recommendations
- Specific resources and approaches
- Example: “Study database normalization forms (1NF-BCNF) using SQL practice on LeetCode”
4. Improvement Tips
- Customized to YOUR weaknesses
- Example: “When discussing projects, always include quantified business impact (% improvement, $ saved, users affected)”
5. Readiness Assessment
- Clear yes/no with the biggest blocker
- Example: “Not yet ready - system design skills need 2-3 months focused practice before senior-level interviews”
Real-Time Audio Streaming
Voice Response Feature
For a more realistic experience, use voice responses:

Record Your Answer
Click the microphone icon and speak naturally. The system uses speech-to-text to transcribe your response in real time.
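For reference, a minimal browser-side sketch of live transcription using the Web Speech API; whether the platform uses this browser API or streams audio to a server-side recognizer is an assumption here:

```ts
// Browser-side live transcription sketch using the Web Speech API.
// The platform's actual pipeline may differ; this only illustrates the
// real-time transcription idea.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;
const recognizer = new SpeechRecognitionImpl();

recognizer.continuous = true;      // keep listening for the whole answer
recognizer.interimResults = true;  // surface partial transcripts as you speak

recognizer.onresult = (event: any) => {
  const transcript = Array.from(event.results as ArrayLike<any>)
    .map(r => r[0].transcript)
    .join(" ");
  console.log(transcript); // in the app, this would update the answer field
};

recognizer.start(); // prompts for microphone permission on first use
```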
Voice responses help simulate real interview pressure and improve your verbal communication skills. Practice speaking clearly and concisely.
Best Practices
For Technical Questions
- State the core concept first, then elaborate
- Use specific terminology from the expected keywords
- Include trade-offs and alternative approaches
- Reference real-world experience when possible
For Behavioral Questions
- Always use STAR method: Situation, Task, Action, Result
- Include quantified outcomes: percentages, dollar amounts, user counts
- Be specific about YOUR role (not the team’s)
- Keep it concise: 2-3 minutes max
For System Design Questions
- Follow structure: Requirements → Design → Trade-offs → Scaling
- Start with clarifying questions
- Draw mental diagrams (or use the whiteboard feature)
- Discuss failure handling and monitoring
For Project-Based Questions
- Reference actual projects from your resume
- Explain technical decisions and their business impact
- Discuss challenges faced and how you overcame them
- Mention technologies used and why they were chosen
Example Workflow
Question 1 (Easy, Behavioral)
“Tell me about your most challenging project…”
- Your answer: 180 words, mentions specific technologies and 40% performance improvement
- Score: 88/100 (Grade A)
- Strength: “Excellent use of STAR with quantified results”
- Improvement: “Could elaborate more on trade-offs considered”
Question 5 (Hard, System Design)
“Design a URL shortening service…”
- Your answer: 95 words, mentions database but lacks caching/scaling
- Score: 52/100 (Grade D)
- Missing keywords: “load balancer”, “caching”, “sharding”, “rate limiting”
- Model answer provided: Complete solution with all components
Technical Details
AI Model
- Uses Mistral Large via API for question generation and evaluation
- Prompts engineered for FAANG-level interview standards
- Fallback questions available if API fails
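A sketch of the fallback flow; the function and constant names below are assumptions, not the platform's actual code:

```ts
// Hypothetical fallback: if the Mistral Large call fails, a static question
// bank keeps the session usable (generic, non-personalized questions).
declare function generateWithMistral(inputs: unknown): Promise<string[]>;

const FALLBACK_QUESTIONS = [
  "Tell me about a challenging project you worked on recently.",
  "How do you approach debugging a production issue?",
];

async function getQuestions(inputs: unknown): Promise<string[]> {
  try {
    return await generateWithMistral(inputs); // personalized path
  } catch {
    return FALLBACK_QUESTIONS;                // generic but functional
  }
}
```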
Evaluation Strictness
- Extremely short answers (five words or fewer) score ≤15/100
- No generic feedback - all strengths must quote specific things you said
- Model answers are standalone (200-250 words, all concepts covered)
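The short-answer rule could be enforced with a simple clamp like this (a sketch, not the actual evaluator):

```ts
// Sketch of the strictness rule above: answers of five words or fewer are
// clamped to a maximum of 15/100 regardless of the raw evaluation.
function applyStrictness(rawScore: number, answer: string): number {
  const wordCount = answer.trim().split(/\s+/).filter(Boolean).length;
  return wordCount <= 5 ? Math.min(rawScore, 15) : rawScore;
}
```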
Session Persistence
- All questions, answers, and evaluations stored
- Review past sessions anytime
- Track improvement over time
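One plausible shape for a stored session record; the field names are illustrative, not the actual schema:

```ts
// Hypothetical persisted session: questions, answers, evaluations, summary.
interface StoredSession {
  sessionId: string;
  startedAt: string;            // ISO timestamp
  questions: {
    text: string;
    answer: string;
    evaluation: { score: number; grade: string; feedback: string };
  }[];
  summary: { averageScore: number; overallGrade: string };
}
```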