Overview
The Resume Analysis feature automatically extracts key information from your resume and uses it to personalize your entire interview preparation experience. By understanding your skills, projects, and experience level, the platform generates targeted questions that match real interview scenarios.
How to Upload Your Resume
Navigate to the Resume Upload Section
From your dashboard, locate the resume upload area. The system accepts common document formats (PDF, DOCX, TXT).
Upload Your Resume File
Click the upload button and select your resume file. The platform will process it immediately and extract relevant information.
Add Job Description (Optional)
For even more targeted preparation, paste the job description of a role you’re applying for. This allows the system to align questions with the specific position requirements.
What Gets Extracted
When you upload your resume, the platform performs intelligent text processing:
Intelligent Chunking
- Your resume is split into 500-character chunks with 50-character overlap
- This ensures context is preserved across section boundaries
- Each chunk is stored with metadata (user ID, chunk position, source type)
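The chunking step above can be sketched in a few lines. This is a stand-alone illustration of a 500-character window with 50-character overlap and per-chunk metadata, not the platform's actual splitter (which, per the pipeline notes below, is a library text splitter); the field names are assumptions.

```python
# Sketch of the chunking step: 500-character windows with a
# 50-character overlap, each tagged with positional metadata.
# Field names (user_id, chunk_pos, source_type) are illustrative.

def chunk_resume(text, user_id, chunk_size=500, overlap=50):
    step = chunk_size - overlap  # 450-character stride between chunk starts
    chunks = []
    for pos, start in enumerate(range(0, max(len(text) - overlap, 1), step)):
        chunks.append({
            "user_id": user_id,
            "chunk_pos": pos,
            "source_type": "resume",
            "text": text[start:start + chunk_size],
        })
    return chunks
```

Because each chunk starts 450 characters after the previous one, the last 50 characters of one chunk are repeated at the start of the next, which is what preserves context across section boundaries.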
Vector Embeddings
- Creates semantic embeddings using the all-MiniLM-L6-v2 model
- Enables similarity-based matching between your experience and question topics
- Embeddings are normalized for accurate cosine similarity calculations
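Why normalization matters: once every vector is scaled to unit length, the inner product of two vectors equals their cosine similarity, so a fast dot-product search ranks results by cosine. A minimal demonstration with toy 2-d vectors:

```python
import math

# After normalizing to unit length, dot product == cosine similarity.
# This is why an inner-product index can be used for cosine ranking.

def normalize(vec):
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a = normalize([3.0, 4.0])
b = normalize([4.0, 3.0])
print(round(dot(a, b), 4))  # → 0.96, the cosine of the angle between them
```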
FAISS Index Creation
- Builds a personal FAISS vector index for fast retrieval
- Uses Inner Product similarity (cosine with normalized vectors)
- Stored separately per user at data/processed/resume_faiss/resume_index_{user_id}.faiss
Your resume data is processed locally and stored securely. Each user has a separate FAISS index that’s never shared with other users.
How It Personalizes Your Experience
1. Targeted Mock Interview Questions
When generating interview questions, the system:
- Searches your resume chunks for relevant experience (top 5 matches)
- References your actual projects and technologies in questions
- Matches question difficulty to your experience level
- Combines resume context with job description requirements
2. Job Description Alignment
When you provide a job description:
- Creates a separate embedding for the JD
- Stored at data/processed/resume_faiss/jd_embedding_{user_id}.npy
- Questions reference both your experience AND the target role requirements
- Helps identify skill gaps between your background and the position
3. Contextual Evaluation
During answer evaluation, your resume context is used to:
- Provide more relevant feedback based on your background
- Suggest improvements tied to your actual experience level
- Generate model answers that match your seniority
Example Workflow
System Processing
The platform creates 8 chunks, generates embeddings, and builds your FAISS index (384 dimensions).
Add Job Description
You paste a Senior Backend Engineer JD requiring “Python, microservices, database optimization”
Personalized Questions
Mock interview generates questions like:
- “Explain how you optimized database queries in your FastAPI projects”
- “Describe your experience with microservices. What challenges did you face?”
Behind the Scenes
Resume Processing Pipeline
- Text Splitting: Uses RecursiveCharacterTextSplitter with configurable chunk sizes
- Embedding Generation: Each chunk gets a 384-dimensional vector
- Index Construction: FAISS IndexFlatIP for cosine similarity
- Metadata Storage: Chunks stored with IDs, positions, and user associations
Semantic Search
- Query embedding is created for the question context
- FAISS performs fast similarity search across your resume chunks
- Top 5 most relevant chunks are retrieved with similarity scores
- Results inform question generation and expected answer keywords
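The retrieval step above can be sketched without FAISS: an exact inner-product index (FAISS IndexFlatIP) simply computes the dot product between the query and every stored vector and keeps the top k. The toy vectors below are 3-dimensional for readability; the real index uses 384 dimensions and k=5.

```python
# What an exact inner-product search computes over normalized vectors:
# score every chunk by dot product, return the k best with their scores.

def top_k(query, chunk_vectors, k=5):
    scores = [
        (sum(q * c for q, c in zip(query, vec)), idx)
        for idx, vec in enumerate(chunk_vectors)
    ]
    scores.sort(reverse=True)  # highest similarity first
    return [(idx, score) for score, idx in scores[:k]]

chunks = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.6, 0.8, 0.0]]
print(top_k([1.0, 0.0, 0.0], chunks, k=2))  # → [(0, 1.0), (2, 0.6)]
```

FAISS performs the same computation with optimized vectorized kernels, which is why sub-10ms latency is achievable even over many chunks.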
Managing Your Data
Updating Your Resume
- You can re-upload anytime to update your profile
- The system will recreate your FAISS index with the new information
- Previous index is overwritten automatically
Removing Resume Data
- Use the profile settings to delete your resume data
- This removes both the FAISS index and metadata files
- Job description embeddings are deleted separately if needed
Deleting resume data will make future questions less personalized, as the system won’t have context about your specific experience.
Technical Details
Storage Structure
Per user, the system maintains:
- resume_index_{user_id}.faiss - Vector index file
- resume_metas_{user_id}.json - Chunk metadata with text and positions
- jd_embedding_{user_id}.npy - Job description vector (if provided)
- jd_text_{user_id}.txt - Original JD text for reference
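The per-user layout above can be expressed as a small path helper. The base directory matches the documented location; the helper name and the example user ID "42" are hypothetical.

```python
from pathlib import Path

# Build the documented per-user artifact paths. The base directory is the
# one stated in the docs; the function name itself is illustrative.

def user_artifacts(user_id, base="data/processed/resume_faiss"):
    root = Path(base)
    return {
        "index": root / f"resume_index_{user_id}.faiss",
        "metadata": root / f"resume_metas_{user_id}.json",
        "jd_embedding": root / f"jd_embedding_{user_id}.npy",
        "jd_text": root / f"jd_text_{user_id}.txt",
    }

paths = user_artifacts("42")  # "42" is a hypothetical user ID
print(paths["index"].as_posix())  # → data/processed/resume_faiss/resume_index_42.faiss
```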
Performance
- Embedding generation: ~100-200ms per resume
- Search latency: <10ms for similarity queries
- Index size: ~4KB per 1000 words of resume text
Privacy & Security
- All resume data is stored with user-specific identifiers
- FAISS indices are isolated per user
- No cross-user data leakage possible
- You control when to upload, update, or delete your data