Overview
The resume endpoints handle document upload, text extraction, FAISS indexing, and semantic search for personalized interview preparation.
All endpoints require JWT authentication via the Authorization: Bearer <token> header.
Upload Resume
curl -X POST https://api.yourapp.com/api/upload_resume \
-H "Authorization: Bearer <token>" \
-F "resume=@/path/to/resume.pdf" \
-F "job_description=Senior Backend Engineer with Python and AWS experience"
Upload and process a resume file, with an optional job description for gap analysis.

Request Parameters
resume (form file, required): Resume file (PDF or DOCX, max 16MB)
job_description (form field, optional): Job description for semantic matching and gap analysis

Supported Formats
PDF (.pdf) - Extracted using PyPDF2
DOCX (.docx) - Extracted using python-docx

Response Fields
success / message: Upload and processing status
parsed_data: Extracted resume data, including skills (uppercase, cleaned), years of experience detected, projects with name and tech stack, internships, and certifications
faiss_indexed: Whether FAISS indexing completed
chunks_created: Number of text chunks created for semantic search
gap_analysis: Job fit analysis, returned only if job_description is provided
  match_percentage: Keyword match percentage (0-100)
  semantic_similarity: Semantic similarity score (0-1)
  matching_skills: Skills that match the job description
  missing_skills: Skills the job requires but the resume lacks
  experience_fit: Whether experience meets requirements
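The keyword side of the analysis can be sketched as a case-insensitive set intersection between resume skills and job-description skills. This is an illustrative reimplementation of match_percentage, matching_skills, and missing_skills, not the server's exact formula:

```python
def keyword_match(resume_skills, jd_skills):
    """Percentage of job-description skills present in the resume (0-100),
    plus the matching and missing skill lists, uppercased like the API output."""
    resume = {s.upper() for s in resume_skills}
    required = {s.upper() for s in jd_skills}
    if not required:
        return 0.0, [], []
    matching = sorted(resume & required)
    missing = sorted(required - resume)
    return round(100 * len(matching) / len(required), 1), matching, missing

pct, matching, missing = keyword_match(
    ["Python", "Flask", "AWS", "SQL"],
    ["Python", "AWS", "SQL", "Kubernetes"],
)
# pct == 75.0, missing == ["KUBERNETES"]
```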
Success Response
Error Response
{
  "success": true,
  "message": "Resume uploaded and processed successfully",
  "parsed_data": {
    "skills": ["PYTHON", "FLASK", "AWS", "DOCKER", "SQL"],
    "experience_years": 3,
    "projects": [
      {
        "name": "E-commerce Platform",
        "tech_stack": ["PYTHON", "DJANGO", "POSTGRESQL"]
      }
    ],
    "internships": ["Software Engineering Intern at TechCorp"],
    "certifications": ["AWS Certified Developer"]
  },
  "faiss_indexed": true,
  "chunks_created": 12,
  "gap_analysis": {
    "match_percentage": 78.5,
    "semantic_similarity": 0.842,
    "matching_skills": ["PYTHON", "AWS", "SQL"],
    "missing_skills": ["KUBERNETES", "MICROSERVICES"],
    "gap_severity": "Low",
    "experience_fit": "Good fit"
  }
}
Maximum file size is 16MB. Files must be PDF or DOCX format.
Resume Gap Analysis
curl -X POST https://api.yourapp.com/api/resume/gap-analysis \
-H "Authorization: Bearer <token>" \
-H "Content-Type: application/json" \
-d '{
"job_description": "Senior Python Developer with 5+ years experience in ML and cloud platforms"
}'
Analyze resume fit against a job description using keyword and semantic matching.
Request Parameters
job_description (string, required): Job description text to analyze against

Response Fields
gap_analysis: Detailed gap analysis results
{
  "success": true,
  "gap_analysis": {
    "match_percentage": 65.2,
    "semantic_similarity": 0.731,
    "matching_skills": ["PYTHON", "MACHINE LEARNING", "AWS"],
    "missing_skills": ["TENSORFLOW", "KUBERNETES", "MLOPS"],
    "gap_severity": "Medium",
    "experience_required": 5,
    "experience_fit": "May need more experience",
    "jd_skills_found": ["Python", "Machine Learning", "AWS", "TensorFlow", "Kubernetes"],
    "section_gaps": {
      "technical": ["TENSORFLOW", "KUBERNETES", "MLOPS"],
      "experience": [],
      "certifications": []
    }
  }
}
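The gap_severity label plausibly buckets the match percentage into a coarse rating. The cutoffs below (70 and 40) are illustrative assumptions chosen to be consistent with the example payloads on this page (78.5 → "Low", 65.2 → "Medium"), not documented API behavior:

```python
def gap_severity(match_percentage):
    """Bucket a 0-100 match percentage into a severity label.
    The 70/40 cutoffs are illustrative guesses, not the API's documented values."""
    if match_percentage >= 70:
        return "Low"
    if match_percentage >= 40:
        return "Medium"
    return "High"
```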
Get Resume Analysis
curl -X GET https://api.yourapp.com/api/resume/analysis \
-H "Authorization: Bearer <token>"
Retrieve previously parsed resume data.
resume_data: Parsed resume information (same structure as the upload response)
{
  "success": true,
  "resume_data": {
    "skills": ["PYTHON", "JAVASCRIPT", "REACT"],
    "experience_years": 4,
    "projects": [ ... ],
    "internships": [ ... ],
    "certifications": [ ... ]
  }
}
Get Resume Chunks
curl -X GET https://api.yourapp.com/api/resume/chunks \
-H "Authorization: Bearer <token>"
Retrieve all FAISS-indexed resume text chunks.
{
  "success": true,
  "chunks": [
    {
      "id": "resume_chunk_123_0",
      "chunk_id": 0,
      "user_id": 123,
      "text": "Senior Software Engineer with 5 years of experience in Python and cloud platforms...",
      "chunk_size": 245
    },
    {
      "id": "resume_chunk_123_1",
      "chunk_id": 1,
      "user_id": 123,
      "text": "Skills: Python, Flask, AWS, Docker, PostgreSQL, React, Node.js...",
      "chunk_size": 187
    }
  ]
}
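The chunk records above follow an id pattern of resume_chunk_{user_id}_{chunk_id}. A minimal sketch of how such metadata could be assembled from a list of text chunks (the helper name is hypothetical; the record shape mirrors the response):

```python
def chunk_metadata(user_id, chunks):
    """Build per-chunk metadata records matching the response shape above.
    The id format resume_chunk_{user_id}_{chunk_id} follows the example payload."""
    return [
        {
            "id": f"resume_chunk_{user_id}_{i}",
            "chunk_id": i,
            "user_id": user_id,
            "text": text,
            "chunk_size": len(text),
        }
        for i, text in enumerate(chunks)
    ]
```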
Resume-Based Questions
curl -X POST https://api.yourapp.com/api/resume_based_questions \
-H "Authorization: Bearer <token>" \
-H "Content-Type: application/json" \
-d '{
"job_description": "Full Stack Developer position",
"question_count": 5,
"variation_seed": "focus_projects"
}'
Generate interview questions tailored to resume content using FAISS semantic search.
Request Parameters
job_description (string): Job description for context
question_count (integer): Number of questions to generate
variation_seed (string): Seed for question variation

Response Fields
resume_chunks_found: Number of relevant resume chunks used
{
  "success": true,
  "questions": [
    {
      "question": "Tell me about your e-commerce platform project and the technical challenges you faced",
      "type": "project-based"
    },
    {
      "question": "How have you used Docker in production environments?",
      "type": "technical"
    }
  ],
  "resume_chunks_found": 5
}
FAISS Processing Details
Text Chunking Strategy
Chunk Size : 500 characters
Overlap : 50 characters
Separators : \n\n, \n, space
Embedding Model : all-MiniLM-L6-v2 (SentenceTransformer)
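The chunking parameters above (500-character chunks, 50-character overlap, separator preference) can be sketched as a greedy character splitter. This is a simplified stand-in for the service's actual splitter, not its exact implementation:

```python
def chunk_text(text, chunk_size=500, overlap=50, separators=("\n\n", "\n", " ")):
    """Greedy character chunker: cut each chunk at the strongest separator
    found inside the window, then restart `overlap` characters before the cut."""
    chunks, start = [], 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        if end < len(text):
            # prefer paragraph breaks, then line breaks, then spaces
            for sep in separators:
                cut = text.rfind(sep, start, end)
                if cut > start:
                    end = cut + len(sep)
                    break
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = max(end - overlap, start + 1)  # always make forward progress
    return chunks
```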
Index Configuration
Index Type : FAISS IndexFlatIP (Inner Product for cosine similarity)
Normalization : L2-normalized embeddings
Dimension : 384 (model output)
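With L2-normalized vectors, inner product equals cosine similarity, which is why IndexFlatIP plus normalization yields cosine search. A dependency-free toy illustration of that retrieval step (the real index stores 384-dimensional all-MiniLM-L6-v2 embeddings and uses FAISS, not Python lists):

```python
import math

def normalize(v):
    """L2-normalize a vector so inner product becomes cosine similarity."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def top_k(query, index, k=2):
    """Rank stored vectors by inner product with the normalized query,
    mirroring an IndexFlatIP search over L2-normalized embeddings."""
    q = normalize(query)
    scores = [
        (sum(a * b for a, b in zip(q, normalize(v))), i)
        for i, v in enumerate(index)
    ]
    return [i for _, i in sorted(scores, reverse=True)[:k]]
```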
Storage Locations
data/processed/resume_faiss/
├── resume_index_{user_id}.faiss # FAISS index
├── resume_metas_{user_id}.json # Metadata
├── jd_embedding_{user_id}.npy # Job description embedding
└── jd_text_{user_id}.txt # Job description text
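Per-user artifact paths can be derived from the layout above. A sketch using pathlib (the directory layout is taken from this page; the helper name is hypothetical):

```python
from pathlib import Path

BASE = Path("data/processed/resume_faiss")

def user_artifacts(user_id):
    """Per-user FAISS artifact paths following the storage layout above."""
    return {
        "index": BASE / f"resume_index_{user_id}.faiss",
        "metas": BASE / f"resume_metas_{user_id}.json",
        "jd_embedding": BASE / f"jd_embedding_{user_id}.npy",
        "jd_text": BASE / f"jd_text_{user_id}.txt",
    }
```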
Error Responses
Status Code : Description
400 : Bad request (invalid file type, missing parameters)
401 : Unauthorized (invalid/expired token)
413 : File too large (>16MB)
500 : Internal server error (processing failed)
{
  "success": false,
  "message": "Failed to extract text from PDF"
}