
Overview

The Resume Optimizer is an AI-powered tool that helps job seekers enhance their resumes based on specific job requirements. It uses LlamaIndex for RAG (Retrieval-Augmented Generation) with Nebius AI models to provide targeted, actionable suggestions for improving resume effectiveness.

Key Features

  • PDF Resume Processing: Upload and analyze resumes in PDF format
  • Job-Specific Optimization: Get tailored suggestions based on job title and description
  • Multiple Optimization Types: ATS keywords, experience enhancement, skills hierarchy, and more
  • Real-time Preview: View your resume while making changes
  • AI-Powered Analysis: Leverages advanced language models for intelligent suggestions

Architecture

LlamaIndex Integration

from llama_index.core import SimpleDirectoryReader, Settings, VectorStoreIndex
from llama_index.embeddings.nebius import NebiusEmbedding
from llama_index.llms.nebius import NebiusLLM

Implementation

Resume Loading

from llama_index.core import SimpleDirectoryReader
import os
import shutil  # for removing the temp directory once processing is done
import tempfile

def load_resume(uploaded_file):
    """Load and process PDF resume."""
    # Create temporary directory
    temp_dir = tempfile.mkdtemp()
    
    # Save uploaded PDF
    file_path = os.path.join(temp_dir, uploaded_file.name)
    with open(file_path, "wb") as f:
        f.write(uploaded_file.getbuffer())
    
    # Load with LlamaIndex
    documents = SimpleDirectoryReader(temp_dir).load_data()
    
    return documents, temp_dir
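
load_resume leaves its temporary directory on disk, which is why the caller gets temp_dir back. A minimal sketch of the save-and-clean-up lifecycle using only the stdlib (save_upload_to_temp is a hypothetical helper mirroring the save step above, not part of the app):

```python
import os
import shutil
import tempfile

def save_upload_to_temp(filename: str, data: bytes) -> tuple[str, str]:
    """Write uploaded bytes into a fresh temp directory; return (file_path, temp_dir)."""
    temp_dir = tempfile.mkdtemp()
    file_path = os.path.join(temp_dir, filename)
    with open(file_path, "wb") as f:
        f.write(data)
    return file_path, temp_dir

file_path, temp_dir = save_upload_to_temp("resume.pdf", b"%PDF-1.4 minimal bytes")
# ... run SimpleDirectoryReader(temp_dir).load_data() here ...
shutil.rmtree(temp_dir, ignore_errors=True)  # clean up once documents are extracted
```

Deleting the directory after load_data() avoids accumulating one orphaned temp folder per upload across Streamlit reruns.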

Nebius LLM and Embeddings

from llama_index.core import Settings
from llama_index.embeddings.nebius import NebiusEmbedding
from llama_index.llms.nebius import NebiusLLM
import os

# Initialize Nebius LLM
llm = NebiusLLM(
    model="Qwen/Qwen3-235B-A22B",  # Or "deepseek-ai/DeepSeek-V3"
    api_key=os.getenv("NEBIUS_API_KEY")
)

# Initialize Nebius embeddings
embed_model = NebiusEmbedding(
    model_name="BAAI/bge-en-icl",
    api_key=os.getenv("NEBIUS_API_KEY")
)

# Configure LlamaIndex settings
Settings.llm = llm
Settings.embed_model = embed_model

RAG Optimization Pipeline

from llama_index.core import VectorStoreIndex

def run_rag_completion(
    documents,
    query_text: str,
    job_title: str,
    job_description: str,
    embedding_model: str = "BAAI/bge-en-icl",
    generative_model: str = "Qwen/Qwen3-235B-A22B"
) -> str:
    """Run RAG completion for resume optimization."""
    # Configure models
    llm = NebiusLLM(
        model=generative_model,
        api_key=os.getenv("NEBIUS_API_KEY")
    )
    
    embed_model = NebiusEmbedding(
        model_name=embedding_model,
        api_key=os.getenv("NEBIUS_API_KEY")
    )
    
    Settings.llm = llm
    Settings.embed_model = embed_model
    
    # Step 1: Analyze the resume
    analysis_prompt = """
    Analyze this resume in detail. Focus on:
    1. Key skills and expertise
    2. Professional experience and achievements
    3. Education and certifications
    4. Notable projects or accomplishments
    5. Career progression and gaps
    
    Provide a concise analysis in bullet points.
    """
    
    index = VectorStoreIndex.from_documents(documents)
    resume_analysis = index.as_query_engine(similarity_top_k=5).query(analysis_prompt)
    
    # Step 2: Generate optimization suggestions
    optimization_prompt = f"""
    Based on the resume analysis and job requirements, provide specific, actionable improvements.
    
    Resume Analysis:
    {resume_analysis}
    
    Job Title: {job_title}
    Job Description: {job_description}
    
    Optimization Request: {query_text}
    
    Provide a direct, structured response in this exact format:

    ## Key Findings
    • [2-3 bullet points highlighting main alignment and gaps]

    ## Specific Improvements
    • [3-5 bullet points with concrete suggestions]
    • Each bullet should start with a strong action verb
    • Include specific examples where possible

    ## Action Items
    • [2-3 specific, immediate steps to take]
    • Each item should be clear and implementable

    Keep all points concise and actionable.
    """
    
    optimization_suggestions = index.as_query_engine(similarity_top_k=5).query(optimization_prompt)
    
    return str(optimization_suggestions)

Two-Stage RAG Process

  1. Resume Analysis Stage:
    • Creates vector index from resume documents
    • Queries for key skills, experience, education
    • Identifies career progression and gaps
  2. Optimization Stage:
    • Combines resume analysis with job requirements
    • Uses vector search to find relevant resume sections
    • Generates targeted improvement suggestions
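
The two stages above can be sketched independently of LlamaIndex; here query_engine stands in for index.as_query_engine().query, and the prompt shapes loosely mirror the ones in run_rag_completion (this is an illustrative stub, not the app's code):

```python
from typing import Callable

def two_stage_optimize(
    query_engine: Callable[[str], str],
    job_title: str,
    job_description: str,
    request: str,
) -> str:
    """Stage 1: analyze the resume. Stage 2: optimize using that analysis."""
    analysis = query_engine(
        "Analyze this resume: key skills, experience, education, gaps."
    )
    # The first stage's output is folded into the second stage's prompt
    optimization_prompt = (
        f"Resume Analysis:\n{analysis}\n\n"
        f"Job Title: {job_title}\n"
        f"Job Description: {job_description}\n\n"
        f"Optimization Request: {request}"
    )
    return query_engine(optimization_prompt)

# A stub engine in place of a real RAG query engine
stub = lambda prompt: f"(answer grounded in {len(prompt)}-char prompt)"
print(two_stage_optimize(stub, "Data Engineer", "Build ETL pipelines", "ATS keywords"))
```

Chaining the stages this way lets the second query retrieve resume chunks relevant to both the analysis and the job description.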

Optimization Types

Available Optimizations

optimization_prompts = {
    "ATS Keyword Optimizer": 
        "Identify and optimize ATS keywords. Focus on exact matches and semantic variations from the job description.",
    
    "Experience Section Enhancer": 
        "Enhance experience section to align with job requirements. Focus on quantifiable achievements.",
    
    "Skills Hierarchy Creator": 
        "Organize skills based on job requirements. Identify gaps and development opportunities.",
    
    "Professional Summary Crafter": 
        "Create a targeted professional summary highlighting relevant experience and skills.",
    
    "Education Optimizer": 
        "Optimize education section to emphasize relevant qualifications for this position.",
    
    "Technical Skills Showcase": 
        "Organize technical skills based on job requirements. Highlight key competencies.",
    
    "Career Gap Framing": 
        "Address career gaps professionally. Focus on growth and relevant experience."
}

  • ATS Keywords: Optimize for Applicant Tracking Systems with exact keyword matches
  • Experience Enhancement: Improve work experience with quantifiable achievements
  • Skills Hierarchy: Organize skills based on job relevance and importance
  • Professional Summary: Craft compelling summaries highlighting key qualifications
  • Education Optimizer: Emphasize relevant educational background
  • Technical Skills: Showcase technical competencies aligned with job needs
  • Career Gap Framing: Address employment gaps professionally and positively

Streamlit Application

import streamlit as st
from PyPDF2 import PdfReader
import base64

st.set_page_config(page_title="Resume Optimizer", layout="wide")
st.title("📝 Resume Optimizer")
st.caption("Powered by Nebius AI")

# Sidebar: Configuration and resume upload
with st.sidebar:
    st.image("./Nebius.png", width=150)
    
    # Model selection
    generative_model = st.selectbox(
        "Generative Model",
        ["Qwen/Qwen3-235B-A22B", "deepseek-ai/DeepSeek-V3"],
        index=0
    )
    
    st.divider()
    
    # Resume upload
    st.subheader("Upload Resume")
    uploaded_file = st.file_uploader(
        "Choose your resume (PDF)",
        type="pdf",
        accept_multiple_files=False
    )
    
    if uploaded_file:
        # Process resume
        documents, temp_dir = load_resume(uploaded_file)
        st.session_state.docs_loaded = True
        st.session_state.documents = documents
        st.success("✓ Resume loaded successfully")
        
        # PDF Preview
        st.subheader("Resume Preview")
        base64_pdf = base64.b64encode(uploaded_file.getvalue()).decode('utf-8')
        pdf_display = f'<iframe src="data:application/pdf;base64,{base64_pdf}" width="100%" height="500" type="application/pdf"></iframe>'
        st.markdown(pdf_display, unsafe_allow_html=True)

# Main area: Job information and optimization
col1, col2 = st.columns([1, 1])

with col1:
    st.subheader("Job Information")
    job_title = st.text_input("Job Title")
    job_description = st.text_area("Job Description", height=200)
    
    st.subheader("Optimization Options")
    optimization_type = st.selectbox(
        "Select Optimization Type",
        [
            "ATS Keyword Optimizer",
            "Experience Section Enhancer",
            "Skills Hierarchy Creator",
            "Professional Summary Crafter",
            "Education Optimizer",
            "Technical Skills Showcase",
            "Career Gap Framing"
        ]
    )
    
    if st.button("Optimize Resume"):
        if not st.session_state.get('docs_loaded', False):
            st.error("Please upload your resume first")
        elif not job_title or not job_description:
            st.error("Please provide both job title and description")
        else:
            with st.spinner("Analyzing and generating suggestions..."):
                response = run_rag_completion(
                    st.session_state.documents,
                    optimization_prompts[optimization_type],
                    job_title,
                    job_description,
                    "BAAI/bge-en-icl",
                    generative_model
                )
                # Drop the <think>...</think> reasoning block emitted by thinking models
                if "</think>" in response:
                    response = response.split("</think>", 1)[1].strip()
                if "messages" not in st.session_state:
                    st.session_state.messages = []
                st.session_state.messages.append({"role": "assistant", "content": response})

with col2:
    st.subheader("Optimization Results")
    if "messages" in st.session_state:
        for message in st.session_state.messages:
            st.markdown(message["content"])

Vector Search in Resume Analysis

How It Works

  1. Document Chunking: Resume PDF is split into semantic chunks
  2. Embedding: Each chunk is embedded using BAAI/bge-en-icl
  3. Index Creation: Chunks are indexed in a vector store
  4. Query Embedding: Questions are embedded with the same model
  5. Similarity Search: Top-k most relevant chunks are retrieved
  6. Context Augmentation: Retrieved chunks augment the LLM prompt

# This happens automatically in LlamaIndex:
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(
    similarity_top_k=5  # Retrieve top 5 most similar chunks
)
response = query_engine.query(prompt)
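
Under the hood, step 5 is a nearest-neighbor ranking: chunk embeddings are scored by cosine similarity against the query embedding, and the top k win. A toy illustration with hand-made 3-d vectors (real embeddings from BAAI/bge-en-icl have thousands of dimensions):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, chunks, k=2):
    """Rank (text, vector) chunks by cosine similarity to the query vector."""
    scored = sorted(chunks, key=lambda c: cosine(query, c[1]), reverse=True)
    return [text for text, _ in scored[:k]]

chunks = [
    ("Led a team of 5 engineers", [0.9, 0.1, 0.0]),
    ("BSc in Computer Science",   [0.1, 0.9, 0.0]),
    ("Built ETL pipelines",       [0.8, 0.2, 0.1]),
]
print(top_k([1.0, 0.0, 0.0], chunks))  # engineering-flavored chunks rank first
```

similarity_top_k=5 in the query engine is exactly this k: how many of the best-matching resume chunks get passed to the LLM as context.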

Installation

git clone https://github.com/Arindam200/awesome-ai-apps.git
cd awesome-ai-apps/rag_apps/resume_optimizer
uv sync

Environment Setup

Create a .env file:
NEBIUS_API_KEY=your_api_key_here
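
Since every Nebius call reads NEBIUS_API_KEY via os.getenv, it is worth failing fast at startup if the variable is missing rather than erroring mid-request. A small guard (require_api_key is a hypothetical helper, not part of the app):

```python
import os

def require_api_key(name: str = "NEBIUS_API_KEY") -> str:
    """Return the key from the environment, or fail with a clear message."""
    key = os.getenv(name)
    if not key:
        raise RuntimeError(f"{name} is not set; add it to your .env file")
    return key
```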

Running the Application

streamlit run main.py

Workflow

  1. Upload Resume: Upload your resume in PDF format via the sidebar
  2. Enter Job Details: Provide the target job title and full job description
  3. Select Optimization: Choose the type of optimization you need
  4. Generate Suggestions: Click "Optimize Resume" to get AI-powered recommendations
  5. Review and Apply: Review suggestions and update your resume accordingly

Use Cases

  • Job Applications: Tailor your resume for specific job applications
  • ATS Optimization: Ensure your resume passes Applicant Tracking Systems
  • Career Transitions: Reframe experience for new industries or roles
  • Resume Review: Get objective feedback on resume effectiveness

Best Practices

  1. Complete Job Descriptions: Provide full, detailed job descriptions for better analysis
  2. Current Resume: Upload your most recent, complete resume
  3. Multiple Optimizations: Run different optimization types for comprehensive improvements
  4. Iterative Refinement: Apply suggestions, re-upload, and optimize again

Model Comparison

| Model | Strengths | Best For |
| --- | --- | --- |
| Qwen/Qwen3-235B-A22B | Comprehensive analysis, detailed suggestions | General optimization |
| deepseek-ai/DeepSeek-V3 | Technical depth, code-related roles | Technical/engineering resumes |
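
The comparison above can be folded into a tiny default-picker (suggest_model is a hypothetical convenience for pre-selecting the sidebar dropdown, not part of the app's code):

```python
def suggest_model(job_title: str) -> str:
    """Pick a default generative model based on how technical the role sounds."""
    technical = ("engineer", "developer", "scientist", "devops", "sre", "programmer")
    if any(word in job_title.lower() for word in technical):
        return "deepseek-ai/DeepSeek-V3"
    return "Qwen/Qwen3-235B-A22B"

print(suggest_model("Senior Backend Engineer"))  # → deepseek-ai/DeepSeek-V3
```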

  • LlamaIndex: LlamaIndex RAG framework documentation
  • Nebius AI: Nebius AI model provider
