
Overview

The Resume Analysis feature uses AI to evaluate uploaded resumes across multiple dimensions, providing detailed scoring, strengths identification, and actionable improvement suggestions. The system supports multiple file formats and processes analysis asynchronously using Redis Streams for optimal performance.
All resume analysis operations are processed asynchronously to ensure fast response times and handle large files efficiently.

Supported File Formats

The system accepts the following resume formats:

PDF Documents

Native PDF parsing with text extraction support

Word Documents

DOCX and DOC format support

Text Files

Plain text (TXT) and Markdown (MD) files

Max File Size

Up to 10MB per resume file

Upload and Analysis Workflow

The resume analysis follows an asynchronous processing pattern for reliability and scalability:
Step 1: Upload Resume

Users upload a resume file through the web interface. The system validates:
  • File size (max 10MB)
  • File type (PDF, DOCX, DOC, TXT, MD)
  • File content integrity
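The three validation checks above can be sketched as a single guard method. This is an illustrative sketch only: the class and method names (`UploadValidationSketch`, `isValidUpload`) are assumptions, not taken from the codebase.

```java
import java.util.Set;

// Illustrative upload validation mirroring the checks listed above:
// size limit, allowed extension. Content-integrity checking is omitted.
public class UploadValidationSketch {
    static final long MAX_SIZE_BYTES = 10L * 1024 * 1024; // 10MB limit
    static final Set<String> ALLOWED_EXTENSIONS = Set.of("pdf", "docx", "doc", "txt", "md");

    static boolean isValidUpload(String filename, long sizeBytes) {
        if (sizeBytes <= 0 || sizeBytes > MAX_SIZE_BYTES) return false;
        int dot = filename.lastIndexOf('.');
        if (dot < 0) return false;
        String ext = filename.substring(dot + 1).toLowerCase();
        return ALLOWED_EXTENSIONS.contains(ext);
    }

    public static void main(String[] args) {
        System.out.println(isValidUpload("resume.pdf", 2_000_000));  // true
        System.out.println(isValidUpload("resume.exe", 2_000_000));  // false
        System.out.println(isValidUpload("resume.pdf", 20_000_000)); // false
    }
}
```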
Step 2: Duplicate Detection

The system calculates a SHA-256 hash of the file content to detect duplicates:
// ResumeEntity.java:23
@Column(nullable = false, unique = true, length = 64)
private String fileHash;
If an identical resume was previously uploaded, the system returns the existing analysis immediately.
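Hashing the raw file bytes with SHA-256 and hex-encoding the digest yields exactly 64 characters, which matches the `length = 64` column constraint above. A minimal sketch (the method name `computeFileHash` is illustrative, not from the codebase):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

// Sketch of duplicate detection: SHA-256 over the raw file bytes,
// hex-encoded to a 64-character string suitable for a unique column.
public class FileHashSketch {
    static String computeFileHash(byte[] fileBytes) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        byte[] hash = digest.digest(fileBytes); // 32 bytes for SHA-256
        return HexFormat.of().formatHex(hash);  // 64 hex characters
    }

    public static void main(String[] args) throws Exception {
        String hash = computeFileHash("sample resume text".getBytes(StandardCharsets.UTF_8));
        System.out.println(hash.length()); // 64
    }
}
```

Because the column is declared `unique = true`, a duplicate upload can be detected with a single indexed lookup on the hash before any parsing or AI work is done.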
Step 3: Text Extraction

Apache Tika parses the resume and extracts text content:
// ResumeUploadService.java:65
String resumeText = parseService.parseResume(file);
Scanned PDFs without text layers cannot be parsed. Ensure your PDF contains selectable text.
Step 4: Storage

The original file is uploaded to S3-compatible storage (RustFS or MinIO):
// ResumeUploadService.java:71
String fileKey = storageService.uploadResume(file);
String fileUrl = storageService.getFileUrl(fileKey);
Step 5: Database Persistence

Resume metadata is saved with status PENDING:
// ResumeEntity.java:58-61
@Enumerated(EnumType.STRING)
private AsyncTaskStatus analyzeStatus = AsyncTaskStatus.PENDING;
Step 6: Async Analysis Queue

An analysis task is sent to Redis Stream:
// ResumeUploadService.java:79
analyzeStreamProducer.sendAnalyzeTask(savedResume.getId(), resumeText);
The API returns immediately with status PENDING.
Step 7: AI Analysis Processing

A background consumer picks up the task and:
  • Updates status to PROCESSING
  • Sends resume text to AI model (Alibaba Cloud DashScope)
  • Receives structured analysis response
  • Updates status to COMPLETED or FAILED

Status Flow

The analysis progresses through the following states:
  • PENDING: Resume uploaded, awaiting analysis worker
  • PROCESSING: AI model is analyzing the resume
  • COMPLETED: Analysis finished successfully
  • FAILED: Analysis encountered an error (automatic retry up to 3 times)
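A plausible shape for the `AsyncTaskStatus` enum, inferred from the states listed above (the `isTerminal` helper is an assumption added for illustration; the real enum may differ):

```java
// Status lifecycle: PENDING -> PROCESSING -> COMPLETED | FAILED.
public enum AsyncTaskStatus {
    PENDING, PROCESSING, COMPLETED, FAILED;

    // Terminal states need no further polling; FAILED may still be
    // retried automatically (up to 3 times) or manually via reanalysis.
    public boolean isTerminal() {
        return this == COMPLETED || this == FAILED;
    }

    public static void main(String[] args) {
        System.out.println(PENDING.isTerminal());   // false
        System.out.println(COMPLETED.isTerminal()); // true
    }
}
```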

Analysis Dimensions

The AI evaluates resumes across five key dimensions, with a total score out of 100:

Content Completeness

0-25 points. Evaluates completeness of work experience, education, projects, and personal information.

Structure Clarity

0-20 points. Assesses logical organization, section hierarchy, and readability.

Skill Matching

0-25 points. Analyzes relevance of technical skills, certifications, and domain expertise.

Professional Expression

0-15 points. Reviews language quality, grammar, conciseness, and professionalism.

Project Experience

0-15 points. Evaluates project descriptions, technical depth, and measurable achievements.

Analysis Entity Structure

// ResumeAnalysisEntity.java:24-45
private Integer overallScore;      // Total score (0-100)
private Integer contentScore;      // Content completeness (0-25)
private Integer structureScore;    // Structure clarity (0-20)
private Integer skillMatchScore;   // Skill matching (0-25)
private Integer expressionScore;   // Professional expression (0-15)
private Integer projectScore;      // Project experience (0-15)
private String summary;            // AI-generated summary
private String strengthsJson;      // List of strengths (JSON)
private String suggestionsJson;    // Improvement suggestions (JSON)

Viewing Analysis Results

Resume List

Access all uploaded resumes via the Resume History page:
GET /api/resumes
Returns:
  • Resume ID and filename
  • Upload timestamp
  • Current analysis status
  • Overall score (when completed)

Resume Detail

View comprehensive analysis for a specific resume:
GET /api/resumes/{id}/detail
{
  "id": 123,
  "filename": "john_doe_resume.pdf",
  "uploadedAt": "2026-03-10T14:30:00",
  "analyzeStatus": "COMPLETED",
  "analysis": {
    "overallScore": 82,
    "contentScore": 22,
    "structureScore": 18,
    "skillMatchScore": 23,
    "expressionScore": 12,
    "projectScore": 14,
    "summary": "Strong technical background...",
    "strengths": [
      "Solid Java and Spring Boot experience",
      "Clear project descriptions with metrics"
    ],
    "suggestions": [
      "Add more quantifiable achievements",
      "Include certification details"
    ]
  }
}

PDF Export

Generate a professional PDF report of the analysis:
GET /api/resumes/{id}/export
The system uses iText 8 to generate structured PDF reports:
// ResumeController.java:74-88
@GetMapping("/api/resumes/{id}/export")
public ResponseEntity<byte[]> exportAnalysisPdf(@PathVariable Long id) {
    var result = historyService.exportAnalysisPdf(id);
    String filename = URLEncoder.encode(result.filename(), StandardCharsets.UTF_8);
    
    return ResponseEntity.ok()
        .header(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename*=UTF-8''" + filename)
        .contentType(MediaType.APPLICATION_PDF)
        .body(result.pdfBytes());
}
The PDF includes:
  • Cover page with resume metadata
  • Score summary with visual charts
  • Detailed analysis for each dimension
  • Strengths and suggestions sections
  • Chinese font support (ZhuqueFangsong-Regular.ttf)

Manual Reanalysis

If analysis fails, users can manually trigger a retry:
POST /api/resumes/{id}/reanalyze
This endpoint is rate-limited to 2 requests per IP to prevent abuse.
The reanalysis process:
  1. Resets status to PENDING
  2. Clears previous error message
  3. Retrieves cached resume text (or re-downloads from storage)
  4. Sends new analysis task to Redis Stream
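The four reanalysis steps can be sketched in plain Java. This uses a minimal in-memory `Resume` class instead of the real JPA entity, and a string stand-in for the Redis Stream producer; all names here are illustrative, not from the codebase.

```java
// Conceptual sketch of the reanalysis flow; persistence, storage
// re-download, and the Redis producer are stubbed out.
public class ReanalyzeSketch {
    static class Resume {
        String status = "FAILED";
        String analyzeError = "AI model timeout";
        String cachedText = "extracted resume text";
    }

    static String reanalyze(Resume resume) {
        resume.status = "PENDING";         // 1. reset status to PENDING
        resume.analyzeError = null;        // 2. clear previous error message
        String text = resume.cachedText;   // 3. cached text (or re-download from storage)
        return "queued:" + text;           // 4. stand-in for sending the task to Redis Stream
    }

    public static void main(String[] args) {
        Resume r = new Resume();
        reanalyze(r);
        System.out.println(r.status + "," + r.analyzeError); // PENDING,null
    }
}
```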

Deleting Resumes

Remove a resume and all associated analysis data:
DELETE /api/resumes/{id}
This operation:
  • Deletes the resume entity from the database
  • Removes all analysis records
  • Does not delete the file from object storage (for audit trail)

Rate Limiting

The upload endpoint is protected by rate limiting:
// ResumeController.java:43
@RateLimit(dimensions = {RateLimit.Dimension.GLOBAL, RateLimit.Dimension.IP}, count = 5)
Users are limited to 5 resume uploads per time window (both globally and per IP address).
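Conceptually, the per-IP dimension behaves like a counter per client that rejects requests past the limit within a window. The toy fixed-window limiter below is only a sketch of that idea; the actual `@RateLimit` implementation is not shown on this page and likely differs (e.g. Redis-backed, with window expiry).

```java
import java.util.HashMap;
import java.util.Map;

// Toy fixed-window limiter illustrating the per-IP count = 5 policy.
// Real implementations also reset counts when the window elapses.
public class RateLimitSketch {
    private final int limit;
    private final Map<String, Integer> counts = new HashMap<>();

    RateLimitSketch(int limit) { this.limit = limit; }

    boolean tryAcquire(String ip) {
        int used = counts.merge(ip, 1, Integer::sum); // increment and read
        return used <= limit;
    }

    public static void main(String[] args) {
        RateLimitSketch limiter = new RateLimitSketch(5);
        boolean allowed = true;
        for (int i = 0; i < 5; i++) allowed &= limiter.tryAcquire("203.0.113.7");
        System.out.println(allowed);                           // true  (first 5 pass)
        System.out.println(limiter.tryAcquire("203.0.113.7")); // false (6th rejected)
    }
}
```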

Error Handling

Common error scenarios:
Error: 无法从文件中提取文本内容 (unable to extract text content from the file)
Causes:
  • Scanned PDF without text layer
  • Corrupted or password-protected file
  • Unsupported file encoding
Solution: Convert to a text-based format or use OCR preprocessing.
Error: Analysis status stuck at FAILED
Causes:
  • AI API key invalid or expired
  • AI model timeout or rate limit
  • Resume text too long (exceeds token limit)
Solution: Check the analyzeError field for details and use manual reanalysis.
Error: 文件大小超过限制 (file size exceeds the limit)
Limit: 10MB
Solution: Compress or optimize the resume file.

Best Practices

Optimize File Size

Keep resumes under 5MB for faster processing. Use PDF compression tools if needed.

Use Text-Based PDFs

Avoid scanned images. Ensure PDF text is selectable before upload.

Monitor Status

Implement polling (every 3-5 seconds) to check analysis status until completion.
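A client-side polling loop can be sketched as follows. To keep the sketch self-contained and testable, the HTTP call to `GET /api/resumes/{id}/detail` is replaced by an iterator of status strings; a real client would issue the request and sleep 3-5 seconds between attempts.

```java
import java.util.Iterator;
import java.util.List;

// Illustrative polling loop: keep checking until a terminal status
// (COMPLETED or FAILED) is observed or the status source is exhausted.
public class StatusPollingSketch {
    static String pollUntilDone(Iterator<String> statuses) {
        String status = "PENDING";
        while (statuses.hasNext()) {
            status = statuses.next(); // stand-in for GET /api/resumes/{id}/detail
            if (status.equals("COMPLETED") || status.equals("FAILED")) break;
            // In a real client: Thread.sleep(3_000) before the next request.
        }
        return status;
    }

    public static void main(String[] args) {
        var sequence = List.of("PENDING", "PROCESSING", "PROCESSING", "COMPLETED").iterator();
        System.out.println(pollUntilDone(sequence)); // COMPLETED
    }
}
```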

Handle Failures Gracefully

Display error messages clearly and provide a retry button for failed analyses.
