
Overview

ION Career automatically scores job applicants based on their answers to screening questions. The scoring system provides a simple 0-10 scale that helps HR teams quickly identify qualified candidates.

How Scoring Works

The scoring algorithm is implemented in the process_job_questions() handler, which runs automatically after a Job Applicant is created.

Algorithm Steps

1. Parse JSON Answers

Extract the applicant’s answers from the custom_job_question_answers field:
handlers.py
answers = json.loads(doc.custom_job_question_answers)
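Assuming the stored value is a JSON object keyed by question fieldname (the fieldnames below are illustrative, not taken from the actual schema), the parse step behaves like this:

```python
import json

# Hypothetical example of the JSON string stored in
# custom_job_question_answers by the web form
raw = '{"has_degree": "Yes", "willing_to_relocate": "No"}'

# json.loads turns it into a plain dict keyed by question fieldname
answers = json.loads(raw)
```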
2. Retrieve Question Set

Fetch the question set linked to the job opening:
handlers.py
job_opening = doc.job_title  # the Job Applicant's link to the Job Opening
qset_name = frappe.db.get_value(
    "Job Opening",
    job_opening,
    "custom_job_question_set"
)
qset = frappe.get_doc("Job Question Set", qset_name)
3. Count Positive Answers

Iterate through questions and count “Yes” answers:
handlers.py
total_answered = 0

for q in qset.questions:
    if answers.get(q.fieldname) == "Yes":
        total_answered += 1
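Outside Frappe, this counting step reduces to a dict lookup per fieldname. A minimal sketch, with hypothetical fieldnames standing in for the question set rows:

```python
# Parsed answers and the question set's fieldnames (illustrative values)
answers = {"has_degree": "Yes", "relocate": "No", "five_years_exp": "Yes"}
question_fieldnames = ["has_degree", "relocate", "five_years_exp"]

# Count how many questions were answered "Yes"; unanswered questions
# simply fail the comparison and are not counted
total_answered = sum(
    1 for fieldname in question_fieldnames
    if answers.get(fieldname) == "Yes"
)
```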
4. Calculate Score

Apply the scoring formula to normalize to 0-10 scale:
handlers.py
score_multiplier = 10 / len(qset.questions)
doc.custom_score = score_multiplier * total_answered
5. Save Score

Persist the calculated score to the applicant record:
handlers.py
doc.save(ignore_permissions=True)

Score Calculation Formula

The scoring formula is straightforward:

Score Formula

Score = (10 / Total Questions) × Number of "Yes" Answers

Examples

Question Set: 5 screening questions

Applicant Answers:
  • Question 1: Yes ✓
  • Question 2: Yes ✓
  • Question 3: No ✗
  • Question 4: Yes ✓
  • Question 5: Yes ✓

Calculation:
score_multiplier = 10 / 5 = 2
total_answered = 4
score = 2 × 4 = 8.0

Result: Score = 8.0 / 10
Question Set: 3 screening questions

Applicant Answers:
  • Question 1: Yes ✓
  • Question 2: No ✗
  • Question 3: No ✗

Calculation:
score_multiplier = 10 / 3 ≈ 3.33
total_answered = 1
score = 3.33 × 1 ≈ 3.33

Result: Score = 3.33 / 10
Question Set: 8 screening questions

Applicant Answers: All “Yes” ✓

Calculation:
score_multiplier = 10 / 8 = 1.25
total_answered = 8
score = 1.25 × 8 = 10.0

Result: Score = 10.0 / 10 (Perfect)
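The formula and the three worked examples above can be checked with a short pure-Python helper (the function name is illustrative):

```python
def calculate_score(total_questions: int, yes_answers: int) -> float:
    """Normalize a count of "Yes" answers onto the 0-10 scale."""
    return (10 / total_questions) * yes_answers
```

Running it with the example inputs reproduces 8.0, approximately 3.33, and 10.0.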

Score Interpretation

While the scoring system is simple, here’s a general guide for interpreting scores:

Highly Qualified

8.0 - 10.0
Answered most or all screening questions positively. Strong match for the position.

Moderately Qualified

5.0 - 7.9
Answered some screening questions positively. May be worth interviewing depending on other factors.

Less Qualified

0.0 - 4.9
Answered few screening questions positively. May not meet basic requirements.
Scores should be used as one factor in the hiring decision, not the sole determining factor. Consider resumes, experience, and other qualifications.
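As a sketch, the bands above could be encoded in a small helper for reports or dashboards (the function name and label strings are illustrative, not part of the app):

```python
def interpret_score(score: float) -> str:
    """Map a 0-10 score onto the interpretation bands described above."""
    if score >= 8.0:
        return "Highly Qualified"
    if score >= 5.0:
        return "Moderately Qualified"
    return "Less Qualified"
```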

Code Implementation

Here’s the complete scoring implementation from handlers.py:
handlers.py
import json
import frappe

@frappe.whitelist()
def process_job_questions(doc, method):
    # Exit if no answers provided
    if not doc.custom_job_question_answers:
        return

    # Parse JSON answers from web form
    answers = json.loads(doc.custom_job_question_answers)

    # Get the question set for this job opening
    job_opening = doc.job_title
    qset_name = frappe.db.get_value(
        "Job Opening",
        job_opening,
        "custom_job_question_set"
    )

    if not qset_name:
        return

    qset = frappe.get_doc("Job Question Set", qset_name)

    # Count "Yes" answers and create answer records
    total_answered = 0

    for q in qset.questions:
        # Create structured answer record
        doc.append("custom_question_answers", {
            "question": q.question,
            "fieldname": q.fieldname,
            "answer": answers.get(q.fieldname),
            "job_opening": job_opening
        })

        # Count "Yes" answers for scoring
        if answers.get(q.fieldname) == "Yes":
            total_answered += 1

    # Calculate final score (0-10 scale); guard against an empty question set
    if not qset.questions:
        return
    score_multiplier = 10 / len(qset.questions)
    doc.custom_score = score_multiplier * total_answered

    # Save the applicant with score
    doc.save(ignore_permissions=True)
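For unit testing outside Frappe, the core of this handler can be mirrored as a pure function over plain dicts. This is a sketch, not the shipped code: the dicts below stand in for the Job Question Set child rows and the applicant document, and all field values are hypothetical.

```python
import json

def score_applicant(raw_answers: str, questions: list) -> tuple:
    """Frappe-free mirror of the scoring core of process_job_questions().

    Each entry in `questions` stands in for a Job Question Set child row
    and only needs "question" and "fieldname" keys.
    Returns (answer_records, score).
    """
    answers = json.loads(raw_answers)
    records = []
    total_answered = 0
    for q in questions:
        # Structured answer record, as in doc.append(...) above
        records.append({
            "question": q["question"],
            "fieldname": q["fieldname"],
            "answer": answers.get(q["fieldname"]),
        })
        if answers.get(q["fieldname"]) == "Yes":
            total_answered += 1
    score = (10 / len(questions)) * total_answered
    return records, score

questions = [
    {"question": "Do you have a degree?", "fieldname": "has_degree"},
    {"question": "Can you relocate?", "fieldname": "relocate"},
]
records, score = score_applicant(
    '{"has_degree": "Yes", "relocate": "No"}', questions
)
```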

Validation Logic

Before scoring, the system validates that required questions are answered:
handlers.py
def validate(doc, method):
    if not doc.custom_job_question_answers:
        return

    answers = json.loads(doc.custom_job_question_answers)

    qset_name = frappe.db.get_value(
        "Job Opening",
        doc.job_title,
        "custom_job_question_set"
    )

    if not qset_name:
        return

    qset = frappe.get_doc("Job Question Set", qset_name)

    # Check each required question has an answer
    for q in qset.questions:
        if q.required and q.fieldname not in answers:
            frappe.throw(
                f"Missing answer for required question: {q.question}"
            )
This validation runs during web form submission, preventing incomplete applications.
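The validation logic can likewise be exercised without Frappe. A minimal stand-in that raises a plain exception instead of calling frappe.throw (question data is hypothetical):

```python
import json

def validate_answers(raw_answers: str, questions: list) -> None:
    """Raise if any required question lacks an answer (frappe.throw stand-in)."""
    answers = json.loads(raw_answers)
    for q in questions:
        if q.get("required") and q["fieldname"] not in answers:
            raise ValueError(
                f"Missing answer for required question: {q['question']}"
            )

required_qs = [
    {"question": "Do you have a degree?",
     "fieldname": "has_degree",
     "required": True},
]
```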

Scoring Characteristics

Linear Scale
The scoring is perfectly linear - each “Yes” answer contributes equally to the total score.
Question Agnostic
All questions have equal weight. There’s no distinction between more or less important questions.
Binary Evaluation
Only “Yes” answers count toward the score. “No” answers contribute nothing.
Automatic Calculation
Scores are calculated automatically when the application is submitted - no manual intervention needed.
Read-Only Display
Scores are stored in a read-only field, preventing manual manipulation.

Customization Opportunities

While the current implementation is simple, here are ways it could be extended:

Weighted Questions

Add a weight field to Job Question and multiply by weight when scoring
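A weighted variant might look like the sketch below. It assumes each question row gains a numeric "weight" field, which does not exist in the current schema:

```python
# Hypothetical weighted scoring: each question dict carries a "weight"
# field; the score is the earned weight as a fraction of total weight,
# scaled to 0-10.
def weighted_score(answers: dict, questions: list) -> float:
    total_weight = sum(q["weight"] for q in questions)
    earned = sum(
        q["weight"] for q in questions
        if answers.get(q["fieldname"]) == "Yes"
    )
    return 10 * earned / total_weight
```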

Different Answer Types

Support multiple choice or numeric answers with different point values

Passing Threshold

Configure minimum score required to advance in hiring process

Category Scores

Group questions into categories (e.g., “Technical”, “Experience”) with separate sub-scores
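Category sub-scores could be computed with a sketch like this, assuming each question row gains a "category" field (again, not part of the current schema):

```python
from collections import defaultdict

# Hypothetical extension: group questions by a "category" field and
# compute a 0-10 sub-score per category.
def category_scores(answers: dict, questions: list) -> dict:
    counts = defaultdict(lambda: [0, 0])  # category -> [yes_count, total]
    for q in questions:
        counts[q["category"]][1] += 1
        if answers.get(q["fieldname"]) == "Yes":
            counts[q["category"]][0] += 1
    return {cat: 10 * yes / total for cat, (yes, total) in counts.items()}
```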
To implement custom scoring logic, modify the process_job_questions() function in handlers.py.
