
Overview

FairMatch AI uses a multi-agent architecture where specialized AI agents collaborate to evaluate candidates. Instead of a monolithic AI system, you get independent agents that each handle specific evaluation tasks, communicate over the ZyndAI network, and get paid for their work using the x402 micropayment protocol.

Architecture components

Orchestrator agent

The FairMatch Orchestrator coordinates all evaluation tasks and delegates work to specialized agents. It runs as the central hub of the system.
backend/ai_engine.py
agent_config = AgentConfig(
    name="FairMatch Orchestrator",
    description="Orchestrator for FairMatch AI evaluations. Delegates tasks to specialized agents.",
    capabilities={"ai": ["orchestration"], "protocols": ["http"]},
    webhook_host="0.0.0.0",
    webhook_port=5000,
    registry_url="https://registry.zynd.ai",
    api_key=os.environ.get("ZYND_API_KEY", ""),
    config_dir=".agent-orchestrator"
)

orchestrator = ZyndAIAgent(agent_config=agent_config)
The orchestrator discovers specialized agents on the ZyndAI registry and routes evaluation tasks to them based on their capabilities.

Specialized agents

FairMatch deploys five specialized agents that handle different evaluation aspects:
| Agent | Port | Purpose |
| --- | --- | --- |
| Resume Analyst | 5006 | Extracts structured data from resumes |
| GitHub Analyst | 5001 | Verifies GitHub profiles and analyzes code quality |
| Interview Grader | 5002 | Evaluates technical interview answers |
| Integrity Analyst | 5004 | Detects fraud and profile inconsistencies |
| Decision Intelligence | 5003 | Performs bias analysis and final recommendations |
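Each specialized agent is configured the same way as the orchestrator, with its own name, port, and capability set. A minimal sketch for the GitHub Analyst, assuming the same `AgentConfig` fields shown above (the capability labels and `config_dir` are illustrative, not confirmed values):

```python
import os

# Sketch: configuration for one specialized agent, mirroring the
# orchestrator config. Capability labels here are assumptions.
github_analyst_config = AgentConfig(
    name="FairMatch GitHub Analyst",
    description="Verifies GitHub profiles and analyzes code quality.",
    capabilities={"ai": ["code-analysis"], "protocols": ["http"]},
    webhook_host="0.0.0.0",
    webhook_port=5001,  # matches the port table above
    registry_url="https://registry.zynd.ai",
    api_key=os.environ.get("ZYND_API_KEY", ""),
    config_dir=".agent-github"
)

github_analyst = ZyndAIAgent(agent_config=github_analyst_config)
```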

Agent discovery and communication

Finding agents on the network

The orchestrator searches for agents by keyword on the ZyndAI registry:
backend/ai_engine.py
target_keyword = "FairMatch GitHub Analyst"
agents = orchestrator.search_agents_by_keyword(target_keyword)

if agents:
    target = agents[0]
    sync_url = str(target.get('httpWebhookUrl', '')).replace('/webhook', '/webhook/sync')
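The URL rewrite above can be isolated into a small helper for clarity (the function name is ours, not part of the SDK):

```python
def to_sync_url(webhook_url: str) -> str:
    """Convert an agent's async webhook URL to its synchronous variant.

    The registry exposes an async endpoint ending in /webhook; the
    orchestrator calls the /webhook/sync variant to get an immediate reply.
    """
    return str(webhook_url or "").replace("/webhook", "/webhook/sync")

print(to_sync_url("https://agents.example.com/webhook"))
# -> https://agents.example.com/webhook/sync
```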

Sending messages between agents

Agents communicate using the AgentMessage protocol:
backend/ai_engine.py
msg = AgentMessage(
    content=query_content,
    sender_id=orchestrator.agent_id,
    message_type="query",
    sender_did=orchestrator.identity_credential
)

# Use x402 processor for automatic payment
response = orchestrator.x402_processor.post(sync_url, json=msg.to_dict(), timeout=60)
The x402 micropayment protocol automatically handles payments between agents. When the orchestrator requests work from a specialized agent, the payment is processed transparently in the background.

Evaluation workflow

Here’s how the multi-agent system evaluates a candidate:

1. GitHub verification

The orchestrator sends the candidate’s GitHub URL to the GitHub Analyst agent:
backend/ai_engine.py
github_score = 0
if candidate.github_link:
    github_score = get_agent_score("github", candidate.github_link)

2. Interview grading

Interview answers are sent to the Interview Grader agent:
backend/ai_engine.py
interview_score = 0
if candidate.interview_answers:
    combined_text = "Answers: " + " | ".join(candidate.interview_answers)
    interview_score = get_agent_score("interview", combined_text)

3. Adaptive fraud detection

If there’s a significant discrepancy between GitHub and interview scores, the Integrity Analyst agent is automatically triggered:
backend/ai_engine.py
if abs(github_score - interview_score) > 30 and candidate.github_link:
    print(f"FRAUD DETECTED: GitHub({github_score}) vs Interview({interview_score}). Spawning Integrity Agent...")
    fraud_query = f"GitHub Score: {github_score}\nInterview Score: {interview_score}\nResume Info:\n{candidate.resume_text}"
    integrity_json = get_integrity_intelligence(fraud_query)
    integrity_penalty = integrity_json.get("penalty_score", 0)
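The trigger condition can be expressed as a small pure function (the helper name and constant are ours, introduced for illustration):

```python
FRAUD_SCORE_GAP = 30  # threshold from the check above

def should_run_integrity_check(github_score: int, interview_score: int,
                               has_github_link: bool) -> bool:
    """Return True when the GitHub/interview gap warrants a fraud check."""
    return has_github_link and abs(github_score - interview_score) > FRAUD_SCORE_GAP

print(should_run_integrity_check(90, 40, True))   # -> True  (gap of 50)
print(should_run_integrity_check(90, 70, True))   # -> False (gap of 20)
print(should_run_integrity_check(90, 40, False))  # -> False (no profile to verify)
```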

4. Decision intelligence

The Decision Intelligence agent receives all collected data and performs comprehensive analysis including bias detection:
backend/ai_engine.py
decision_query = f"""
Job Description:
Title: {job.title}
Required Skills: {', '.join(job.required_skills)}

Candidate Data:
GitHub Data (Score): {github_score} out of 100
Interview Data (Score): {interview_score} out of 100
Resume/Projects Text:
{candidate.resume_text}
"""

decision_json = get_decision_intelligence(decision_query)

5. Final scoring

The orchestrator combines all agent outputs using weighted scoring:
backend/ai_engine.py
total_weight = job.weight_skill + job.weight_github + job.weight_interview + job.weight_experience + job.weight_integrity

final_score = (
    skill_score * (job.weight_skill / total_weight) +
    github_score * (job.weight_github / total_weight) +
    interview_score * (job.weight_interview / total_weight) +
    experience_score * (job.weight_experience / total_weight) +
    integrity_score * (job.weight_integrity / total_weight)
)

if integrity_penalty > 0:
    final_score = max(0, final_score - integrity_penalty)
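With hypothetical weights and scores, the weighted combination works out as follows (all numbers below are illustrative, not FairMatch defaults):

```python
# Illustrative weights
weight_skill, weight_github, weight_interview = 30, 25, 25
weight_experience, weight_integrity = 10, 10
total_weight = (weight_skill + weight_github + weight_interview
                + weight_experience + weight_integrity)  # 100

# Illustrative per-agent scores
skill_score, github_score, interview_score = 80, 70, 60
experience_score, integrity_score = 50, 90

final_score = (
    skill_score * (weight_skill / total_weight) +
    github_score * (weight_github / total_weight) +
    interview_score * (weight_interview / total_weight) +
    experience_score * (weight_experience / total_weight) +
    integrity_score * (weight_integrity / total_weight)
)
print(round(final_score, 2))  # -> 70.5

# A penalty from the Integrity Analyst is subtracted, floored at zero
integrity_penalty = 20
final_score = max(0, final_score - integrity_penalty)
print(round(final_score, 2))  # -> 50.5
```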

Agent response formats

Score-only responses

Simple agents like Interview Grader return a numeric score:
backend/ai_engine.py
resp_content = response.json().get('response', '50')
try:
    # Try to parse as JSON first
    if resp_content.strip().startswith('{'):
        resp_json = json.loads(resp_content)
        if 'score' in resp_json:
            return int(resp_json['score'])
    
    # Fallback to extracting digits
    return min(100, max(0, int(''.join(filter(str.isdigit, str(resp_content))))))
except Exception:
    return 50  # neutral default when the reply is unparseable
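Wrapped as a standalone helper (the function name is ours), the parsing logic behaves like this:

```python
import json

def parse_agent_score(resp_content: str) -> int:
    """Parse an agent reply into a 0-100 score, defaulting to 50."""
    try:
        # Try JSON first, e.g. {"score": 85}
        if resp_content.strip().startswith('{'):
            resp_json = json.loads(resp_content)
            if 'score' in resp_json:
                return int(resp_json['score'])
        # Fall back to extracting digits from free text
        return min(100, max(0, int(''.join(filter(str.isdigit, str(resp_content))))))
    except Exception:
        return 50  # neutral default when the reply is unparseable

print(parse_agent_score('{"score": 85}'))    # -> 85
print(parse_agent_score('The score is 72'))  # -> 72
print(parse_agent_score('no numbers here'))  # -> 50
```

Note that the digit-extraction fallback concatenates every digit in the reply, so a free-text answer like "72/100" would clamp to 100; these agents are expected to return a single number.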

Structured JSON responses

Complex agents like Decision Intelligence return rich structured data:
backend/ai_engine.py
resp_str = response.json().get('response', '{}')
if resp_str.startswith("```json"):
    resp_str = resp_str[7:-3]
return json.loads(resp_str)
All agent communication is resilient with fallback handling. If an agent is unavailable or returns invalid data, the system uses intelligent defaults to ensure evaluation continues.
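The fence-stripping step can likewise be wrapped with a fallback (the helper name is ours); returning an empty dict when parsing fails keeps the evaluation pipeline moving:

```python
import json

def parse_structured_response(resp_str: str) -> dict:
    """Parse a JSON reply, tolerating a Markdown ```json fence."""
    resp_str = resp_str.strip()
    if resp_str.startswith("```json"):
        resp_str = resp_str[7:]          # drop the opening fence
        resp_str = resp_str.rstrip("`")  # drop the closing backticks
    try:
        return json.loads(resp_str)
    except json.JSONDecodeError:
        return {}  # default: evaluation continues without this agent's data

print(parse_structured_response('```json\n{"penalty_score": 15}\n```'))
# -> {'penalty_score': 15}
```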

Benefits of multi-agent architecture

Specialization

Each agent focuses on a specific task, leading to higher accuracy and better results. The GitHub Analyst agent specializes in code analysis, while the Interview Grader focuses exclusively on evaluating technical communication.

Scalability

You can scale individual agents independently based on demand. If GitHub verification becomes a bottleneck, you can deploy additional GitHub Analyst agents without touching other components.

Transparency

Each agent’s contribution is tracked separately, making it easy to understand how the final score was calculated and debug issues.

Economic alignment

Agents are paid for their work using the x402 protocol, creating a marketplace where high-quality agents are economically rewarded.

Error handling

The orchestrator implements robust fallback mechanisms:
backend/ai_engine.py
def get_agent_score(agent_type: str, query_content: str) -> int:
    if not orchestrator:
        print("Orchestrator not initialized. Returning fallback score.")
        return 50

    try:
        # target_keyword is derived from agent_type, e.g. "github" -> "FairMatch GitHub Analyst"
        agents = orchestrator.search_agents_by_keyword(target_keyword)
        
        if not agents:
            print(f"No {agent_type} agent found on Zynd registry. Returning fallback score.")
            return 50
        
        # ... agent communication ...
        
    except Exception as e:
        print(f"Exception querying zynd network for {agent_type}: {e}")
        return 50

Next steps

Resume analysis

Learn how the Resume Analyst extracts structured data

GitHub verification

Understand GitHub profile verification

Interview grading

See how interview answers are evaluated

Integrity checks

Explore fraud detection mechanisms
