Overview
FairMatch AI uses a multi-agent architecture in which specialized AI agents collaborate to evaluate candidates. Instead of a monolithic AI system, you get independent agents that each handle a specific evaluation task, communicate over the ZyndAI network, and are paid for their work through the x402 micropayment protocol.
Architecture components
Orchestrator agent
The FairMatch Orchestrator coordinates all evaluation tasks and delegates work to specialized agents. It runs as the central hub of the system (implemented in backend/ai_engine.py).
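The orchestrator's role can be sketched as a router that maps each evaluation task to the keyword used to discover the matching agent. This is a minimal illustration; the class and method names are assumptions, not the actual backend/ai_engine.py API.

```python
from dataclasses import dataclass, field

@dataclass
class FairMatchOrchestrator:
    # Maps an evaluation task to the discovery keyword for its agent.
    # The keyword values are illustrative.
    task_keywords: dict = field(default_factory=lambda: {
        "resume": "resume-analyst",
        "github": "github-analyst",
        "interview": "interview-grader",
        "integrity": "integrity-analyst",
        "decision": "decision-intelligence",
    })

    def delegate(self, task: str, payload: dict) -> dict:
        # In the real system this would send an AgentMessage over the
        # ZyndAI network; here we only return the routing decision.
        keyword = self.task_keywords[task]
        return {"routed_to": keyword, "payload": payload}

orchestrator = FairMatchOrchestrator()
result = orchestrator.delegate("github", {"url": "https://github.com/example"})
```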
Specialized agents
FairMatch deploys five specialized agents, each handling a different evaluation aspect:
| Agent | Port | Purpose |
|---|---|---|
| Resume Analyst | 5006 | Extracts structured data from resumes |
| GitHub Analyst | 5001 | Verifies GitHub profiles and analyzes code quality |
| Interview Grader | 5002 | Evaluates technical interview answers |
| Integrity Analyst | 5004 | Detects fraud and profile inconsistencies |
| Decision Intelligence | 5003 | Performs bias analysis and final recommendations |
Agent discovery and communication
Finding agents on the network
The orchestrator searches for agents by keyword on the ZyndAI registry.
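Keyword lookup against a registry can be sketched as a simple match over registered agent entries. The registry record shape and the helper name are assumptions; the real ZyndAI SDK may expose a different search API.

```python
def find_agent(registry, keyword):
    """Return the first registered agent whose keyword list matches."""
    for agent in registry:
        if keyword in agent.get("keywords", []):
            return agent
    return None  # caller falls back to defaults if nothing matches

# Illustrative registry entries mirroring the agent table above.
registry = [
    {"name": "GitHub Analyst", "port": 5001, "keywords": ["github", "code-quality"]},
    {"name": "Interview Grader", "port": 5002, "keywords": ["interview", "grading"]},
]
agent = find_agent(registry, "github")
```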
Sending messages between agents
Agents communicate using the AgentMessage protocol.
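An AgentMessage envelope might look like the following. The field names are assumptions inferred from the protocol description, not the actual ZyndAI schema.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AgentMessage:
    sender: str      # identifier of the sending agent
    recipient: str   # identifier of the target agent
    task: str        # what the recipient should do
    payload: dict    # task-specific data

    def to_json(self) -> str:
        """Serialize the message for transport over the network."""
        return json.dumps(asdict(self))

msg = AgentMessage(
    sender="fairmatch-orchestrator",
    recipient="github-analyst",
    task="verify_profile",
    payload={"github_url": "https://github.com/example"},
)
wire = msg.to_json()
```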
The x402 micropayment protocol automatically handles payments between agents. When the orchestrator requests work from a specialized agent, the payment is processed transparently in the background.
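Conceptually, every delegated request follows a pay-then-call pattern. This sketch abstracts away all x402 settlement details; the wallet structure, price, and function names are purely illustrative.

```python
def paid_request(agent_call, wallet, price, *args):
    """Debit the caller's wallet, then forward the call to the agent.

    In the real system the x402 protocol settles this transparently;
    here the debit is modeled as a simple balance update.
    """
    if wallet["balance"] < price:
        raise RuntimeError("insufficient funds for agent call")
    wallet["balance"] -= price
    return agent_call(*args)

wallet = {"balance": 1.00}
# Stand-in for a real GitHub Analyst call that returns a score.
score = paid_request(lambda url: 87, wallet, 0.05, "https://github.com/example")
```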
Evaluation workflow
Here’s how the multi-agent system evaluates a candidate:
1. GitHub verification
The orchestrator sends the candidate’s GitHub URL to the GitHub Analyst agent.
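A request to the GitHub Analyst (port 5001, per the table above) might be assembled like this. The endpoint path and body shape are assumptions for illustration.

```python
def build_github_request(candidate):
    """Build the verification request sent to the GitHub Analyst agent."""
    return {
        "agent": "github-analyst",
        "endpoint": "http://localhost:5001/evaluate",  # port from the agent table
        "body": {"github_url": candidate["github_url"]},
    }

req = build_github_request({"github_url": "https://github.com/example"})
```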
2. Interview grading
Interview answers are sent to the Interview Grader agent.
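Grading can be sketched as sending each question/answer pair to the agent and averaging the returned scores. The per-answer loop and averaging are assumptions about how the grader is invoked; only the numeric-score reply is stated in this document.

```python
def grade_answers(answers, send):
    """Send each Q/A pair to the Interview Grader and average the scores.

    `send` stands in for the network call to the agent on port 5002.
    """
    scores = [send({"question": a["q"], "answer": a["a"]}) for a in answers]
    return sum(scores) / len(scores)

avg = grade_answers(
    [{"q": "Explain TCP handshakes", "a": "..."},
     {"q": "What is a mutex?", "a": "..."}],
    send=lambda body: 80,  # stand-in agent that always returns 80
)
```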
3. Adaptive fraud detection
If there’s a significant discrepancy between the GitHub and interview scores, the Integrity Analyst agent is triggered automatically.
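The adaptive trigger reduces to a discrepancy check between the two scores. The 25-point threshold below is an assumption; the actual value in backend/ai_engine.py is not specified here.

```python
DISCREPANCY_THRESHOLD = 25  # illustrative cutoff, not the production value

def needs_integrity_check(github_score, interview_score):
    """Trigger the Integrity Analyst only when scores diverge sharply."""
    return abs(github_score - interview_score) > DISCREPANCY_THRESHOLD

# A strong GitHub profile with a weak interview is suspicious; close
# scores skip the extra (paid) integrity call entirely.
flagged = needs_integrity_check(90, 40)
clean = needs_integrity_check(80, 70)
```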
4. Decision intelligence
The Decision Intelligence agent receives all collected data and performs a comprehensive analysis, including bias detection.
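Assembling the full evaluation context for the Decision Intelligence agent (port 5003) might look like the following. All field names are illustrative assumptions.

```python
def build_decision_payload(resume, github_score, interview_score, integrity_flags):
    """Bundle every upstream agent's output for the final analysis."""
    return {
        "resume_data": resume,
        "github_score": github_score,
        "interview_score": interview_score,
        "integrity_flags": integrity_flags,
        "requested": ["bias_analysis", "final_recommendation"],
    }

payload = build_decision_payload(
    resume={"name": "Example Candidate", "skills": ["python"]},
    github_score=85,
    interview_score=78,
    integrity_flags=[],
)
```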
5. Final scoring
The orchestrator combines all agent outputs using weighted scoring.
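Weighted scoring reduces to a dot product of per-agent scores and their weights. The weight values below are assumptions chosen for illustration; the real weights live in backend/ai_engine.py.

```python
# Illustrative weights summing to 1.0; not the production values.
WEIGHTS = {"github": 0.35, "interview": 0.35, "resume": 0.20, "integrity": 0.10}

def final_score(scores):
    """Combine per-agent scores (0-100) into one weighted total."""
    return round(sum(WEIGHTS[k] * scores[k] for k in WEIGHTS), 2)

total = final_score({"github": 85, "interview": 78, "resume": 90, "integrity": 100})
```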
Agent response formats
Score-only responses
Simple agents such as the Interview Grader return a bare numeric score.
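Parsing a score-only reply can be sketched with a fallback for non-numeric responses, matching the resilience behavior described below. The default value of 50 and the 0-100 clamp are assumptions.

```python
def parse_score(raw, default=50.0):
    """Parse a score-only agent reply, falling back to a neutral default."""
    try:
        score = float(raw.strip())
    except (ValueError, AttributeError):
        return default          # unparseable reply: use the neutral default
    return min(max(score, 0.0), 100.0)  # clamp to the 0-100 range

ok = parse_score("87")
bad = parse_score("n/a")
clamped = parse_score("150")
```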
Structured JSON responses
Complex agents such as Decision Intelligence return rich structured data.
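A structured reply might be handled like this; the JSON field names are illustrative assumptions, not the agent's documented schema.

```python
import json

def parse_decision(raw):
    """Extract the fields the orchestrator needs, with safe defaults."""
    data = json.loads(raw)
    return {
        "recommendation": data.get("recommendation", "review"),
        "bias_flags": data.get("bias_flags", []),
        "confidence": data.get("confidence", 0.0),
    }

reply = '{"recommendation": "hire", "bias_flags": [], "confidence": 0.91}'
decision = parse_decision(reply)
```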
All agent communication includes fallback handling. If an agent is unavailable or returns invalid data, the system falls back to intelligent defaults so the evaluation can continue.
Benefits of multi-agent architecture
Specialization
Each agent focuses on a specific task, leading to higher accuracy and better results. The GitHub Analyst specializes in code analysis, while the Interview Grader focuses exclusively on evaluating technical communication.
Scalability
You can scale individual agents independently based on demand. If GitHub verification becomes a bottleneck, you can deploy additional GitHub Analyst agents without touching other components.
Transparency
Each agent’s contribution is tracked separately, making it easy to understand how the final score was calculated and to debug issues.
Economic alignment
Agents are paid for their work through the x402 protocol, creating a marketplace where high-quality agents are economically rewarded.
Error handling
The orchestrator implements robust fallback mechanisms.
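The fallback pattern can be sketched as a wrapper around every agent call: an unreachable agent or a malformed reply degrades to a sensible default instead of aborting the evaluation. The wrapper name and default value are assumptions.

```python
def call_with_fallback(agent_call, payload, default):
    """Invoke an agent, returning `default` on failure or an empty reply."""
    try:
        result = agent_call(payload)
    except Exception:
        return default  # network error, timeout, or agent crash
    if result is None:
        return default  # agent responded but produced nothing usable
    return result

def unreachable_agent(payload):
    raise TimeoutError("agent did not respond")

# Evaluation continues with a neutral score instead of crashing.
score = call_with_fallback(unreachable_agent, {}, 50)
```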
Next steps
- Resume analysis: learn how the Resume Analyst extracts structured data
- GitHub verification: understand GitHub profile verification
- Interview grading: see how interview answers are evaluated
- Integrity checks: explore fraud detection mechanisms