The Integrity Analyst agent is FairMatch’s fraud detection system. It’s only spawned when the orchestrator detects suspicious discrepancies between a candidate’s GitHub score and interview performance. This adaptive approach saves costs while ensuring thorough investigation of anomalous profiles.
- **Score discrepancy > 30 points:** Large gap between GitHub and interview scores
- **GitHub link present:** Only investigate if there's a GitHub profile to analyze
- **Automatic spawn:** No manual intervention required
Adaptive spawning means the Integrity Analyst isn’t consulted for every evaluation. This reduces costs and latency while ensuring thorough investigation when red flags appear.
The agent uses Pydantic models to ensure structured JSON responses:
backend/agents/integrity_agent.py
```python
from langchain_core.output_parsers import JsonOutputParser
from pydantic import BaseModel, Field

class IntegrityOutput(BaseModel):
    fraud_probability: int = Field(description="0-100 probability that the profile is faked/purchased.")
    investigation_notes: str = Field(description="Detailed reasoning explaining the discrepancy.")
    penalty_score: int = Field(description="0-100 penalty score to apply to the candidate's final evaluation (higher is worse).")

parser = JsonOutputParser(pydantic_object=IntegrityOutput)
```
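The parser constrains the model to a flat JSON object matching this schema. For illustration only (invented values), a well-formed response parses with just the standard library:

```python
import json

# Illustrative response matching the IntegrityOutput schema (values invented).
raw_response = """{
    "fraud_probability": 72,
    "investigation_notes": "Commit history is bursty; repos show no review activity.",
    "penalty_score": 45
}"""

result = json.loads(raw_response)
assert set(result) == {"fraud_probability", "investigation_notes", "penalty_score"}
assert 0 <= result["fraud_probability"] <= 100
```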
The agent receives detailed context about the anomaly:
backend/agents/integrity_agent.py
```python
from langchain_core.prompts import ChatPromptTemplate

system_prompt = """You are a specialized Fraud Detective Agent for the FairMatch platform.

The primary Orchestrator agent has flagged this candidate due to a massive mismatch between their verified GitHub score and their technical Interview score.

Your job is to analyze the provided data and determine if the candidate likely purchased their GitHub account, or if they simply choked during the interview. Output ONLY a JSON response format.

{format_instructions}"""

prompt = ChatPromptTemplate.from_messages([
    ("system", system_prompt),
    ("human", "Analyze the following candidate discrepancy:\n\n{input}")
])

chain = prompt | llm | parser
```
The prompt frames two possibilities:
- **Profile fraud:** The candidate purchased a GitHub account
- **Interview anxiety:** A legitimate candidate performed poorly due to nerves
The prompt explicitly asks the agent to distinguish between fraud and legitimate underperformance. This nuance prevents false positives that could unfairly penalize nervous but qualified candidates.
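For illustration, the `{input}` passed to the chain might be a serialized discrepancy summary along these lines (the field names here are hypothetical; the real payload is assembled by the orchestrator):

```python
import json

# Hypothetical anomaly context; the orchestrator defines the real format.
anomaly = {
    "github_score": 92,
    "interview_score": 38,
    "github_url": "https://github.com/example-candidate",
    "interview_notes": "Could not explain design decisions in their own flagship repo.",
}

payload = json.dumps(anomaly, indent=2)
# chain.invoke({"input": payload}) would return a dict shaped like IntegrityOutput.
```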
When the LLM is unavailable, the agent applies a conservative fraud assumption:
backend/agents/integrity_agent.py
```python
fallback = {
    "fraud_probability": 85,
    "investigation_notes": (
        "High discrepancy detected between validated GitHub commits and "
        "interview depth. Anomalous pattern suggests the candidate may not "
        "have authored the linked repositories."
    ),
    "penalty_score": 60
}
```
The fallback is intentionally conservative (85% fraud probability, 60-point penalty) because the agent is only triggered for extreme discrepancies. If there’s a 30+ point gap and the LLM can’t analyze it, it’s safer to assume fraud than to allow a potentially purchased profile through.
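One way this fallback could be wired in, as a sketch (`run_integrity_check` is a hypothetical wrapper; the actual error handling in integrity_agent.py may differ):

```python
# Conservative fallback, matching the values shown above.
FALLBACK = {
    "fraud_probability": 85,
    "investigation_notes": ("High discrepancy detected between validated "
                            "GitHub commits and interview depth."),
    "penalty_score": 60,
}

def run_integrity_check(invoke, payload: str) -> dict:
    """Call the LLM chain; assume fraud conservatively if it is unreachable."""
    try:
        return invoke({"input": payload})
    except Exception:
        return FALLBACK
```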
Penalties are applied after weighted scoring, ensuring they have maximum impact. A candidate with an 85/100 base score and a 60-point penalty drops to 25/100, effectively removing them from consideration.
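The arithmetic from that example, with a floor at zero (the floor is an assumption; the source does not say how a penalty larger than the base score is handled):

```python
def apply_penalty(base_score: int, penalty: int) -> int:
    # Penalty is subtracted after weighted scoring; clamp at 0 (assumed floor).
    return max(0, base_score - penalty)

apply_penalty(85, 60)  # 25 - the 85/100 candidate drops out of consideration
```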
Consider a legitimate but nervous candidate:

- Moderate GitHub score (50-70) with consistent activity
- Low interview score due to communication issues, not knowledge gaps
- Resume and GitHub align well
- Example penalty: 10-20 points (reduced penalty for legitimate candidates)
The Integrity Analyst is prompted to distinguish between fraud and interview anxiety. Candidates with consistent profiles but poor interviews receive reduced penalties, while those with clear evidence of purchased accounts face severe score reductions.
The entry point refuses to start without a valid API key, then runs the webhook server until interrupted:

backend/agents/integrity_agent.py

```python
if __name__ == "__main__":
    if not os.environ.get("ZYND_API_KEY") or os.environ.get("ZYND_API_KEY") == "REPLACE_ME_WITH_ZYND_API_KEY":
        print("ERROR: ZYND_API_KEY is not set. Please set it in .env")
        sys.exit(1)

    print(f"FairMatch Integrity Analyst Agent running at {agent.webhook_url}")
    print(f"Price: {agent_config.price} per request")

    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        print("Shutting down...")
```
The adaptive spawning strategy significantly reduces costs:
| Scenario | Integrity Agent Called? | Cost Impact |
|----------|-------------------------|-------------|
| Normal candidate (score difference < 30) | No | $0 |
| Suspicious candidate (score difference > 30) | Yes | LLM call + x402 payment |
| Percentage of candidates investigated | ~5-10% | 90-95% cost savings |
In production, only 5-10% of candidates trigger fraud investigation. This means the Integrity Analyst provides comprehensive fraud detection while adding minimal cost to the evaluation pipeline.
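A back-of-the-envelope check of the savings figure, using a hypothetical per-investigation cost (the actual LLM + x402 cost is not given here):

```python
# Hypothetical cost per investigation, in cents (assumption for illustration).
cost_per_investigation = 5
candidates = 1000

always_on = candidates * cost_per_investigation         # investigate everyone
adaptive = (candidates // 10) * cost_per_investigation  # ~10% of candidates flagged

savings = 1 - adaptive / always_on
print(f"savings: {savings:.0%}")  # 90%, the low end of the 90-95% range
```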