The Interview Grader agent evaluates candidate interview answers for technical depth, clarity, and problem-solving skills. It uses Google’s Gemini model as the primary grading mechanism, with a deterministic fallback when the LLM is unavailable.
The temperature is set to 0.1 rather than 0, allowing slight variation in scoring while keeping results consistent. This avoids the model collapsing to the exact same score for every answer of similar quality.
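A minimal sketch of how the model could be configured with this temperature (assuming the `langchain-google-genai` wrapper; the model name is illustrative, not confirmed by the source):

```python
from langchain_google_genai import ChatGoogleGenerativeAI

# Sketch, assuming the langchain-google-genai package; "gemini-1.5-flash"
# is an illustrative model name. temperature=0.1 keeps grading
# near-deterministic while permitting slight variation between runs.
llm = ChatGoogleGenerativeAI(
    model="gemini-1.5-flash",
    temperature=0.1,
)
```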
The agent uses a focused system prompt to ensure numeric output:
backend/agents/interview_agent.py
```python
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an expert technical interviewer. Evaluate the candidate's answers based on clarity, technical depth, and problem-solving skills. Return ONLY a single integer score between 0 and 100."),
    ("human", "{input}"),
])
chain = prompt | llm | StrOutputParser()
```
The deterministic fallback ensures the system continues functioning even during LLM outages. While less sophisticated than LLM grading, it provides reasonable approximations based on answer length and technical vocabulary.
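The fallback described above could be sketched roughly as follows. The keyword list and point weights are illustrative assumptions, not the project's actual values:

```python
# Hypothetical sketch of the deterministic fallback: score from answer
# length plus technical vocabulary. Keywords and weights are assumptions.
TECH_KEYWORDS = {
    "api", "database", "cache", "docker", "kubernetes", "index",
    "latency", "microservice", "queue", "redis", "async", "test",
}

def fallback_score(answer: str) -> int:
    """Approximate a 0-100 score without calling the LLM."""
    words = answer.lower().split()
    if not words:
        return 0
    # Length component: saturates around 150 words, worth up to 60 points.
    length_points = min(60, len(words) * 60 // 150)
    # Vocabulary component: each distinct technical term adds 8 points,
    # capped at 40.
    hits = len(TECH_KEYWORDS & {w.strip(".,") for w in words})
    vocab_points = min(40, hits * 8)
    return min(100, length_points + vocab_points)
```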
If a candidate hasn’t completed an interview yet, the system estimates a score based on resume length. This allows partial evaluations while interviews are being scheduled.
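The pre-interview estimate might look like the sketch below; the scaling constants and the 70-point cap are assumptions for illustration:

```python
# Illustrative sketch of the resume-length heuristic. The 400-word
# reference length and 70-point cap are assumed values.
def estimate_from_resume(resume_text: str) -> int:
    """Provisional score used before an interview is completed."""
    word_count = len(resume_text.split())
    # Cap at 70 so a provisional estimate never outranks a strong
    # graded interview.
    return min(70, word_count * 70 // 400)
```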
The orchestrator robustly extracts numeric scores from agent responses:
backend/ai_engine.py
```python
resp_content = response.json().get('response', '50')
try:
    # 1. Try to parse as JSON first (new format)
    if isinstance(resp_content, str) and resp_content.strip().startswith('{'):
        resp_json = json.loads(resp_content)
        if isinstance(resp_json, dict) and 'score' in resp_json:
            return int(resp_json['score'])
    # 2. Fall back to extracting digits, clamped to the 0-100 range
    return min(100, max(0, int(''.join(filter(str.isdigit, str(resp_content))))))
except Exception:
    return 50
```
Large discrepancies between interview performance and GitHub portfolio quality trigger adaptive fraud investigation.
A 30-point threshold was chosen based on real-world data. Legitimate candidates may show some variance due to interview nerves, but differences larger than 30 points often indicate profile fraud.
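The threshold check itself is simple; a sketch of the comparison described above (function and constant names are hypothetical):

```python
FRAUD_THRESHOLD = 30  # points, per the calibration described above

def needs_fraud_review(interview_score: int, github_score: int) -> bool:
    """Flag candidates whose interview and portfolio scores diverge
    by more than the threshold."""
    return abs(interview_score - github_score) > FRAUD_THRESHOLD
```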
```python
if __name__ == "__main__":
    if not os.environ.get("ZYND_API_KEY") or os.environ.get("ZYND_API_KEY") == "REPLACE_ME_WITH_ZYND_API_KEY":
        print("ERROR: ZYND_API_KEY is not set. Please set it in .env")
        sys.exit(1)
    print(f"FairMatch Interview Grader Agent running at {agent.webhook_url}")
    print(f"Price: {agent_config.price} per request")
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        print("Shutting down...")
```
Without GEMINI_API_KEY, the agent automatically switches to deterministic fallback mode. The system continues functioning with reduced accuracy but 100% uptime.
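The switch-over can be sketched as an environment check at grading time. `grade_with_gemini` and `fallback_score` are hypothetical helper names standing in for the two paths:

```python
import os

def grade_with_gemini(answer: str) -> int:
    # Placeholder for the LLM call (hypothetical helper name).
    raise NotImplementedError

def fallback_score(answer: str) -> int:
    # Minimal stand-in for the deterministic scorer.
    return min(100, len(answer.split()))

def grade(answer: str) -> int:
    """Use the LLM when GEMINI_API_KEY is set, else the deterministic
    fallback, so grading never hard-fails on a missing key."""
    if os.environ.get("GEMINI_API_KEY"):
        return grade_with_gemini(answer)
    return fallback_score(answer)
```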
Example of an answer the agent grades highly:

"I implemented a microservices architecture using Node.js and Docker. Each service communicates via REST APIs and uses Redis for caching. I optimized database queries with indexes and implemented circuit breaker patterns for fault tolerance. The system handles 10k requests per second."

Reasoning: Detailed technical answer with specific frameworks, design patterns, and quantified results.