The Resume Analyst agent is a specialized AI agent that processes raw resume text and extracts structured information including name, email, experience, skills, and professional summary. It runs independently on port 5006 and is discoverable on the ZyndAI network.
The agent uses a carefully designed system prompt to ensure structured JSON output:
backend/agents/resume_agent.py
```python
prompt = ChatPromptTemplate.from_messages([
    ("system", """You are an expert HR data specialist. Analyze the provided resume text
and extract the following information in strict JSON format:
{
  "name": "Full name",
  "email": "Email address",
  "experience": integer (total years of experience),
  "skills": ["skill1", "skill2"],
  "summary": "Brief 1-sentence summary"
}
If any field is missing, return an empty string or 0. Return ONLY the JSON."""),
    ("human", "{input}")
])

chain = prompt | llm | StrOutputParser()
```
The fallback experience is set to 3 years rather than 0 to provide a realistic baseline when extraction fails. This prevents candidates from being unfairly penalized due to technical issues.
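This fallback can be applied when merging the LLM's reply into the structured result. A minimal sketch of the idea, assuming a hypothetical helper `parse_resume_json` (not part of the source code):

```python
import json

def parse_resume_json(raw: str) -> dict:
    # Hypothetical helper: parse the LLM's JSON reply, falling back to
    # safe defaults if the output is not valid JSON. Experience defaults
    # to 3 years so a failed extraction does not zero out a candidate.
    defaults = {"name": "", "email": "", "experience": 3,
                "skills": [], "summary": ""}
    try:
        data = json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return defaults
    # Merge parsed fields over the defaults so missing keys stay safe.
    return {**defaults, **data}

print(parse_resume_json('{"name": "Ada", "experience": 7}')["experience"])  # 7
print(parse_resume_json("not json")["experience"])  # 3 (fallback)
```

Merging over a defaults dict also guards against partially valid replies where some keys are simply absent.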
When the input is a LinkedIn URL instead of raw resume text, the agent attempts to process whatever context is available:
backend/agents/resume_agent.py
```python
# If it's a URL, we might want to "scrape" it first,
# but for LinkedIn it's hard. We'll use the LLM to process
# whatever text we have or just infer from the URL/Context.
prompt_input = message.content
result = chain.invoke({"input": prompt_input})
```
LinkedIn scraping is challenging because profile pages sit behind authentication. The agent does its best with whatever text is available and relies on other agents (like the GitHub Analyst) to fill in the gaps.
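Distinguishing a bare profile link from pasted resume text can be done with a simple heuristic. A sketch under that assumption (the function name is hypothetical, not from the source):

```python
def looks_like_linkedin_url(text: str) -> bool:
    # Hypothetical check: treat an input that is just a LinkedIn
    # profile URL as a link rather than raw resume text.
    text = text.strip()
    return text.startswith(("http://", "https://")) and "linkedin.com/in/" in text

print(looks_like_linkedin_url("https://www.linkedin.com/in/example"))  # True
print(looks_like_linkedin_url("Jane Doe\nSoftware Engineer, 5 years"))  # False
```

A check like this would let the agent branch: pass raw text straight to the chain, or annotate a URL-only input so downstream agents know the resume data is incomplete.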
```python
if __name__ == "__main__":
    if not os.environ.get("ZYND_API_KEY"):
        print("ERROR: ZYND_API_KEY not set")
        sys.exit(1)
    print(f"FairMatch Resume Analyst Agent running at {agent.webhook_url}")
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        print("Shutting down...")
```
The sleep loop keeps the process alive so the agent can continue listening for incoming requests from the orchestrator until it is interrupted.