Overview

Candilyzer is an advanced candidate analyzer that performs strict, expert-level technical evaluations based on GitHub activity and LinkedIn profiles. It uses multiple specialized agents to analyze code quality, professional experience, and job fit.

Multi-Source Analysis

GitHub repositories and LinkedIn profiles

Strict Scoring

Professional-grade scoring out of 100

Dual Modes

Single candidate or multi-candidate comparison

AI Reasoning

ReasoningTools and ThinkingTools for decisions

Architecture Pattern

This application demonstrates the Single Agent with Multiple Tools pattern, where one agent is equipped with diverse tools for comprehensive analysis.

Agent Configuration

import os

from agno.agent import Agent
from agno.models.nebius import Nebius
from agno.tools.github import GithubTools
from agno.tools.exa import ExaTools
from agno.tools.thinking import ThinkingTools
from agno.tools.reasoning import ReasoningTools

# API keys (e.g. from environment variables)
api_key = os.getenv("NEBIUS_API_KEY")
github_api_key = os.getenv("GITHUB_API_KEY")
exa_api_key = os.getenv("EXA_API_KEY")

# Multi-Candidate Analyzer Agent
agent = Agent(
    description="Strict candidate evaluator for technical hiring",
    instructions="""
        Evaluate GitHub candidates with zero assumptions:
        1. Analyze code quality, commit frequency, and project structure
        2. Assess skills shown through actual repositories
        3. Match with required job role skills
        4. Provide strict scoring (0-100) with justification
        5. No fluff - only data-backed assessments
    """,
    model=Nebius(
        id="deepseek-ai/DeepSeek-R1",
        api_key=api_key
    ),
    name="StrictCandidateEvaluator",
    tools=[
        ThinkingTools(
            think=True,
            instructions="Strict GitHub candidate evaluation"
        ),
        GithubTools(access_token=github_api_key),
        ExaTools(
            api_key=exa_api_key,
            include_domains=["github.com"],
            type="keyword"
        ),
        ReasoningTools(add_instructions=True)
    ],
    markdown=True,
    show_tool_calls=True
)

Key Multi-Tool Patterns

1. Thinking Before Acting

ThinkingTools(
    think=True,  # Agent thinks through evaluation strategy first
    instructions="Strict GitHub candidate evaluation"
)
The agent plans its analysis approach before executing tool calls.

2. GitHub Analysis

GithubTools(access_token=github_api_key)

# Agent can:
# - Fetch user repositories
# - Analyze commit history
# - Review code quality
# - Check contribution patterns
# - Examine project complexity

3. LinkedIn Data Extraction

ExaTools(
    api_key=exa_api_key,
    include_domains=["linkedin.com", "github.com"],
    type="keyword",
    text_length_limit=2000,
    show_results=True
)

# Extracts:
# - Job titles and descriptions
# - Professional experience
# - Skills and endorsements
# - Career progression

4. Reasoned Decision Making

ReasoningTools(add_instructions=True)

# Provides:
# - Step-by-step reasoning
# - Evidence-based conclusions
# - Clear justifications for scores
# - Transparent evaluation process

Two Analysis Modes

Multi-Candidate Mode

Compare multiple candidates for the same role:
# User provides:
# - Multiple GitHub usernames (one per line)
# - Target job role

usernames = [u.strip() for u in github_usernames.split("\n") if u.strip()]

query = f"""Evaluate GitHub candidates for role '{job_role}': 
            {', '.join(usernames)}"""

stream = agent.run(query, stream=True)

# Agent performs comparative analysis:
# 1. Fetches each candidate's GitHub data
# 2. Analyzes code quality and skills
# 3. Scores each candidate (0-100)
# 4. Provides ranked recommendations
Output includes:
  • Side-by-side comparison
  • Individual scores and strengths
  • Ranking with justifications
  • Hire/No-hire recommendations
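The single-candidate mode follows the same pattern, using the single-candidate prompts loaded from hiring_prompts.yaml. A minimal sketch, assuming the agent is configured as shown earlier; the input values here are hypothetical placeholders:

```python
# Single-Candidate Mode: deep-dive on one profile (sketch; the exact
# prompt wording lives in hiring_prompts.yaml in the real app)
github_username = "octocat"       # hypothetical user input
job_role = "Backend Engineer"     # hypothetical user input

query = (
    f"Evaluate GitHub candidate '{github_username}' "
    f"for the role '{job_role}'. Score out of 100 with justification."
)

# stream = agent.run(query, stream=True)  # same streaming loop as above
```

The output for a single candidate is a full evaluation report rather than a ranking: per-criterion scores, evidence, and a final score out of 100.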

Evaluation Logic

The agent follows strict evaluation rules:
evaluation_criteria = """
    GitHub Analysis:
    - Repository quality and structure (20 points)
    - Commit frequency and consistency (15 points)
    - Code complexity and patterns (20 points)
    - Technology stack relevance (15 points)
    
    LinkedIn Analysis:
    - Relevant job experience (15 points)
    - Skills match with role (10 points)
    - Career progression (5 points)
    
    Strict Rules:
    - No assumptions about missing data
    - Zero points for unavailable information
    - Evidence required for every score
    - Clear justification for each assessment
"""
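The category weights in the rubric above are designed to sum to exactly 100 (70 points from GitHub, 30 from LinkedIn), which can be sanity-checked directly:

```python
# Point budget taken from the evaluation rubric above
github_points = {
    "repository_quality": 20,
    "commit_consistency": 15,
    "code_complexity": 20,
    "stack_relevance": 15,
}
linkedin_points = {
    "job_experience": 15,
    "skills_match": 10,
    "career_progression": 5,
}

total = sum(github_points.values()) + sum(linkedin_points.values())
print(total)  # 100
```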

Streamlit Implementation

Dynamic Agent Creation

import streamlit as st

# Session state for API keys
for key in ["Nebius_api_key", "model_id", "github_api_key", "exa_api_key"]:
    if key not in st.session_state:
        st.session_state[key] = ""

# Create agent with user-provided credentials
if st.button("Analyze Candidates"):
    if not all([st.session_state.Nebius_api_key, 
                st.session_state.github_api_key, 
                st.session_state.exa_api_key]):
        st.error("Please enter all API keys")
    else:
        agent = Agent(
            model=Nebius(
                id=st.session_state.model_id,
                api_key=st.session_state.Nebius_api_key
            ),
            tools=[...],  # As shown above
        )

Streaming Results

with st.spinner("Running analysis..."):
    output = ""
    block = st.empty()
    
    for chunk in agent.run(query, stream=True):
        if hasattr(chunk, "content") and isinstance(chunk.content, str):
            output += chunk.content
            block.markdown(output, unsafe_allow_html=True)

# Extract and display score
import re
match = re.search(r"\b([1-9]?\d|100)/100\b", output)
if match:
    score = int(match.group(1))
    st.success(f"Candidate Score: {score}/100")
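The extraction pattern accepts any integer score from 0 to 100 followed by "/100". A quick check against a few sample output strings (the samples are illustrative, not real agent output):

```python
import re

# Same pattern as used in the score-extraction snippet above
SCORE_RE = re.compile(r"\b([1-9]?\d|100)/100\b")

samples = {
    "Final score: 87/100 overall": 87,
    "Score: 100/100": 100,
    "Rated 5/100 due to sparse commit history": 5,
}
for text, expected in samples.items():
    m = SCORE_RE.search(text)
    assert m is not None and int(m.group(1)) == expected
```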

YAML Prompt Management

import yaml

@st.cache_data
def load_yaml(file_path):
    with open(file_path, "r", encoding="utf-8") as file:
        return yaml.safe_load(file)

data = load_yaml("hiring_prompts.yaml")

# Load role-specific prompts
description_multi = data.get("description_for_multi_candidates")
instructions_multi = data.get("instructions_for_multi_candidates")
description_single = data.get("description_for_single_candidate")
instructions_single = data.get("instructions_for_single_candidate")

# Use in agent configuration
agent = Agent(
    description=description_single,
    instructions=instructions_single,
    # ... other config
)
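The contents of hiring_prompts.yaml are not shown above. A plausible minimal shape can be sketched; only the key names are taken from the `data.get(...)` calls, while the prompt text itself is illustrative:

```python
import yaml

# Hypothetical hiring_prompts.yaml contents; key names match the
# loading code above, prompt text is illustrative only.
sample = """
description_for_single_candidate: Strict evaluator for one candidate
instructions_for_single_candidate: |
  Analyze only the candidate's GitHub and LinkedIn evidence.
description_for_multi_candidates: Comparative evaluator for several candidates
instructions_for_multi_candidates: |
  Score each candidate 0-100 and rank them with justification.
"""

data = yaml.safe_load(sample)
print(data["description_for_single_candidate"])
```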

Advanced Features

Zero-Assumption Analysis

instructions = """
    STRICT RULES:
    - If no experience found, say "No experience found"
    - If information unavailable, say "Not available"
    - Do NOT make up or assume any information
    - Evidence required for every statement
    - Only analyze what is explicitly present
"""

Tool Call Control

instructions = """
    TOOL USAGE: Call a tool once to gather the information you need.
    Once a tool has returned data, do NOT call it again; use the
    returned information to complete the analysis.
"""

Configuration

Required API Keys

# .env or Streamlit sidebar
NEBIUS_API_KEY=your_nebius_key
GITHUB_API_KEY=your_github_token
EXA_API_KEY=your_exa_key
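When running outside Streamlit, the keys can be read from the environment and validated up front. A minimal sketch; the variable names match the `.env` entries above, and `load_api_keys` is a hypothetical helper:

```python
import os

REQUIRED_KEYS = ["NEBIUS_API_KEY", "GITHUB_API_KEY", "EXA_API_KEY"]

def load_api_keys() -> dict:
    """Read the required keys from the environment, failing fast on gaps."""
    keys = {name: os.getenv(name) for name in REQUIRED_KEYS}
    missing = [name for name, value in keys.items() if not value]
    if missing:
        raise RuntimeError(f"Missing API keys: {', '.join(missing)}")
    return keys
```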

Model Selection

# For reasoning-heavy tasks
model = Nebius(id="deepseek-ai/DeepSeek-R1")

# For general analysis
model = Nebius(id="meta-llama/Llama-3.3-70B-Instruct")

Use Cases

Technical Screening

Filter candidates based on actual code quality and GitHub activity

Candidate Comparison

Compare multiple candidates side-by-side for hiring decisions

Skill Verification

Verify claimed skills against actual repository evidence

Experience Assessment

Evaluate professional background and career progression

Project Structure

candidate_analyser/
├── main.py                 # Streamlit app with agent logic
├── hiring_prompts.yaml     # Role-specific prompts and instructions
├── requirements.txt        # Dependencies
└── README.md              # Documentation

Related Examples

Job Finder Agent

Multi-agent job matching with LinkedIn analysis

Deep Researcher

Multi-stage research workflow

Learn More

Agno Framework

Learn about Agno tools and workflows

Multi-Agent Patterns

Best practices for multi-agent systems

Advanced Agents

More advanced agent examples
