Claude receives only the PR diff + file list, then calls tools to fetch exactly the context it needs.
```
┌────────────────────────────────────────────────────────────┐
│                    AGENTIC REVIEW LOOP                     │
│                                                            │
│ 1. Claude receives: PR diff + file list                    │
│ 2. Claude calls tool (e.g., read_file, search_memory)      │
│ 3. Tool executor fetches data (GitHub, Neo4j, Mem0, MCP)   │
│ 4. Result returned to Claude                               │
│ 5. Claude calls another tool OR writes final review        │
│ 6. Loop continues (max 8 rounds)                           │
└────────────────────────────────────────────────────────────┘
```
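The loop above can be sketched in a few lines of Python. Everything here is illustrative: the model callable, the message shapes, and the tool dispatch stand in for the real Claude API and tool executor.

```python
# Minimal sketch of the agentic review loop; names and message shapes are
# illustrative, not the real Claude API.
MAX_ROUNDS = 8

def run_review_loop(model, tools, pr_context: str) -> str:
    messages = [{"role": "user", "content": pr_context}]  # 1. PR diff + file list
    for _ in range(MAX_ROUNDS):                           # 6. max 8 rounds
        reply = model(messages)
        if reply["type"] == "final_review":               # 5. model wrote the review
            return reply["content"]
        result = tools[reply["tool"]](**reply["input"])   # 2-3. dispatch the tool call
        messages.append({"role": "assistant", "content": str(reply)})
        messages.append({"role": "tool", "content": result})  # 4. result back to model
    return "Review incomplete: hit max tool rounds."
```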
read_file

**Description:** Read the complete source code of a file at the PR's head commit.

**Use case:** When the diff alone doesn't show enough context (e.g., the full class a method belongs to, imports, or a callee).

**Input:**
**Implementation** (from `pr_review_service.py:291-300`):

```python
async def _read_file(self, path: str) -> str:
    content = await github_client.get_file_content(
        self.owner, self.repo, path, self.head_sha
    )
    if not content:
        return f"File not found or empty: {path}"
    if len(content) > 8000:
        content = content[:8000] + "\n# ... (truncated at 8000 chars)"
    ext = path.rsplit(".", 1)[-1].lower() if "." in path else ""
    return f"### {path}\n```{ext}\n{content}\n```"
```
search_project_memory
**Description:** Search the project's accumulated knowledge for patterns, past decisions, and known risks.

**Use case:** When the diff touches an area you want to cross-check against historical context.

**Input:**

```json
{"query": "rate limiting strategy"}
```

**Output:**

- Uses Redis-backed rate limiter with sliding window (60 req/min per user)
- Rate limit errors return 429 with Retry-After header
**Implementation:**

```python
async def _search_project_memory(self, query: str) -> str:
    results = await memory_adapter.search_relevant(
        repo=self.repo_full_name, query=query, developer=None, top_k=8
    )
    if not results:
        return "No relevant project memories found."
    lines = [f"- {m.get('memory', m.get('content', ''))}" for m in results]
    return "\n".join(lines)
```
search_developer_memory
**Description:** Search what Nectr has learned about a specific developer: their recurring patterns, known strengths, and past issues.

**Use case:** When the PR author is known and you want to tailor feedback.

**Input:**
**Implementation:**

```python
async def _search_developer_memory(self, developer: str, query: str) -> str:
    results = await memory_adapter.search_relevant(
        repo=self.repo_full_name, query=query, developer=developer, top_k=5
    )
    if not results:
        return f"No memories found for @{developer}."
    lines = [f"- {m.get('memory', m.get('content', ''))}" for m in results]
    return f"@{developer} memory:\n" + "\n".join(lines)
```
get_file_history
**Description:** Get (1) which developers have the most commits touching these files, and (2) past PRs that modified the same files.

**Use case:** To spot patterns like "this file keeps getting bug-fixed".

**Input:**
**Implementation:**

```python
async def _get_file_history(self, paths: list[str]) -> str:
    experts, related = await asyncio.gather(
        graph_builder.get_file_experts(self.repo_full_name, paths[:10], top_k=5),
        graph_builder.get_related_prs(self.repo_full_name, paths[:10], top_k=5),
        return_exceptions=True,
    )
    lines: list[str] = []
    if isinstance(experts, list) and experts:
        lines.append("File experts (most commits on these files):")
        for e in experts:
            lines.append(f"  @{e['login']} — {e['touch_count']} PRs")
    if isinstance(related, list) and related:
        lines.append("Related past PRs:")
        for p in related:
            lines.append(
                f"  PR #{p['number']} [{p.get('verdict', '?')}] "
                f"by @{p.get('author', '?')}: {p.get('title', '')}"
            )
    return "\n".join(lines) if lines else "No history found for these files."
```
get_issue_details
**Description:** Fetch title, state, and description of specific GitHub issues (e.g., those mentioned in the PR body with "Fixes #N").

**Input:**

```json
{"numbers": [42, 38]}
```

**Output:**

```
Issue #42 [open]: Login redirect breaks on mobile
  Users report that after login, mobile browsers redirect to...
Issue #38 [closed]: Token expiry not handled
  When JWT token expires, API returns 500 instead of 401
```
**Implementation:**

```python
async def _get_issue_details(self, numbers: list[int]) -> str:
    results = await asyncio.gather(
        *[github_client.get_issue(self.owner, self.repo, n) for n in numbers[:5]],
        return_exceptions=True,
    )
    lines: list[str] = []
    for n, r in zip(numbers, results):
        if isinstance(r, Exception) or r is None:
            lines.append(f"Issue #{n}: could not fetch")
        else:
            body_preview = (r.get("body") or "")[:200].replace("\n", " ")
            lines.append(
                f"Issue #{n} [{r.get('state', '?')}]: {r.get('title', '')}\n  {body_preview}"
            )
    return "\n".join(lines) if lines else "No issues found."
```
search_open_issues
**Description:** Search open GitHub issues to find ones this PR might resolve even without an explicit "Fixes #N" mention.

**Use case:** When you want to find semantic matches (issues resolved by behavior, not explicit reference).

**Input:**

```json
{"keywords": "login redirect mobile"}
```

**Output:**

```
Issue #42: Login redirect breaks on mobile
Issue #39: Mobile Safari login flow broken
```
**Implementation:**

```python
async def _search_open_issues(self, keywords: str) -> str:
    kw_set = set(re.findall(r"\b\w{3,}\b", keywords.lower()))
    matches: list[str] = []
    for issue in self.candidate_issues:
        text = f"{issue.get('title') or ''} {issue.get('body') or ''}".lower()
        if len(kw_set & set(re.findall(r"\b\w{3,}\b", text))) >= 2:
            matches.append(f"Issue #{issue['number']}: {issue.get('title', '')}")
    if not matches:
        return "No matching open issues found."
    return "\n".join(matches[:8])
```
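The matching rule is simple keyword overlap: an open issue qualifies when it shares at least two distinct words of three or more characters (case-insensitive) with the query. A standalone illustration:

```python
import re

def keyword_overlap(keywords: str, text: str) -> int:
    """Count distinct shared words of 3+ characters (case-insensitive)."""
    words = lambda s: set(re.findall(r"\b\w{3,}\b", s.lower()))
    return len(words(keywords) & words(text))

# An issue is a candidate when the overlap is >= 2:
print(keyword_overlap("login redirect mobile", "Login redirect breaks on mobile"))  # 3
print(keyword_overlap("login redirect mobile", "Token expiry not handled"))         # 0
```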
get_linked_issues
**Description:** Fetch Linear or GitHub issues linked to this PR's feature area via MCP.

**Use case:** When you want to understand what user problem the PR is solving.

**Input:**

```json
{"query": "rate limiting", "source": "linear"}
```

**Output:**

```
Linked linear issues for 'rate limiting':
  #ENG-42 [in progress]: Implement rate limiting for public API
  #ENG-38 [done]: Add Redis cache for rate limit counters
```
**Implementation:**

```python
async def _get_linked_issues(self, query: str, source: str = "github") -> str:
    from app.mcp.client import mcp_client

    try:
        if source == "linear":
            issues = await mcp_client.get_linear_issues(team_id="", query=query)
        else:
            issues = await mcp_client.get_github_issues(
                repo=self.repo_full_name, query=query
            )
        if not issues:
            return f"No {source} issues found for query: {query!r}"
        lines = [f"Linked {source} issues for {query!r}:"]
        for issue in issues[:10]:
            number = issue.get("number") or issue.get("id", "?")
            title = issue.get("title", "(no title)")
            state = issue.get("state", "")
            state_tag = f" [{state}]" if state else ""
            body = (issue.get("description") or issue.get("body") or "")[:120].replace("\n", " ")
            desc = f" — {body}" if body else ""
            lines.append(f"  #{number}{state_tag}: {title}{desc}")
        return "\n".join(lines)
    except Exception as exc:
        logger.warning("_get_linked_issues failed: %s", exc)
        return f"Could not fetch {source} issues: {exc}"
```
get_related_errors
**Description:** Fetch recent Sentry production errors for files modified in this PR via MCP.

**Use case:** To check whether the PR might be fixing (or inadvertently introducing) a known error.

**Input:**

```json
{"files": ["app/auth/token_service.py"]}
```

**Output:**

```
Related Sentry errors for modified files:
  [42x] JWTDecodeError: Invalid token signature — culprit: app.auth.token_service.decode_token (last seen: 2024-03-10)
  [12x] KeyError: 'exp' in JWT payload — culprit: app.auth.token_service.validate_expiry (last seen: 2024-03-09)
```
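No implementation for this tool appears in the excerpts above. A minimal sketch of what it might look like, assuming a hypothetical `mcp_client.get_sentry_errors` method (stubbed here; the method name and error-record fields are not confirmed by the source):

```python
import asyncio

class _StubMCPClient:
    # Stand-in for the real MCP client. get_sentry_errors and the record
    # fields below are assumptions, not confirmed by the source.
    async def get_sentry_errors(self, files: list[str]) -> list[dict]:
        return [{"count": 42, "title": "JWTDecodeError: Invalid token signature",
                 "culprit": "app.auth.token_service.decode_token",
                 "last_seen": "2024-03-10"}]

mcp_client = _StubMCPClient()

async def get_related_errors(files: list[str]) -> str:
    errors = await mcp_client.get_sentry_errors(files[:10])
    if not errors:
        return "No related Sentry errors for modified files."
    lines = ["Related Sentry errors for modified files:"]
    for e in errors:
        lines.append(f"  [{e['count']}x] {e['title']} — culprit: {e['culprit']} "
                     f"(last seen: {e['last_seen']})")
    return "\n".join(lines)

print(asyncio.run(get_related_errors(["app/auth/token_service.py"])))
```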
Security Agent

```python
SECURITY_AGENT_PROMPT = """You are a specialized security code reviewer.
Focus EXCLUSIVELY on security issues:
- Injection vulnerabilities (SQL, command, path traversal, SSRF)
- Authentication and authorization flaws
- Secrets/credentials accidentally committed
- Insecure dependencies or imports
- Input validation gaps at trust boundaries
- Cryptographic weaknesses
- Sensitive data exposure (PII in logs, unencrypted storage)

For each issue found: severity (CRITICAL/HIGH/MEDIUM/LOW), file:line, what the risk is, concrete fix.
If no security issues: say "No security issues found" — do NOT invent issues.
Be terse. Output JSON-serializable structured findings."""
```
```json
[
  {
    "severity": "HIGH",
    "file": "app/auth/token_service.py",
    "line": 42,
    "issue": "JWT tokens validated without signature verification — attacker can forge tokens",
    "fix": "Use jwt.decode(..., verify_signature=True) and pass secret key"
  }
]
```
Performance Agent
System Prompt (from `ai_service.py:195-207`):
```python
PERFORMANCE_AGENT_PROMPT = """You are a specialized performance code reviewer.
Focus EXCLUSIVELY on performance issues:
- N+1 database queries (loop + individual queries)
- Missing indexes or inefficient query patterns
- Unbounded loops or O(n²)+ algorithms where O(n log n) is feasible
- Memory leaks (unclosed resources, unbounded caches, circular refs)
- Blocking I/O in async contexts
- Unnecessary serialization/deserialization in hot paths
- Large payload transfers that could be paginated/streamed

For each issue found: impact (HIGH/MEDIUM/LOW), file:line, what the bottleneck is, concrete fix.
If no performance issues: say "No performance issues found" — do NOT invent issues.
Be terse. Output JSON-serializable structured findings."""
```
```json
[
  {
    "impact": "HIGH",
    "file": "app/services/user_service.py",
    "line": 78,
    "issue": "N+1 query: loops over users and fetches posts individually — 100 users = 101 queries",
    "fix": "Use selectinload(User.posts) in initial query to eager-load"
  }
]
```
Style Agent
System Prompt (from `ai_service.py:209-221`):
```python
STYLE_AGENT_PROMPT = """You are a specialized code quality reviewer.
Focus EXCLUSIVELY on code quality, tests, and maintainability:
- Missing or inadequate test coverage for new logic
- Functions/methods that are too complex (>20 lines, deep nesting)
- Unclear variable/function naming that hinders readability
- Missing error handling for operations that can fail
- Dead code, unused imports, or leftover debug statements
- API contract breakages (changed signatures, removed fields)
- Missing or outdated docstrings on public interfaces

For each issue: severity (HIGH/MEDIUM/LOW), file:line, what the issue is, concrete fix.
If no style/quality issues: say "No style issues found" — do NOT invent issues.
Be terse. Output JSON-serializable structured findings."""
```
```json
[
  {
    "severity": "MEDIUM",
    "file": "app/services/payment_service.py",
    "line": 123,
    "issue": "No test coverage for stripe_webhook failure path",
    "fix": "Add test case mocking Stripe API error to ensure graceful degradation"
  }
]
```
Synthesis Agent
System Prompt (from `ai_service.py:867-898`):
```python
synthesis_prompt = f"""You are the final synthesizer for a parallel code review system.
Three specialized agents have analyzed PR #{pr.get('number')} — "{pr_title}" by @{author}.

SECURITY AGENT FINDINGS:
{security_findings}

PERFORMANCE AGENT FINDINGS:
{performance_findings}

STYLE/QUALITY AGENT FINDINGS:
{style_findings}

Based on ALL findings above, produce a final unified code review. Respond in this EXACT JSON format:
{{
  "verdict": "approved" | "changes_requested" | "comment",
  "summary": "2-3 sentence overall assessment",
  "security_issues": [],
  "performance_issues": [],
  "style_issues": [],
  "inline_comments": [
    {{"path": "file/path.py", "line": 42, "body": "specific comment"}}
  ],
  "memory_insights": "patterns worth remembering about this author/codebase"
}}

Rules:
- verdict = "changes_requested" if ANY critical/high security or performance issue
- verdict = "approved" if only low/medium style issues or no issues
- verdict = "comment" if borderline (medium issues only)
- Deduplicate if multiple agents flagged the same issue
- inline_comments: max 8, most impactful only
- Be constructive and specific"""
```
Output: Unified `PRReviewResult` matching standard mode's format.
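The verdict rules in the prompt can be mirrored in plain Python. A sketch, with the finding shape assumed from the agent example findings above:

```python
def decide_verdict(security: list[dict], performance: list[dict], style: list[dict]) -> str:
    # Sketch of the synthesis rules; the "severity"/"impact" field names come
    # from the agent example findings, everything else is an assumption.
    def sev(f: dict) -> str:
        return (f.get("severity") or f.get("impact") or "").upper()

    hard = security + performance
    if any(sev(f) in ("CRITICAL", "HIGH") for f in hard):
        return "changes_requested"   # any critical/high security or perf issue
    if any(sev(f) == "MEDIUM" for f in hard):
        return "comment"             # borderline: medium issues only
    return "approved"                # only low/medium style issues, or none

print(decide_verdict([{"severity": "HIGH"}], [], []))    # changes_requested
print(decide_verdict([], [{"impact": "MEDIUM"}], []))    # comment
print(decide_verdict([], [], [{"severity": "MEDIUM"}]))  # approved
```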
```markdown
## Summary
<2-3 sentences: what does this PR do and why?>

## Key Changes
<3-5 bullets: `filename` — one-line description>

## Issues
- 🔴 **Critical:** <will cause failure, data loss, or security vulnerability>
- 🟡 **Moderate:** <will cause problems under specific, concrete conditions>
- 🟢 **Minor:** <clearly actionable style or efficiency issue>

If no real issues exist: No issues found

**Confidence: X/5** — how confident you are this PR is safe to merge

## Important Files Changed
| File | Change |
|------|--------|
<one row per file>

## Review Verdict
**APPROVE**, **REQUEST_CHANGES**, or **NEEDS_DISCUSSION** — one-line reason.
```