Overview
When a PR is opened or updated, Nectr:
- Receives GitHub webhook event
- Fetches PR diff + files from GitHub API
- Gathers context (issues, related PRs, memory)
- Runs agentic AI review (Claude fetches additional context on-demand)
- Posts review comment + inline suggestions
- Indexes PR in Neo4j + extracts Mem0 memories
File: app/services/pr_review_service.py:464
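The pipeline above can be sketched as an async orchestrator. This is a hedged illustration only: every helper coroutine here is a stub standing in for the real GitHub/Neo4j/Mem0 integrations, and all names (`fetch_pr_data`, `gather_context`, etc.) are assumptions, not the actual service code.

```python
import asyncio

# Stubs standing in for the real integrations (names are illustrative).
async def fetch_pr_data(pr):
    return "diff --git a/x b/x", [{"filename": "x", "status": "modified"}]

async def gather_context(pr):
    return {"issues": [], "conflicts": [], "related_prs": []}

async def run_agentic_review(diff, files, ctx):
    return {"verdict": "APPROVE", "comments": []}

async def post_review(pr, review): ...
async def index_pr(pr): ...
async def extract_memories(pr, review): ...

async def process_pr(pr: dict) -> dict:
    # Steps 1-2: fetch PR data and gather context concurrently
    (diff, files), ctx = await asyncio.gather(fetch_pr_data(pr), gather_context(pr))
    # Step 3: agentic review
    review = await run_agentic_review(diff, files, ctx)
    # Step 5: post the review
    await post_review(pr, review)
    # Steps 6-7: index and extract memories
    await asyncio.gather(index_pr(pr), extract_memories(pr, review))
    return review
```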
Service Entry Point
PRReviewService.process_pr_review()
File: app/services/pr_review_service.py:473
Parameters:
- payload — GitHub webhook JSON
- event — Database event record (tracking)
- db — Async SQLAlchemy session
Step 1: Fetch PR Data
File: app/services/pr_review_service.py:496
- get_pr_diff: Returns unified diff string (truncated at 15 KB)
- get_pr_files: Returns list of {filename, additions, deletions, status, patch}
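The 15 KB truncation can be sketched as a simple byte-budget cut. This is an assumption about the mechanism: the real get_pr_diff may cut on hunk boundaries instead.

```python
MAX_DIFF_BYTES = 15 * 1024  # the 15 KB cap noted above

def truncate_diff(diff: str, limit: int = MAX_DIFF_BYTES) -> str:
    """Truncate a unified diff to stay within a byte budget (sketch)."""
    raw = diff.encode("utf-8")
    if len(raw) <= limit:
        return diff
    # decode with errors="ignore" so a multi-byte char split at the
    # boundary doesn't raise
    return raw[:limit].decode("utf-8", errors="ignore") + "\n... [diff truncated]"
```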
Step 2: Gather Context (Parallel)
File: app/services/pr_review_service.py:512
Context Components
Issue References
Function: _parse_issue_refs(pr_body, pr_title) (pr_review_service.py:34)
Parses Fixes #123 / Closes #456 references from the PR body/title.
Candidate Issues (Semantic Matching)
Function: _find_candidate_issues() (pr_review_service.py:71)
- Fetches 50 open issues from GitHub
- Scores by keyword overlap with PR title/body/files
- Returns top 8 candidates for AI to semantically verify
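The keyword-overlap scoring can be sketched as set intersection over crude tokens. This is an assumption about the scoring: the real _find_candidate_issues may weight title/body/files differently or filter stopwords.

```python
import re

def _tokens(text: str) -> set[str]:
    # crude keyword tokenizer; short tokens are dropped as noise
    return {t for t in re.findall(r"[a-z0-9_]+", text.lower()) if len(t) > 2}

def rank_candidate_issues(pr_text: str, issues: list[dict], top_n: int = 8) -> list[dict]:
    """Rank open issues by keyword overlap with the PR text (sketch)."""
    pr_tokens = _tokens(pr_text)
    scored = [
        (len(pr_tokens & _tokens(i["title"] + " " + (i.get("body") or ""))), i)
        for i in issues
    ]
    scored = [(score, issue) for score, issue in scored if score > 0]
    scored.sort(key=lambda pair: -pair[0])
    return [issue for _, issue in scored[:top_n]]
```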
Open PR Conflicts
Function: _get_open_pr_conflicts() (pr_review_service.py:130)
- Fetches up to 10 open PRs
- Checks for file path overlap
- Returns conflicting PRs sorted by overlap size
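The overlap check reduces to set intersection over file paths. A minimal sketch, assuming each open PR is represented as a {number, files} dict (the real record shape is an assumption):

```python
def find_conflicting_prs(pr_files: list[str], open_prs: list[dict]) -> list[dict]:
    """Flag open PRs touching the same files, sorted by overlap size (sketch)."""
    ours = set(pr_files)
    conflicts = []
    for other in open_prs:
        shared = ours & set(other["files"])
        if shared:
            conflicts.append({"number": other["number"], "overlap": sorted(shared)})
    # largest overlap first, as described above
    conflicts.sort(key=lambda c: -len(c["overlap"]))
    return conflicts
```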
Related Past PRs
Function: graph_builder.get_related_prs() (Neo4j query)
- Finds PRs that touched the same files
- Returns top 5 with verdict (APPROVE/REQUEST_CHANGES)
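A plausible shape for the underlying Cypher, inferred from the node and relationship names summarized in Step 6; the property names ($pr_number, verdict) are assumptions, not the service's actual query.

```python
# Sketch of a "PRs that touched the same files" query (Cypher in a Python
# constant, as graph_builder would likely hold it).
RELATED_PRS_QUERY = """
MATCH (:PullRequest {number: $pr_number})-[:TOUCHES]->(f:File)
      <-[:TOUCHES]-(other:PullRequest)
RETURN other.number AS number, other.verdict AS verdict, count(f) AS shared_files
ORDER BY shared_files DESC
LIMIT 5
"""
```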
Step 3: Agentic AI Review
File: app/services/ai_service.py:536
Tool-Based Context Fetching
Instead of sending all context upfront, Claude decides what it needs via tools.
Available Tools (ai_service.py:21):
Tool Executor
Class:ReviewToolExecutor (pr_review_service.py:237)
read_file("app/auth/login.py") → executor fetches from GitHub → returns file content.
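A dispatch sketch in the spirit of ReviewToolExecutor; here a dict-backed file store stands in for the GitHub contents API, and the method signature is an assumption:

```python
class ReviewToolExecutorSketch:
    """Routes tool calls from the model to concrete fetchers (sketch)."""

    def __init__(self, file_store: dict[str, str]):
        self._files = file_store  # stand-in for the GitHub contents API

    def execute(self, tool_name: str, tool_input: dict) -> str:
        if tool_name == "read_file":
            path = tool_input["path"]
            # tool errors are returned as text so the model can recover
            return self._files.get(path, f"ERROR: {path} not found")
        return f"ERROR: unknown tool {tool_name}"
```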
Agentic Loop
File: app/services/ai_service.py:646
- Claude receives diff + file list
- Claude calls tools (e.g., read_file, search_project_memory)
- Tool results appended to conversation
- Claude continues until it has enough context
- Claude outputs structured review
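The loop above can be sketched as follows. The client/executor interfaces and the reply dict shapes are assumptions standing in for the real Anthropic API plumbing; the round cap mirrors the Performance Notes below.

```python
MAX_ROUNDS = 8  # tool cap to prevent infinite loops

def agentic_review(client, executor, messages: list[dict]) -> dict:
    """Call the model, run any requested tool, feed the result back,
    and stop when a structured review arrives or the cap is hit (sketch)."""
    for _ in range(MAX_ROUNDS):
        reply = client.create(messages)
        if reply["type"] == "review":
            return reply["review"]          # structured final output
        # otherwise the model asked for a tool; execute and append the result
        result = executor.execute(reply["tool"], reply["input"])
        messages.append({"role": "tool", "content": result})
    return {"verdict": "COMMENT", "note": "round cap reached"}
```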
Output Parsing
File: app/services/ai_service.py:491
Step 4: Build Line Map
File: app/services/pr_review_service.py:194
AI outputs line_hint (exact line content), which must be resolved to an absolute line number (resolution logic at pr_review_service.py:180).
Step 5: Post Review
File: app/services/pr_review_service.py:716
If post_pr_review fails (e.g., no commit ID), the service falls back to post_pr_comment (a flat comment without inline suggestions).
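The fallback can be sketched as a try/except around the inline review path; the gh client object and its two method names mirror the text but are otherwise assumptions:

```python
async def post_review_with_fallback(gh, pr_number: int, review: dict) -> str:
    """Try an inline review first; fall back to a flat comment (sketch)."""
    try:
        await gh.post_pr_review(pr_number, review)
        return "review"
    except Exception:
        # e.g., no commit ID available for inline positioning
        await gh.post_pr_comment(pr_number, review.get("summary", ""))
        return "comment"
```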
Step 6: Index in Neo4j
File: app/services/pr_review_service.py:770
Creates Repository, PullRequest, Developer, File, and Issue nodes, linked by AUTHORED, TOUCHES, and RESOLVES relationships.
Step 7: Extract Memories
File: app/services/pr_review_service.py:781
- Project patterns (e.g., “Always validate user input in auth endpoints”)
- Developer patterns (e.g., “@alice tends to forget error handling in async functions”)
- Decisions (e.g., “Use Pydantic for all API request models”)
Error Handling
File: app/services/pr_review_service.py:797
- Sets WorkflowRun.status = "failed"
- Logs full traceback
- Returns error dict (webhook handler marks event as failed)
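The error path above amounts to a catch-all wrapper around the pipeline. A minimal sketch, assuming workflow_run is any object with a mutable status attribute (the real model field names are assumptions):

```python
import logging
import traceback

logger = logging.getLogger("pr_review")

async def run_review_safely(workflow_run, process, payload) -> dict:
    """Mark the run failed, log the traceback, and return an error dict
    for the webhook handler (sketch)."""
    try:
        return await process(payload)
    except Exception as exc:
        workflow_run.status = "failed"
        logger.error("PR review failed: %s\n%s", exc, traceback.format_exc())
        return {"status": "error", "error": str(exc)}
```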
Skip Rules
File: app/services/pr_review_service.py:20
Performance Notes
- Parallel context fetching: 4 I/O-bound tasks run concurrently (asyncio.gather)
- Lazy file reading: Claude only reads files it explicitly requests
- Diff truncation: Diffs >15 KB are truncated to stay within token limits
- Tool cap: Max 8 agentic rounds to prevent infinite loops
- Inline suggestions: Hard capped at 5 to avoid overwhelming PRs
Next Steps
- Parallel Agents — 3-agent concurrent review mode
- Neo4j Schema — Graph database constraints
- Webhook Setup — How webhooks are installed