```python
issue_details = await asyncio.gather(
    *[github_client.get_issue(owner, repo, n) for n in issue_refs]
)
```
Finds open issues that might be resolved without explicit mention:
```python
# app/services/pr_review_service.py:71-127
async def _find_candidate_issues(
    owner: str,
    repo: str,
    pr_title: str,
    pr_body: str,
    file_paths: list[str],
    already_referenced: set[int],
    max_candidates: int = 8,
) -> list[dict]:
    # 1. Fetch up to 50 open issues
    issues = await github_client.get_repo_issues(owner, repo, state="open", per_page=50)
    issues = [i for i in issues if i["number"] not in already_referenced]

    # 2. Build keyword set from PR title + body + file paths
    pr_text = f"{pr_title} {pr_body}".lower()
    file_keywords = set()
    for path in file_paths:
        parts = re.split(r"[/_.-]", path.lower())
        file_keywords.update(p for p in parts if len(p) > 2)
    pr_words = set(re.findall(r"\b\w{3,}\b", pr_text)) | file_keywords

    # 3. Score each issue by keyword overlap
    scored = []
    for issue in issues:
        issue_text = f"{issue.get('title', '')} {issue.get('body') or ''}".lower()
        issue_words = set(re.findall(r"\b\w{3,}\b", issue_text))
        overlap = len(pr_words & issue_words)
        if overlap >= 2:  # Require at least 2 shared words
            scored.append((overlap, issue))

    # 4. Return top N by overlap score
    scored.sort(key=lambda x: x[0], reverse=True)
    return [
        {
            "number": i["number"],
            "title": i.get("title"),
            "body": (i.get("body") or "")[:200],  # Guard against a null body
            "score": s,
        }
        for s, i in scored[:max_candidates]
    ]
```
Claude decides whether these candidates are actually resolved by the PR.
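To make the scoring concrete, here is the keyword-set construction from step 2 in isolation (a minimal sketch; `build_pr_words` is a hypothetical helper extracted from the function above, and the sample PR data is made up):

```python
import re

# Hypothetical helper mirroring step 2 of _find_candidate_issues:
# keywords come from the PR title, body, and path segments of changed files.
def build_pr_words(pr_title: str, pr_body: str, file_paths: list[str]) -> set[str]:
    pr_text = f"{pr_title} {pr_body}".lower()
    file_keywords = set()
    for path in file_paths:
        parts = re.split(r"[/_.-]", path.lower())
        file_keywords.update(p for p in parts if len(p) > 2)  # Drop short fragments like "py"
    return set(re.findall(r"\b\w{3,}\b", pr_text)) | file_keywords

words = build_pr_words(
    "Fix JWT refresh", "Handle decode errors", ["src/auth/jwt_refresh.py"]
)
print(sorted(words))
# ['auth', 'decode', 'errors', 'fix', 'handle', 'jwt', 'refresh', 'src']
```

Note that the file-path split also feeds directory names like `auth` into the keyword set, which is what lets a PR match issues it never mentions textually.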
Checks for concurrent PRs touching the same files:
```python
# app/services/pr_review_service.py:130-172
async def _get_open_pr_conflicts(
    owner: str,
    repo: str,
    current_pr_number: int,
    current_file_paths: list[str],
) -> list[dict]:
    # 1. Fetch up to 20 open PRs (excluding current)
    open_prs = await github_client.get_repo_pull_requests(owner, repo, state="open", per_page=20)
    open_prs = [p for p in open_prs if p["number"] != current_pr_number][:10]

    # 2. Fetch changed files for each PR (parallel)
    pr_files_results = await asyncio.gather(
        *[github_client.get_pr_files_list(owner, repo, p["number"]) for p in open_prs],
        return_exceptions=True,
    )

    # 3. Find overlapping files
    current_files_set = set(current_file_paths)
    conflicting = []
    for pr, pr_files in zip(open_prs, pr_files_results):
        if isinstance(pr_files, Exception):
            continue
        overlap = sorted(current_files_set & set(pr_files))
        if overlap:
            conflicting.append({
                "number": pr["number"],
                "title": pr.get("title", ""),
                "author": (pr.get("user") or {}).get("login", ""),
                "url": pr.get("html_url"),
                "overlap": overlap[:5],  # Cap at 5 files for display
            })
    return sorted(conflicting, key=lambda x: len(x["overlap"]), reverse=True)[:5]
```
Claude returns a `line_hint` (a code snippet) instead of absolute line numbers. Nectr resolves these hints to GitHub line numbers:
```python
# app/services/pr_review_service.py:194-234
def _build_line_map(files: list[dict]) -> dict[str, dict[str, int]]:
    """
    Parse the `patch` field of each file and build:
        {filename: {stripped_line_content: right_side_line_number}}

    Indexes every `+` line (additions) so AI-generated `line_hint`
    strings can be resolved to absolute line numbers.
    """
    line_map = {}
    for f in files:
        patch = f.get("patch", "")
        filename = f.get("filename", "")
        if not patch or not filename:
            continue
        mapping = {}
        current_right_line = 0
        for patch_line in patch.splitlines():
            if patch_line.startswith("@@"):
                # Parse hunk header: @@ -10,6 +10,7 @@
                m = re.search(r"\+(\d+)", patch_line)
                if m:
                    current_right_line = int(m.group(1)) - 1
            elif patch_line.startswith("+"):
                current_right_line += 1
                content = patch_line[1:]  # Strip leading '+'
                stripped = content.strip()
                if stripped:
                    # Store 3 variants for fuzzy matching:
                    mapping[content] = current_right_line   # Exact with indentation
                    mapping[stripped] = current_right_line  # Stripped
                    mapping[" ".join(stripped.split())] = current_right_line  # Whitespace-normalized
            elif not patch_line.startswith("-"):
                current_right_line += 1
        line_map[filename] = mapping
    return line_map


# Resolve line_hint -> absolute line number
def _resolve_line(hint: str, lines: dict[str, int]) -> int | None:
    return (
        lines.get(hint)
        or lines.get(hint.strip())
        or lines.get(" ".join(hint.split()))  # Whitespace-normalized
    )
```
Fuzzy matching handles minor whitespace differences between the AI’s hint and the actual patch content.
Nectr’s review comment follows a structured format:
```markdown
Hi I am Nectr - AI code review agent built by [Dhanush Chalicheemala](https://x.com/dhanush_chali)

## Verdict: **APPROVE**

[High-level analysis with key points]

### Key Changes:
- ✅ [Positive change 1]
- ✅ [Positive change 2]
- ⚠️ [Minor concern 1]

### Suggestions:
- [Suggestion 1]
- [Suggestion 2]

## Resolved Issues
- 🟢 Closes [#123: Issue title](https://github.com/org/repo/issues/123)

## 🔍 Potentially Resolves
_These open issues appear to be resolved by this PR's changes, even though they weren't explicitly mentioned:_
- 🟡 [#145: Issue title](https://github.com/org/repo/issues/145) — [Reason why Claude thinks it's resolved]

## ⚠️ Open PR Conflicts
These open PRs touch the same files — coordinate to avoid merge conflicts:
- [PR #87](https://github.com/org/repo/pull/87): **Refactor auth middleware** by @bob — shared files: `src/auth/middleware.py`, `src/auth/jwt.py`

## Related Past Work
- Similar to [PR #120](https://github.com/org/repo/pull/120) [APPROVE]: Add JWT refresh token logic

---
*If you have any concerns, connect with [Dhanush Chalicheemala](https://x.com/dhanush_chali)*
```
Nectr detects issues resolved without explicit `Fixes #N` mentions:
```python
# app/services/pr_review_service.py:71-127
candidate_issues = await _find_candidate_issues(
    owner, repo, pr_title, pr_body, file_paths,
    already_referenced=set(issue_refs),  # Exclude issues already mentioned
)
# Returns top 8 open issues with keyword overlap score

# Claude analyzes candidates during the agentic loop
# Example tool call:
get_issue_details([145, 167, 189])  # Fetch full details for top candidates

# Claude returns semantic matches in ReviewResult:
review_result.semantic_issue_matches = [
    {
        "number": 145,
        "reason": "This PR fixes the root cause by adding error handling to JWT decode",
        "confidence": "high",  # "high" | "medium"
    }
]
```
Displayed in review:
```markdown
## 🔍 Potentially Resolves
_These open issues appear to be resolved by this PR's changes, even though they weren't explicitly mentioned:_
- 🟢 [#145: Refresh token endpoint returns 500](https://github.com/org/repo/issues/145) — This PR fixes the root cause by adding error handling to JWT decode
```
Shows PRs that touched the same files (structural similarity):
```cypher
UNWIND $paths AS path
MATCH (pr:PullRequest {repo: $repo})-[:TOUCHES]->(f:File {repo: $repo, path: path})
WHERE pr.verdict IS NOT NULL
WITH pr, count(DISTINCT f) AS overlap
ORDER BY overlap DESC
LIMIT 5
RETURN pr.number, pr.title, pr.author, pr.verdict, overlap
```
Displayed in review:
```markdown
## Related Past Work
- Similar to [PR #120](https://github.com/org/repo/pull/120) [APPROVE]: Add JWT refresh token logic
- Similar to [PR #98](https://github.com/org/repo/pull/98) [REQUEST_CHANGES]: Implement token refresh endpoint
```
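The ranking that query performs can be sketched in plain Python (a hypothetical `rank_related_prs` helper with made-up sample data; in Nectr the real data lives in Neo4j and the Cypher query does this work):

```python
# Pure-Python sketch of the Cypher ranking: count distinct overlapping
# files per reviewed past PR, sort descending, keep the top 5.
def rank_related_prs(current_paths: list[str], past_prs: list[dict]) -> list[dict]:
    current = set(current_paths)
    scored = []
    for pr in past_prs:
        if pr.get("verdict") is None:  # WHERE pr.verdict IS NOT NULL
            continue
        overlap = len(current & set(pr["files"]))  # count(DISTINCT f)
        if overlap:
            scored.append({**pr, "overlap": overlap})
    scored.sort(key=lambda p: p["overlap"], reverse=True)  # ORDER BY overlap DESC
    return scored[:5]  # LIMIT 5

past_prs = [
    {"number": 120, "title": "Add JWT refresh token logic", "verdict": "APPROVE",
     "files": ["src/auth/jwt.py", "src/auth/middleware.py"]},
    {"number": 98, "title": "Implement token refresh endpoint", "verdict": "REQUEST_CHANGES",
     "files": ["src/auth/jwt.py", "src/api/routes.py"]},
    {"number": 73, "title": "WIP: rate limiting", "verdict": None,  # Not yet reviewed
     "files": ["src/auth/jwt.py"]},
]
ranked = rank_related_prs(["src/auth/jwt.py", "src/api/routes.py"], past_prs)
print([p["number"] for p in ranked])  # [98, 120]
```

PRs without a stored verdict are excluded, so only PRs Nectr has actually reviewed surface as related past work.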
**Cause:** Line number resolution failed (the AI's `line_hint` didn't match any line in the diff).

**Solution:** This is expected for:
Comments about unchanged lines (only + additions are indexed)
Multi-file architectural suggestions (no specific line)
These appear as top-level review comments without inline placement.
AI missed an obvious bug
**Cause:** Insufficient context (file not read during the agentic loop).

**Solution:** Add a custom memory rule:
```
POST /api/v1/memory
{
  "repo": "org/repo",
  "content": "Always check for null pointer exceptions in {specific_file}",
  "memory_type": "project_pattern"
}
```
Claude will search this memory on future reviews.
Semantic issue matches are incorrect
**Cause:** Keyword overlap produced false positives.

**Mitigation:** Only high-confidence matches are indexed in Neo4j. Medium-confidence matches are displayed but not auto-closed. Claude's judgment improves over time as the project memory grows.
Review says 'Sentry integration not configured'
This is expected if `SENTRY_MCP_URL` is not set. Sentry integration is optional. Set these env vars to enable: