Overview
Nectr automatically reviews every pull request using an AI-powered workflow that combines diff analysis, contextual intelligence from Neo4j and Mem0, and optional MCP integrations. When a PR is opened or updated, GitHub sends a webhook that triggers the review pipeline. The entire review runs asynchronously in the background, and the webhook handler returns HTTP 200 immediately to avoid webhook timeouts.
Architecture Diagram
Workflow Stages
When GitHub sends a webhook, Nectr validates the HMAC-SHA256 signature using the webhook secret to ensure authenticity. The event is deduplicated (duplicate deliveries within 1 hour are ignored) and stored in PostgreSQL with status `pending`.

```python
# app/api/v1/webhooks.py
@router.post("/github")
async def handle_github_webhook(
    request: Request,
    background_tasks: BackgroundTasks,
    db: AsyncSession,
):
    payload = await request.body()
    delivery_id = request.headers.get("X-GitHub-Delivery")

    # Verify HMAC-SHA256 signature
    signature = request.headers.get("X-Hub-Signature-256")
    if not verify_signature(payload, signature, webhook_secret):
        raise HTTPException(status_code=403, detail="Invalid signature")

    # Deduplicate within 1 hour
    existing = await check_duplicate_event(db, delivery_id, within_hours=1)
    if existing:
        return {"status": "duplicate", "event_id": existing.id}

    # Create Event record
    event = Event(delivery_id=delivery_id, status="pending")
    db.add(event)
    await db.commit()

    # Process in background
    background_tasks.add_task(process_pr_in_background, payload, event.id)
    return {"status": "accepted", "event_id": event.id}
```
Nectr fetches the PR diff and the list of changed files; in agentic mode, full file contents are fetched lazily as Claude requests them via tools.
```python
# app/services/pr_review_service.py:496-499
diff = await github_client.get_pr_diff(owner, repo, pr_number, token=github_token)
files = await github_client.get_pr_files(owner, repo, pr_number, token=github_token)
file_paths = [f.get("filename", "") for f in files if f.get("filename")]
author = (pr.get("user") or {}).get("login", "")
```
Nectr parses issue references such as `Fixes #N` and `Closes #N` from the PR body/title and fetches the referenced issue details from GitHub, alongside other context lookups run concurrently:

```python
# app/services/pr_review_service.py:512-523
issue_details, open_pr_conflicts, candidate_issues, related_prs = await asyncio.gather(
    _fetch_issue_details(owner, repo, issue_refs),
    _get_open_pr_conflicts(owner, repo, pr_number, file_paths),
    _find_candidate_issues(
        owner, repo,
        pr_title, pr_body,
        file_paths,
        already_referenced=set(issue_refs),
    ),
    graph_builder.get_related_prs(repo_full_name, file_paths[:10], top_k=5),
    return_exceptions=True,
)
```
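Because `return_exceptions=True` makes `asyncio.gather` return raised exceptions as values instead of propagating them, each result has to be checked before use. A minimal sketch of that unwrapping (the `unwrap` helper and coroutine names here are illustrative, not the project's actual code):

```python
import asyncio
from typing import Any


def unwrap(result: Any, default: Any) -> Any:
    """Replace an exception returned by gather(return_exceptions=True) with a default."""
    return default if isinstance(result, BaseException) else result


async def demo() -> list[Any]:
    async def ok() -> str:
        return "issue details"

    async def boom() -> str:
        raise RuntimeError("Neo4j unavailable")

    results = await asyncio.gather(ok(), boom(), return_exceptions=True)
    # A failed context source degrades to an empty default instead of failing the review
    return [unwrap(r, []) for r in results]
```

This is what lets one unavailable context source (say, Neo4j) degrade gracefully while the other lookups still contribute to the review.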
Claude runs a single agentic loop with 8 MCP-style tools. It fetches context on-demand instead of receiving everything upfront.
- `read_file`: Read full source code at the HEAD commit
- `search_project_memory`: Query Mem0 for project patterns
- `search_developer_memory`: Query Mem0 for developer patterns
- `get_file_history`: Get file experts + related PRs from Neo4j
- `get_issue_details`: Fetch GitHub issue details
- `search_open_issues`: Keyword search in open issues
- `get_linked_issues`: Query Linear/GitHub via MCP
- `get_related_errors`: Query Sentry via MCP

```python
# app/services/pr_review_service.py:566-569
review_result = await ai_service.analyze_pull_request_agentic(
    pr, diff, files, tool_executor, issue_refs=issue_refs
)
```
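Each entry in `REVIEW_TOOLS` is a JSON-schema tool definition in the shape the Anthropic Messages API expects. A sketch of what the `read_file` entry might look like (the description and parameter names are assumptions, not the actual definition):

```python
READ_FILE_TOOL = {
    "name": "read_file",
    "description": "Read the full contents of a file at the PR's HEAD commit.",
    "input_schema": {
        "type": "object",
        "properties": {
            "path": {
                "type": "string",
                "description": "Repository-relative file path, e.g. app/main.py",
            },
        },
        "required": ["path"],
    },
}
```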
```python
for round_num in range(max_rounds):
    response = await self.client.messages.create(
        model=self.model,
        max_tokens=4000,
        tools=REVIEW_TOOLS,
        messages=messages,
    )
    if response.stop_reason != "tool_use":
        break  # no more tool calls; the review text is final

    # Record the assistant turn, then execute each requested tool
    messages.append({"role": "assistant", "content": response.content})
    tool_results = []
    for block in response.content:
        if block.type == "tool_use":
            result = await tool_executor.execute(block.name, block.input)
            tool_results.append({
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": result,
            })
    messages.append({"role": "user", "content": tool_results})
```
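`tool_executor.execute` can be thought of as a name-to-coroutine dispatch table. A minimal sketch of that pattern (class shape and handler names are illustrative, not the actual implementation):

```python
import asyncio
from typing import Any, Awaitable, Callable


class ToolExecutor:
    """Dispatch a tool name requested by Claude to the matching async handler."""

    def __init__(self) -> None:
        self._handlers: dict[str, Callable[..., Awaitable[str]]] = {}

    def register(self, name: str, handler: Callable[..., Awaitable[str]]) -> None:
        self._handlers[name] = handler

    async def execute(self, name: str, tool_input: dict[str, Any]) -> str:
        handler = self._handlers.get(name)
        if handler is None:
            # Unknown tools return an error string rather than crashing the loop
            return f"error: unknown tool {name!r}"
        return await handler(**tool_input)


async def demo() -> str:
    executor = ToolExecutor()

    async def read_file(path: str) -> str:
        return f"contents of {path}"

    executor.register("read_file", read_file)
    return await executor.execute("read_file", {"path": "app/main.py"})
```

Returning an error string for unknown tools matters in an agentic loop: the model sees the failure as a `tool_result` and can recover, instead of the whole review aborting.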
When `PARALLEL_REVIEW_AGENTS` is enabled, the orchestrator dispatches to specialized parallel agents instead:

```python
# app/services/pr_review_service.py:560-564
use_parallel = getattr(settings, 'PARALLEL_REVIEW_AGENTS', False)
if use_parallel:
    review_result = await ai_service.analyze_pull_request_parallel(
        pr, diff, files, tool_executor, issue_refs=issue_refs
    )
```
Security, performance, and style agents run concurrently, and their findings are merged by a synthesis pass:

```python
# app/services/ai_service.py:770-773
security_out, performance_out, style_out = await asyncio.gather(
    security_task, performance_task, style_task,
    return_exceptions=True
)

# app/services/ai_service.py:787-795
return await self._synthesize_review(
    pr=pr,
    diff=diff,
    files=files,
    security_findings=security_findings,
    performance_findings=performance_findings,
    style_findings=style_findings,
    issue_refs=issue_refs,
)
```
Nectr posts the review to GitHub using the GitHub REST API with your personal access token. It attempts to post as a GitHub review (with inline suggestions), falling back to a flat issue comment if HEAD SHA is unavailable.
```python
# app/services/pr_review_service.py:716-725
try:
    await github_client.post_pr_review(
        owner, repo, pr_number,
        commit_id=head_sha,
        body=comment_body,
        event=github_event,  # "APPROVE" | "REQUEST_CHANGES" | "COMMENT"
        comments=inline_comments,
        token=github_token,
    )
except Exception as review_err:
    # Fall back to a flat comment if posting the review fails
    await github_client.post_pr_comment(
        owner, repo, pr_number, comment_body, token=github_token
    )
```
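Inline comments can carry GitHub's suggestion blocks, which render as one-click applicable patches in the PR view. A hedged sketch of building one such comment; the dict keys (`path`, `line`, `side`, `body`) match GitHub's pull-request review comments API, but the helper itself is hypothetical:

```python
def make_suggestion_comment(path: str, line: int, message: str, replacement: str) -> dict:
    """Build one inline review comment containing a GitHub suggestion block."""
    body = f"{message}\n```suggestion\n{replacement}\n```"
    return {
        "path": path,    # file the comment attaches to
        "line": line,    # line number in the diff's new version
        "side": "RIGHT", # comment on the added/changed side of the diff
        "body": body,
    }
```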
Nectr then indexes the PR into the Neo4j graph, linking it to its changed files, verdict, and any referenced issues (e.g. `Fixes #N`):

```python
# app/services/pr_review_service.py:771-779
await graph_builder.index_pr(
    repo_full_name=repo_full_name,
    pr_number=pr_number,
    title=pr_title,
    author=author,
    files_changed=file_paths,
    verdict=review_result.verdict,
    issue_numbers=all_issue_numbers,
)
```
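Under the hood, `index_pr` presumably issues idempotent Cypher `MERGE` statements so that re-reviewing the same PR does not create duplicate nodes. A sketch of the kind of query involved; the node labels, relationship type, and properties here are assumptions, not the actual graph schema:

```python
# Hypothetical Cypher for indexing a PR and its touched files (parameterized)
INDEX_PR_QUERY = """
MERGE (pr:PullRequest {repo: $repo, number: $number})
SET pr.title = $title, pr.author = $author, pr.verdict = $verdict
WITH pr
UNWIND $files AS path
MERGE (f:File {repo: $repo, path: path})
MERGE (pr)-[:TOUCHES]->(f)
"""
```

With files modeled as shared nodes, queries like "which PRs touched this file" (used by `get_file_history` and `get_related_prs`) become single-hop traversals.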
Claude extracts structured memories from the completed review and stores them in Mem0 for future context.
```python
# app/services/pr_review_service.py:782-789
await extract_and_store(
    repo_full_name=repo_full_name,
    pr_number=pr_number,
    author=author,
    title=pr_title,
    files=files,
    review_summary=summary,
)
```
- `project_pattern`: Architectural patterns confirmed
- `decision`: Approaches approved/rejected (includes the PR number)
- `developer_pattern`: Recurring issues for this developer
- `developer_strength`: Things this developer does well
- `risk_module`: Fragile or security-critical files
- `contributor_profile`: Aggregated profile (topics, files, feedback, strengths)

The workflow updates the Event and WorkflowRun status to `completed` or `failed` with full result metadata.

```python
# app/services/pr_review_service.py:745-756
workflow.status = "completed"
workflow.result = json.dumps({
    "ai_summary": summary,
    "files_analyzed": len(files),
    "comment_posted": True,
    "verdict": review_result.verdict,
    "inline_suggestions": len(inline_comments),
    "linked_issues": [i["number"] for i in linked_issues],
    "related_prs": len(related_prs),
    "open_pr_conflicts": len(open_pr_conflicts),
    "semantic_issue_matches": [m["number"] for m in review_result.semantic_issue_matches],
})
```
Configuration
Standard vs Parallel Mode
Set the `PARALLEL_REVIEW_AGENTS` environment variable to enable parallel specialized agents.

Use parallel mode for:
- Larger PRs (>500 lines) where specialized analysis helps
- Security-critical repos that need thorough security review
- High-traffic repos where you want faster turnaround

Use standard mode for:
- Small to medium PRs (<500 lines)
- Repos where context depth matters more than speed
- When you want Claude to follow its own reasoning thread
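Boolean environment flags like this are typically parsed leniently. A minimal sketch of such a parser (the `env_flag` helper is illustrative; the project may read the flag through a settings object instead):

```python
import os


def env_flag(name: str, default: bool = False) -> bool:
    """Interpret common truthy strings for a boolean environment variable."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in {"1", "true", "yes", "on"}


# Flag read once at startup; defaults to standard (single-agent) mode
PARALLEL_REVIEW_AGENTS = env_flag("PARALLEL_REVIEW_AGENTS")
```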
MCP Integration (Optional)
Set the corresponding environment variables to enable optional MCP context sources (Linear, Sentry, Slack).

Performance
Average Review Time
- Standard mode: 15-30 seconds
- Parallel mode: 10-20 seconds
Context Sources
- GitHub: PR diff, files, issues
- Neo4j: File experts, related PRs
- Mem0: Project patterns, developer habits
- MCP: Linear, Sentry, Slack (optional)
Error Handling
Nectr handles failures gracefully at every stage:

- GitHub API errors: Logged, workflow marked as failed, no review posted
- Neo4j unavailable: Silently skips graph queries, continues with Mem0 context
- Mem0 unavailable: Silently skips memory queries, continues with Neo4j context
- MCP integration errors: Logged, continues without that integration’s data
- Review posting failure: Attempts GitHub review first, falls back to flat comment
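The degrade-and-continue behavior above can be captured with a small wrapper that logs and returns a fallback instead of propagating. This is a sketch of the pattern (the `best_effort` helper is hypothetical; the actual handling may be inlined per stage):

```python
import asyncio
import logging
from typing import Any, Awaitable

logger = logging.getLogger("nectr")


async def best_effort(coro: Awaitable[Any], fallback: Any, source: str) -> Any:
    """Await an optional context source; on failure, log and return a fallback."""
    try:
        return await coro
    except Exception as exc:
        logger.warning("%s unavailable, continuing without it: %s", source, exc)
        return fallback


async def demo() -> Any:
    async def neo4j_query() -> list[str]:
        raise ConnectionError("graph down")

    # A Neo4j outage degrades to an empty result rather than failing the review
    return await best_effort(neo4j_query(), [], "Neo4j")
```

Only the mandatory stages (fetching the diff, posting some form of comment) are allowed to fail the workflow; every enrichment source is optional.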
Related Files
- `app/services/pr_review_service.py` — Main orchestrator
- `app/services/ai_service.py` — Claude integration + parallel agents
- `app/integrations/github/client.py` — GitHub REST API client
- `app/api/v1/webhooks.py` — Webhook receiver