Overview
Nectr uses Anthropic’s Claude Sonnet 4.6, one of the most advanced AI models for code understanding, to analyze every pull request. The AI doesn’t just lint code: it understands context, architectural patterns, and team dynamics to provide intelligent, actionable feedback.

Claude Sonnet 4.6 features a 200K-token context window, allowing Nectr to analyze entire PR diffs, related files, historical reviews, and production context in a single request.
Review Modes
Nectr supports two AI analysis modes:

Standard Mode
Single agentic loop with 8 MCP-style tools for code search, issue fetching, and error lookup. Best for most use cases.

Parallel Mode
Three specialized agents run concurrently (security, performance, style), with a synthesis agent combining insights. Enable with PARALLEL_REVIEW_AGENTS=true.

Standard Mode (Default)
A single Claude instance orchestrates the review with access to all eight MCP-style tools; the orchestration lives in app/services/ai_service.py.
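The control flow can be sketched as a plain tool-dispatch loop. Everything below is illustrative: the tool registry stands in for the real MCP-style tools, and the model callable stands in for the actual Anthropic API call in ai_service.py.

```python
from typing import Any, Callable

# Hypothetical tool registry -- the names mirror tools described in this
# guide, but the signatures are illustrative, not Nectr's implementation.
TOOLS: dict[str, Callable[..., str]] = {
    "search_code": lambda query: f"matches for {query!r}",
    "get_file_experts": lambda path: f"experts for {path}",
    "fetch_sentry_errors": lambda path: f"errors in {path}",
}

def agentic_review_loop(model: Callable[[list[dict]], dict],
                        pr_diff: str, max_turns: int = 8) -> str:
    """Drive a single-agent review: call the model, execute any tool it
    requests, feed the result back, and stop when it returns a verdict."""
    messages: list[dict[str, Any]] = [{"role": "user", "content": pr_diff}]
    for _ in range(max_turns):
        reply = model(messages)
        if reply.get("tool") is None:          # no tool call -> final verdict
            return reply["verdict"]
        name, args = reply["tool"], reply.get("args", {})
        result = TOOLS[name](**args)           # execute the requested tool
        messages.append({"role": "tool", "name": name, "content": result})
    return "COMMENT"                           # fall back if turns exhausted
```

In the real service the model callable would wrap the Anthropic Messages API; here it is injected so the loop shape stays visible.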
Parallel Mode
Enable it via the PARALLEL_REVIEW_AGENTS flag in your .env file.
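Reconstructed from the flag named earlier in this guide, the .env entry is:

```env
# Run the security, performance, and style agents concurrently
PARALLEL_REVIEW_AGENTS=true
```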
Security Agent
Focuses on:
- SQL injection vulnerabilities
- XSS and CSRF risks
- Authentication/authorization flaws
- Secret exposure
- Dependency vulnerabilities
Performance Agent
Focuses on:
- Database query optimization (N+1 queries, missing indexes)
- Memory leaks and resource exhaustion
- Inefficient algorithms (O(n²) loops)
- Caching opportunities
- Async/await misuse
Style Agent
Focuses on:
- Code readability and naming conventions
- Documentation completeness
- Type hint coverage (Python) or TypeScript strict mode
- Error handling patterns
- Test coverage gaps
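A minimal sketch of the fan-out/fan-in shape, with stub coroutines standing in for the real specialist prompts and synthesis logic:

```python
import asyncio

# Illustrative only: the real agent prompts and synthesis step live in
# Nectr's codebase; these stubs just show the concurrent structure.
async def run_agent(name: str, diff: str) -> dict:
    """Each specialist reviews the same diff from one angle."""
    await asyncio.sleep(0)                     # stands in for a Claude call
    return {"agent": name, "findings": [f"{name} findings"]}

async def parallel_review(diff: str) -> dict:
    security, performance, style = await asyncio.gather(
        run_agent("security", diff),
        run_agent("performance", diff),
        run_agent("style", diff),
    )
    # Synthesis: merge the three specialist reports into one review.
    merged = security["findings"] + performance["findings"] + style["findings"]
    return {"verdict": "COMMENT" if merged else "APPROVE", "findings": merged}
```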
Analysis Flow
Here’s how Claude analyzes a PR:

Context Building
Before AI analysis, Nectr gathers:
- PR Diff: Full code changes with line numbers
- File Content: Complete file context for changed files
- Mem0 Memories: Project patterns, developer habits, past decisions
- Neo4j Graph: File experts, related PRs, ownership data
- Linear Issues: Linked tasks and feature descriptions
- Sentry Errors: Production errors in modified files
- Slack Messages: Relevant team discussions
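One way to picture the gathered context is as a single container that flattens every source into the model prompt. The class and field names below are hypothetical, not Nectr's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewContext:
    """Hypothetical bundle of the context sources listed above."""
    pr_diff: str
    file_contents: dict[str, str] = field(default_factory=dict)
    memories: list[str] = field(default_factory=list)        # Mem0
    graph_facts: list[str] = field(default_factory=list)     # Neo4j
    linear_issues: list[str] = field(default_factory=list)   # Linear
    sentry_errors: list[str] = field(default_factory=list)   # Sentry
    slack_messages: list[str] = field(default_factory=list)  # Slack

    def to_prompt(self) -> str:
        """Flatten every non-empty source into one prompt block."""
        sections = [("Diff", self.pr_diff)]
        for label, items in [("Memories", self.memories),
                             ("Graph", self.graph_facts),
                             ("Issues", self.linear_issues),
                             ("Errors", self.sentry_errors),
                             ("Slack", self.slack_messages)]:
            if items:
                sections.append((label, "\n".join(items)))
        return "\n\n".join(f"## {label}\n{body}" for label, body in sections)
```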
Review Verdicts
Claude returns one of three verdicts:

APPROVE
When: No significant issues found. Code quality is high, tests pass, and changes align with project patterns.

REQUEST_CHANGES
When: Critical issues require fixes before merging (security vulnerabilities, breaking changes, major bugs).

COMMENT
When: Minor issues or suggestions that don’t block merging (style improvements, optimization opportunities, documentation gaps).
Inline Suggestions
Claude provides actionable inline comments with:
- File and line number: Pinpoints exact location
- Severity: Critical, High, Medium, Low
- Category: Security, Performance, Bug, Style
- Suggested fix: Code snippet showing correction
- Reasoning: Why the change matters
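The five fields above map naturally onto a small record type. A hypothetical sketch, including a renderer for GitHub-style suggestion blocks (none of these names come from Nectr's code):

```python
from dataclasses import dataclass

SEVERITIES = ("Critical", "High", "Medium", "Low")
CATEGORIES = ("Security", "Performance", "Bug", "Style")

@dataclass
class InlineSuggestion:
    """Illustrative shape for one inline review comment."""
    path: str              # file the comment attaches to
    line: int              # exact line number
    severity: str          # one of SEVERITIES
    category: str          # one of CATEGORIES
    suggested_fix: str     # code snippet showing the correction
    reasoning: str         # why the change matters

    def to_markdown(self) -> str:
        """Render as a GitHub comment with a committable suggestion block."""
        return (f"**[{self.severity}/{self.category}]** {self.reasoning}\n"
                f"```suggestion\n{self.suggested_fix}\n```")
```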
Tool Usage Examples
search_code
Claude uses semantic search to find similar patterns in the codebase.

get_file_experts
Identifies who to tag for domain-specific reviews.

fetch_sentry_errors
Grounds analysis in production reality.

Customizing Analysis
Control AI behavior via environment variables set in your .env file.
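A hedged example of the relevant .env entries. The variable names appear elsewhere in this guide, but the values and inline notes are illustrative; the actual defaults live in app/core/config.py:

```env
# Illustrative values -- see app/core/config.py for the real defaults
PARALLEL_REVIEW_AGENTS=false   # three-agent mode for deeper analysis
REQUIRE_TESTS=true             # flag code changes that lack tests
REQUIRE_TYPE_HINTS=true        # flag missing Python type hints
MAX_CONTEXT_FILES=10           # cap on related files added to context
MAX_RELATED_PRS=5              # cap on historical PRs added to context
```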
See the Environment Variables guide for the complete list of configuration options from app/core/config.py.

Best Practices
Seed Mem0 with Project Patterns
Before the first review, add 5-10 core patterns to Mem0. This helps Claude align reviews with your standards from day one.
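A seeding script might look like the sketch below. The pattern texts are examples, and memory_client is any object exposing an add(text, user_id=...) method, such as a Mem0 client (not imported here so the sketch stays standalone):

```python
# Example patterns -- replace these with your team's actual conventions.
CORE_PATTERNS = [
    "All database access goes through the repository layer, never raw SQL in views.",
    "API endpoints must validate input with Pydantic models.",
    "Prefer async/await for I/O-bound service calls.",
    "Every public function needs type hints and a docstring.",
    "Feature flags gate all user-facing behavior changes.",
]

def seed_patterns(memory_client, project_id: str) -> int:
    """Store each core pattern so reviews can cite it from day one."""
    for pattern in CORE_PATTERNS:
        memory_client.add(pattern, user_id=project_id)
    return len(CORE_PATTERNS)
```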
Use Parallel Mode for Critical PRs
Enable PARALLEL_REVIEW_AGENTS=true for:
- Security-sensitive changes (auth, payment, data access)
- Performance-critical paths (API endpoints, database queries)
- Large refactors (>500 lines changed)
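These criteria can be encoded as a simple gate. A hypothetical helper: the 500-line threshold comes from the list above, but the path prefixes and function name are illustrative:

```python
# Illustrative path prefixes -- adjust to your repository layout.
SENSITIVE_PREFIXES = ("auth/", "payments/", "billing/")
CRITICAL_PREFIXES = ("api/", "db/")

def should_use_parallel(changed_paths: list[str], lines_changed: int) -> bool:
    """Apply the rules above: large refactors, sensitive areas, critical paths."""
    if lines_changed > 500:                    # large refactor
        return True
    return any(p.startswith(SENSITIVE_PREFIXES + CRITICAL_PREFIXES)
               for p in changed_paths)
```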
Review AI Feedback
Not all AI suggestions are correct. Encourage developers to:
- Push back on incorrect feedback
- Add clarifying memories when AI misunderstands patterns
- Report false positives to improve future reviews
Troubleshooting
Reviews Are Too Strict
Symptom: Every PR gets REQUEST_CHANGES, even for minor changes.

Fix:
- Review Mem0 patterns—remove overly strict rules
- Add memories clarifying acceptable exceptions
- Relax REQUIRE_TESTS or REQUIRE_TYPE_HINTS if they are too aggressive
Reviews Are Too Lenient
Symptom: AI approves PRs with obvious bugs.

Fix:
- Enable parallel mode for deeper analysis
- Add negative examples to Mem0 (“This pattern caused bugs in PR #123”)
- Ensure Sentry integration is configured to surface production errors
High API Costs
Symptom: Anthropic bill is unexpectedly high.

Fix:
- Disable parallel mode (PARALLEL_REVIEW_AGENTS=false)
- Reduce MAX_CONTEXT_FILES and MAX_RELATED_PRS
- Limit reviews to specific repos or file patterns
- Cache Claude responses for repeated PRs (custom implementation)
Next Steps
Parallel Agents Guide
Learn how to configure and optimize parallel agent reviews
Semantic Memory
Understand how Mem0 improves review quality over time