Why Performance Matters
Roslyn analyzers run:

- In real time while developers type in Visual Studio, VS Code, and Rider
- During compilation on every build
- In CI/CD pipelines where performance impacts build times

A slow analyzer can:

- Cause IDE lag and typing delays
- Slow down build times
- Increase memory consumption
- Frustrate developers and reduce adoption
If you modify logic in src/CommentSense.Analyzers/Logic, you must run the performance benchmark suite to ensure no regressions.

Running Benchmarks
CommentSense includes a comprehensive benchmark suite using BenchmarkDotNet.

Basic Usage

Run all benchmarks with `dotnet run -c Release` from the benchmarks project (the exact path depends on the repository layout).

Running Specific Benchmarks

To run specific benchmark classes, pass BenchmarkDotNet's `--filter` option, e.g. `--filter '*AnalyzerBenchmarks*'`.

Always run benchmarks in Release mode (`-c Release`). Debug builds will produce misleading results.

Benchmark Suite
The benchmarks/ directory contains several benchmark classes:
AnalyzerBenchmarks
Core analyzer performance tests that measure the analysis of various code patterns:

- Documentation validation
- Parameter analysis
- Exception detection
- Quality checks
DogfoodBenchmarks
Real-world performance tests that analyze the CommentSense codebase itself:

- Validates performance on actual production code
- Ensures the analyzer can efficiently analyze complex projects
- Provides realistic performance metrics
ParallelBenchmarks
Tests analyzer behavior under parallel compilation:

- Multiple files analyzed simultaneously
- Thread safety validation
- Scalability testing
PathologicalBenchmarks
Stress tests with extreme edge cases:

- Very large files
- Deeply nested structures
- Extensive documentation
- Complex generic signatures
LeakBenchmarks
Memory leak detection:

- Validates that analyzers don’t retain unnecessary references
- Ensures proper cleanup of resources
- Monitors memory allocation patterns
Benchmark Structure
All benchmarks inherit from BenchmarkBase:
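A plausible shape for that base class (a sketch only; it assumes the BenchmarkDotNet package, and the real BenchmarkBase in benchmarks/ may differ):

```csharp
// Sketch - assumes the BenchmarkDotNet NuGet package.
using BenchmarkDotNet.Attributes;

[MemoryDiagnoser] // inherited by derived benchmark classes
public abstract class BenchmarkBase
{
    // Shared one-time setup (e.g. building a test Compilation)
    // would go here so individual benchmarks measure only analysis.
    [GlobalSetup]
    public virtual void Setup() { }
}
```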
The [MemoryDiagnoser] attribute ensures that memory allocations are tracked.
Performance Best Practices
Avoid Unnecessary Allocations
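A minimal illustration in plain C# (the method names are invented for illustration, not taken from the CommentSense codebase): avoid materializing intermediate collections when a simple count or check suffices.

```csharp
using System.Collections.Generic;

public static class AllocationExamples
{
    // Wasteful: builds a temporary List<string> just to read its Count.
    public static int CountUndocumentedAllocating(IReadOnlyList<string> docComments)
    {
        var missing = new List<string>();
        foreach (var doc in docComments)
            if (string.IsNullOrWhiteSpace(doc))
                missing.Add(doc);
        return missing.Count;
    }

    // Allocation-free: count directly; the GC never sees a temporary list.
    public static int CountUndocumented(IReadOnlyList<string> docComments)
    {
        var count = 0;
        foreach (var doc in docComments)
            if (string.IsNullOrWhiteSpace(doc))
                count++;
        return count;
    }
}
```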
Minimize LINQ Usage in Hot Paths
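As a sketch of the trade-off (illustrative names, not CommentSense APIs): a LINQ query is fine in cold code, but in a per-node callback the enumerator and closure allocations add up, and a plain loop produces the same answer without them.

```csharp
using System.Linq;

public static class HotPathExamples
{
    // LINQ: Any(predicate) allocates an enumerator on each call
    // (plus a closure when the lambda captures locals).
    public static bool AnyMissingDocsLinq(string[] docComments) =>
        docComments.Any(doc => string.IsNullOrEmpty(doc));

    // Hand-rolled loop: identical result, no allocations.
    public static bool AnyMissingDocs(string[] docComments)
    {
        foreach (var doc in docComments)
            if (string.IsNullOrEmpty(doc))
                return true;
        return false;
    }
}
```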
Use ImmutableArray Efficiently
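One common pattern this heading likely refers to (a generic sketch, not code from the repository): appending to an ImmutableArray copies the whole array each time, whereas a builder accumulates cheaply and freezes once.

```csharp
using System.Collections.Immutable;

public static class ImmutableArrayExamples
{
    // Inefficient: each Add copies the entire array - O(n) per append.
    public static ImmutableArray<int> BuildByAdd(int n)
    {
        var result = ImmutableArray<int>.Empty;
        for (var i = 0; i < n; i++)
            result = result.Add(i);
        return result;
    }

    // Efficient: accumulate in a builder, then freeze without an extra copy.
    public static ImmutableArray<int> BuildWithBuilder(int n)
    {
        var builder = ImmutableArray.CreateBuilder<int>(initialCapacity: n);
        for (var i = 0; i < n; i++)
            builder.Add(i);
        return builder.MoveToImmutable(); // no copy when Count == Capacity
    }
}
```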
Register Syntax Node Actions Carefully
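A sketch of what careful registration looks like (it assumes the Microsoft.CodeAnalysis.CSharp packages; the analyzer and method names are illustrative, not CommentSense's):

```csharp
using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.Diagnostics;

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public sealed class ExampleAnalyzer : DiagnosticAnalyzer
{
    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics =>
        ImmutableArray<DiagnosticDescriptor>.Empty; // placeholder for the sketch

    public override void Initialize(AnalysisContext context)
    {
        // Skip generated code and allow concurrent execution - both reduce work.
        context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
        context.EnableConcurrentExecution();

        // Register only the node kinds you actually inspect,
        // not a callback that fires for every syntax node.
        context.RegisterSyntaxNodeAction(
            AnalyzeDeclaration,
            SyntaxKind.MethodDeclaration,
            SyntaxKind.PropertyDeclaration);
    }

    private static void AnalyzeDeclaration(SyntaxNodeAnalysisContext context)
    {
        // Analysis logic would go here.
    }
}
```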
Cache Symbol Information
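One way to cache symbol lookups (again a sketch assuming the Microsoft.CodeAnalysis packages, with illustrative names): resolve well-known types once per compilation rather than once per node.

```csharp
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Diagnostics;

public static class SymbolCachingSketch
{
    public static void Register(AnalysisContext context)
    {
        context.RegisterCompilationStartAction(compilationContext =>
        {
            // Resolved once per compilation and captured by the callbacks below.
            var obsoleteAttribute = compilationContext.Compilation
                .GetTypeByMetadataName("System.ObsoleteAttribute");
            if (obsoleteAttribute is null)
                return;

            compilationContext.RegisterSymbolAction(symbolContext =>
            {
                // Compare against the cached symbol instead of calling
                // GetTypeByMetadataName in the hot path.
                foreach (var attr in symbolContext.Symbol.GetAttributes())
                {
                    if (SymbolEqualityComparer.Default.Equals(
                            attr.AttributeClass, obsoleteAttribute))
                    {
                        // React to [Obsolete] members here.
                    }
                }
            }, SymbolKind.Method);
        });
    }
}
```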
Interpreting Benchmark Results
BenchmarkDotNet provides detailed metrics:

Mean Execution Time

The average time to complete the benchmark. Look for:

- < 1ms - Excellent for simple operations
- 1-10ms - Good for complex analysis
- > 10ms - May need optimization for large projects
Memory Allocations
Total memory allocated during execution:

- < 1 KB - Excellent
- 1-10 KB - Acceptable for complex scenarios
- > 100 KB - Investigate and optimize
Gen 0/1/2 Collections
Garbage collection pressure:

- Gen 0 - Short-lived objects (common)
- Gen 1 - Medium-lived objects (should be minimal)
- Gen 2 - Long-lived objects (should be zero for analyzers)
Gen 2 collections indicate memory retention issues. Analyzers should not hold long-lived references.
Avoiding Regressions
Before submitting a pull request, establish a performance baseline and compare your changes against it.

Creating a Baseline
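One way to capture a baseline (a sketch only; the `--project benchmarks` path and JSON exporter choice are assumptions, not documented CommentSense conventions):

```shell
# On main, run the suite and export results for later comparison.
git switch main
dotnet run -c Release --project benchmarks -- --exporters json

# Re-run on your feature branch, then diff the two JSON reports
# (BenchmarkDotNet writes them under BenchmarkDotNet.Artifacts/results/).
git switch my-feature-branch
dotnet run -c Release --project benchmarks -- --exporters json
```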
Performance Testing Checklist
Before submitting changes that modify analyzer logic:

- Run all relevant benchmarks in Release mode
- Compare results with the baseline from the main branch
- Ensure no significant performance regressions (>10%)
- Verify memory allocations haven’t increased substantially
- Check that Gen 2 collections remain at zero
- Run dogfooding benchmarks to test real-world impact
- Document any performance improvements in the PR
Continuous Monitoring
The CommentSense project maintains performance metrics over time:

- CI benchmarks run on significant changes
- Release benchmarks validate each version
- Regression tracking monitors trends across commits
Performance is a feature. Treat performance regressions as bugs and prioritize fixing them.