Why Rev-dep is Fast
1. Built with Go
Rev-dep is written in Go, not JavaScript. This provides several performance advantages.
Native Compilation
- No runtime overhead - Runs as native machine code, not interpreted JavaScript
- Direct system calls - File I/O operations are fast and efficient
- Small binary - Single executable with no dependencies
Efficient Memory Management
- Static typing - No type checking overhead at runtime
- Predictable memory layout - Structs are laid out efficiently in memory
- Optimized garbage collection - Go’s GC is tuned for high-throughput applications
Concurrent by Design
- Goroutines - Lightweight threads with minimal overhead (~2KB stack)
- CSP model - Channels enable safe communication between concurrent tasks
- Work stealing scheduler - Efficiently distributes goroutines across CPU cores
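The bullets above can be made concrete with a few lines of Go. The worker pool below is purely illustrative (the function names are invented, not Rev-dep's actual code): goroutines pull file names from a channel and send results back, CSP style.

```go
package main

import (
	"fmt"
	"sync"
)

// process fans file names out to a small pool of goroutines over
// channels and gathers the results. Illustrative sketch only.
func process(files []string) []string {
	jobs := make(chan string)
	results := make(chan string)
	var wg sync.WaitGroup

	for w := 0; w < 4; w++ { // goroutines are cheap (~2KB stacks)
		wg.Add(1)
		go func() {
			defer wg.Done()
			for f := range jobs {
				results <- "parsed:" + f // pretend-parse
			}
		}()
	}
	go func() { // feed the jobs, then signal there are no more
		for _, f := range files {
			jobs <- f
		}
		close(jobs)
	}()
	go func() { // close results once every worker has drained jobs
		wg.Wait()
		close(results)
	}()

	var out []string
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	fmt.Println(len(process([]string{"a.ts", "b.ts", "c.ts"}))) // 3
}
```

Closing `jobs` ends the workers' range loops, and closing `results` after `wg.Wait()` ends the collector, so no locks are needed anywhere.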
Performance Comparison
Node.js approach: Parse files sequentially → Build graph → Run check
Rev-dep approach: Parse files in parallel → Build graph once → Run all checks in parallel
2. Aggressive Parallelization
Rev-dep parallelizes work at every opportunity.
File Parsing Parallelization
Graph Building Parallelization
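Each entry point's subtree can be resolved by its own goroutine. The sketch below is hypothetical (`resolveTree`, `analyzeAll`, and the graph shape are invented stand-ins), but it shows the shape of the idea:

```go
package main

import (
	"fmt"
	"sync"
)

// resolveTree counts the files reachable from one entry point.
func resolveTree(entry string, graph map[string][]string) int {
	seen := map[string]bool{}
	var walk func(n string)
	walk = func(n string) {
		if seen[n] {
			return
		}
		seen[n] = true
		for _, dep := range graph[n] {
			walk(dep)
		}
	}
	walk(entry)
	return len(seen)
}

// analyzeAll walks every entry point concurrently; the graph itself
// is read-only here, so only the result map needs a mutex.
func analyzeAll(entries []string, graph map[string][]string) map[string]int {
	var (
		mu    sync.Mutex
		wg    sync.WaitGroup
		sizes = make(map[string]int)
	)
	for _, e := range entries {
		wg.Add(1)
		go func(e string) { // one goroutine per entry point
			defer wg.Done()
			n := resolveTree(e, graph)
			mu.Lock()
			sizes[e] = n
			mu.Unlock()
		}(e)
	}
	wg.Wait()
	return sizes
}

func main() {
	g := map[string][]string{"a": {"b"}, "b": {"c"}, "c": nil, "d": {"c"}}
	fmt.Println(analyzeAll([]string{"a", "d"}, g)) // map[a:3 d:2]
}
```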
When analyzing multiple entry points, each entry point's subtree is traversed concurrently.
Config-Based Check Parallelization
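Independent checks can each take their own goroutine over one shared, read-only tree. An illustrative sketch (`Tree`, `Check`, and `runChecks` are invented names):

```go
package main

import (
	"fmt"
	"sync"
)

type Tree map[string][]string

type Check func(Tree) int

// runChecks executes every check concurrently against one shared tree.
func runChecks(t Tree, checks []Check) []int {
	out := make([]int, len(checks))
	var wg sync.WaitGroup
	for i, c := range checks {
		wg.Add(1)
		go func(i int, c Check) {
			defer wg.Done()
			out[i] = c(t) // each goroutine writes only its own slot
		}(i, c)
	}
	wg.Wait()
	return out
}

func main() {
	tree := Tree{"a": {"b"}, "b": nil, "orphan": nil}
	countFiles := func(t Tree) int { return len(t) }
	countEdges := func(t Tree) int {
		n := 0
		for _, deps := range t {
			n += len(deps)
		}
		return n
	}
	fmt.Println(runChecks(tree, []Check{countFiles, countEdges})) // [3 1]
}
```

Because the tree is never mutated after construction, the checks need no locking at all; each result lands in its own pre-allocated slot.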
When running multiple checks, all of them execute concurrently against the shared dependency tree.
3. Optimized Parser
Rev-dep includes a custom-built lexer optimized for speed.
Single-Pass Parsing
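A heavily simplified sketch of what a single-pass import scanner can look like: one forward walk over the bytes, collecting the quoted specifier after each `import` keyword. All names here are hypothetical, and a real lexer must also skip strings and comments (see the next subsection).

```go
package main

import "fmt"

// scanImports walks src exactly once, collecting the quoted module
// specifier that follows each `import` keyword. Simplified sketch.
func scanImports(src []byte) []string {
	var out []string
	i := 0
	for i < len(src) {
		if !wordAt(src, i, "import") {
			i++
			continue
		}
		i += len("import")
		// advance to the specifier's opening quote on this statement
		for i < len(src) && src[i] != '"' && src[i] != '\'' && src[i] != '\n' {
			i++
		}
		if i < len(src) && (src[i] == '"' || src[i] == '\'') {
			q := src[i]
			i++
			start := i
			for i < len(src) && src[i] != q {
				i++
			}
			out = append(out, string(src[start:i]))
		}
	}
	return out
}

// wordAt reports whether the keyword w starts at i on a word boundary.
func wordAt(src []byte, i int, w string) bool {
	if i+len(w) > len(src) || string(src[i:i+len(w)]) != w {
		return false
	}
	before := i == 0 || !isIdent(src[i-1])
	after := i+len(w) == len(src) || !isIdent(src[i+len(w)])
	return before && after
}

func isIdent(b byte) bool {
	return b == '_' || ('a' <= b && b <= 'z') || ('A' <= b && b <= 'Z') || ('0' <= b && b <= '9')
}

func main() {
	src := []byte("import \"./a\";\nimport x from './b'\n")
	fmt.Println(scanImports(src)) // [./a ./b]
}
```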
The parser makes one pass through the file.
Efficient String Skipping
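Skipping a string costs constant work per byte: jump past the opening quote, then advance until the matching close, honoring escapes. A sketch (template-literal interpolation is omitted for brevity):

```go
package main

import "fmt"

// skipString takes the index of an opening quote and returns the index
// just past the matching closing quote, honoring backslash escapes.
func skipString(src []byte, i int) int {
	q := src[i] // opening quote: " or ' (or ` for template literals)
	i++
	for i < len(src) {
		switch src[i] {
		case '\\':
			i += 2 // escaped character: skip both bytes
		case q:
			return i + 1
		default:
			i++
		}
	}
	return i // unterminated string: stop at end of file
}

func main() {
	src := []byte(`"a\"b" rest`)
	fmt.Println(skipString(src, 0)) // 6, the index of the space after the string
}
```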
Strings, comments, and template literals are skipped efficiently.
Depth Tracking Optimization
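Tracking brace depth lets the scanner cheapen its search inside function bodies: static `import` statements are legal only at the top level, so at depth greater than zero only dynamic `import(` calls matter. A simplified sketch (real code must also ignore braces inside strings and comments):

```go
package main

import (
	"bytes"
	"fmt"
)

// countDynamicImports tracks brace depth in one pass; while depth > 0
// the only pattern worth detecting is a dynamic import( call.
func countDynamicImports(src []byte) int {
	depth, n := 0, 0
	for i := 0; i < len(src); i++ {
		switch src[i] {
		case '{':
			depth++
		case '}':
			depth--
		case 'i':
			if depth > 0 && bytes.HasPrefix(src[i:], []byte("import(")) {
				n++
			}
		}
	}
	return n
}

func main() {
	src := []byte(`function f() { return import("./x"); }` + "\n" + `import "./y";`)
	fmt.Println(countDynamicImports(src)) // 1
}
```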
When inside braces (functions, classes, etc.), only dynamic imports need to be detected.
4. Minimal Allocations
Rev-dep minimizes memory allocations.
Reuse Slices
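The pattern is the standard `buf = buf[:0]` reset: truncating a slice keeps its backing array, so steady-state iterations allocate nothing. A sketch with invented names:

```go
package main

import (
	"bytes"
	"fmt"
)

// countImports reuses one scratch slice across files instead of
// allocating a fresh slice per file. Hypothetical sketch.
func countImports(files [][]byte) int {
	total := 0
	buf := make([][]byte, 0, 64) // reused scratch space
	for _, f := range files {
		buf = buf[:0] // reset length, keep capacity
		for _, line := range bytes.Split(f, []byte("\n")) {
			if bytes.HasPrefix(line, []byte("import")) {
				buf = append(buf, line)
			}
		}
		total += len(buf)
	}
	return total
}

func main() {
	files := [][]byte{
		[]byte("import a\nconst x = 1"),
		[]byte("import b\nimport c"),
	}
	fmt.Println(countImports(files)) // 3
}
```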
Bytes Instead of Strings
The parser operates on []byte instead of string to avoid allocations.
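Slicing a `[]byte` produces views rather than copies, so a scan can defer its single string allocation to the very end. A hypothetical helper, not Rev-dep's API:

```go
package main

import (
	"bytes"
	"fmt"
)

// firstSpecifier stays in []byte throughout; the one string allocation
// happens only for the returned result. Illustrative sketch.
func firstSpecifier(src []byte) string {
	idx := bytes.Index(src, []byte(`from "`))
	if idx < 0 {
		return ""
	}
	rest := src[idx+len(`from "`):] // a view into src, no copy
	end := bytes.IndexByte(rest, '"')
	if end < 0 {
		return ""
	}
	return string(rest[:end]) // the only allocation
}

func main() {
	fmt.Println(firstSpecifier([]byte(`import x from "./a";`))) // ./a
}
```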
String Interning
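A minimal intern table might look like this (a sketch, not Rev-dep's implementation): equal path strings collapse to a single stored copy, so thousands of references to the same path share one backing string.

```go
package main

import "fmt"

// interner maps each path string to its canonical stored copy.
type interner map[string]string

func (in interner) intern(s string) string {
	if v, ok := in[s]; ok {
		return v // reuse the canonical copy
	}
	in[s] = s
	return s
}

func main() {
	in := interner{}
	a := in.intern("src/util.ts")
	b := in.intern("src/util.ts")
	fmt.Println(a == b, len(in)) // true 1
}
```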
File paths are deduplicated to save memory.
5. Single Dependency Tree Build
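The build-once pattern maps naturally onto `sync.Once`; an illustrative sketch with invented names:

```go
package main

import (
	"fmt"
	"sync"
)

type depTree struct {
	deps map[string][]string
}

// buildOnce wraps a tree constructor so the expensive build runs a
// single time no matter how many checks ask for the tree.
func buildOnce(build func() *depTree) func() *depTree {
	var (
		once sync.Once
		t    *depTree
	)
	return func() *depTree {
		once.Do(func() { t = build() })
		return t
	}
}

func main() {
	builds := 0
	get := buildOnce(func() *depTree {
		builds++ // the expensive full-project scan would happen here
		return &depTree{deps: map[string][]string{}}
	})
	get() // first check triggers the build
	get() // second check reuses it
	fmt.Println(builds) // 1
}
```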
When running config-based checks, Rev-dep builds the dependency tree once and reuses it.
Benchmark Results
Rev-dep has been benchmarked against popular alternatives on a real-world TypeScript monorepo.
Test environment:
- Codebase: 6,034 source files, 518,862 lines of code
- Machine: Intel i9-14900KF @ 2.80GHz (WSL Linux Debian)
- Method: hyperfine with 8 runs per test, 4 warmup runs
Circular Dependency Detection
| Tool | Version | Time | vs Rev-dep |
|---|---|---|---|
| Rev-dep | 2.0.0 | 289 ms | 1x (baseline) |
| dpdm-fast | 1.0.14 | 7,061 ms | 24x slower |
| dpdm | 3.14.0 | 5,030 ms | 17x slower |
| skott | 0.35.6 | 29,575 ms | 102x slower |
| madge | 8.0.0 | 69,328 ms | 240x slower |
Unused Exports Detection
| Tool | Time | vs Rev-dep |
|---|---|---|
| Rev-dep | 303 ms | 1x |
| knip | 6,606 ms | 22x slower |
Unused Files Detection
| Tool | Time | vs Rev-dep |
|---|---|---|
| Rev-dep | 277 ms | 1x |
| knip | 6,596 ms | 23x slower |
Unused Node Modules
| Tool | Time | vs Rev-dep |
|---|---|---|
| Rev-dep | 287 ms | 1x |
| knip | 6,572 ms | 22x slower |
Missing Node Modules
| Tool | Time | vs Rev-dep |
|---|---|---|
| Rev-dep | 270 ms | 1x |
| knip | 6,568 ms | 24x slower |
List Imported Files
| Tool | Time | vs Rev-dep |
|---|---|---|
| Rev-dep | 229 ms | 1x |
| madge | 4,467 ms | 20x slower |
Discover Entry Points
| Tool | Time | vs Rev-dep |
|---|---|---|
| Rev-dep | 323 ms | 1x |
| madge | 67,000 ms | 207x slower |
Real-World Impact
On a 500k LoC codebase:
- Rev-dep: 500 ms total for all checks
- Alternative tools: 30+ seconds for equivalent checks
- CI time saved: 30 seconds per PR × 100 PRs/day = 50 minutes/day
Scalability
Rev-dep scales efficiently with codebase size.
Linear Time Complexity
Most operations are O(n) or O(n log n):
- File parsing: O(n) - each file parsed once
- Graph building: O(n + e) - nodes + edges
- Circular detection: O(n + e) - DFS traversal
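The circular-detection bound comes from the standard three-color DFS, which visits every node and edge at most once. A sketch of that algorithm (not Rev-dep's exact code):

```go
package main

import "fmt"

// hasCycle runs a three-color depth-first search: a "gray" neighbor
// means we found a back edge to the current path, i.e. a cycle.
func hasCycle(graph map[string][]string) bool {
	const (
		white = iota // unvisited
		gray         // on the current DFS path
		black        // fully explored
	)
	color := map[string]int{}
	var visit func(n string) bool
	visit = func(n string) bool {
		color[n] = gray
		for _, m := range graph[n] {
			switch color[m] {
			case gray:
				return true // back edge: cycle found
			case white:
				if visit(m) {
					return true
				}
			}
		}
		color[n] = black
		return false
	}
	for n := range graph {
		if color[n] == white && visit(n) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(hasCycle(map[string][]string{"a": {"b"}, "b": {"a"}})) // true
	fmt.Println(hasCycle(map[string][]string{"a": {"b"}, "b": nil}))   // false
}
```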
Memory Efficiency
Memory usage scales predictably:
- ~100 bytes per file in dependency tree
- ~50 bytes per import relationship
- For 10,000 files with 50,000 imports: ~6MB memory
CPU Utilization
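In Go, sizing work to the machine typically means consulting `runtime.GOMAXPROCS` (calling it with 0 queries the limit without changing it). A sketch of how a worker count might be chosen:

```go
package main

import (
	"fmt"
	"runtime"
)

// workerCount sizes a worker pool to the number of cores the Go
// scheduler will actually use.
func workerCount() int {
	n := runtime.GOMAXPROCS(0) // query without modifying
	if n < 1 {
		n = 1
	}
	return n
}

func main() {
	fmt.Println(workerCount() >= 1) // true
}
```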
Rev-dep automatically uses all available CPU cores.
Performance Tips
1. Use Config Files
Always prefer config-based execution; a single run builds the dependency tree once and shares it across all configured checks.
2. Use .gitignore
Rev-dep respects .gitignore files. Ensure generated files are excluded.
3. Limit Entry Points
If you have many entry points, consider filtering to the ones you actually need.
4. Use Includes/Excludes
Narrow the scope of expensive checks.
5. Run in CI Efficiently
Future Optimizations
Planned performance improvements:
- Incremental analysis - Only re-analyze changed files
- Result caching - Cache analysis between runs
- Streaming results - Output results as they’re found
- SIMD parsing - Use CPU vector instructions for byte scanning
