Rev-dep is designed for speed. It can analyze a 500k+ line TypeScript codebase in roughly 500 ms, which is 20x-240x faster than the alternatives benchmarked below.

Why Rev-dep is Fast

1. Built with Go

Rev-dep is written in Go, not JavaScript. This provides several performance advantages:

Native Compilation

  • No runtime overhead - Runs as native machine code rather than on a JavaScript VM
  • Direct system calls - File I/O operations are fast and efficient
  • Small binary - Single executable with no dependencies

Efficient Memory Management

  • Static typing - No type checking overhead at runtime
  • Predictable memory layout - Structs are laid out efficiently in memory
  • Optimized garbage collection - Go’s GC is tuned for high-throughput applications

Concurrent by Design

  • Goroutines - Lightweight threads with minimal overhead (~2KB stack)
  • CSP model - Channels enable safe communication between concurrent tasks
  • Work stealing scheduler - Efficiently distributes goroutines across CPU cores

Performance Comparison

Node.js approach: Parse files sequentially → Build graph → Run check
Rev-dep approach: Parse files in parallel → Build graph once → Run all checks in parallel

2. Aggressive Parallelization

Rev-dep parallelizes work at every opportunity:

File Parsing Parallelization

// From parseImports.go:1505
func ParseImportsFromFiles(filePaths []string, ...) {
    var (
        wg      sync.WaitGroup
        mu      sync.Mutex
        results [][]Import
    )

    // Create worker pool with 2x CPU cores
    maxConcurrency := runtime.GOMAXPROCS(0) * 2
    sem := make(chan struct{}, maxConcurrency)

    for _, filePath := range filePaths {
        wg.Add(1)
        sem <- struct{}{} // Acquire semaphore

        go func(path string) {
            defer wg.Done()
            defer func() { <-sem }() // Release semaphore

            // Parse this file in parallel
            fileContent, _ := os.ReadFile(path)
            imports := ParseImportsByte(fileContent, ...)

            // Safe concurrent append
            mu.Lock()
            results = append(results, imports)
            mu.Unlock()
        }(filePath)
    }

    wg.Wait()
}
On an 8-core machine, Rev-dep can parse 16 files simultaneously.

Graph Building Parallelization

When analyzing multiple entry points:
// From main.go:274
for _, entryPoint := range entryPoints {
    wg.Add(1)
    sem <- struct{}{}
    
    go func(ep string) {
        defer wg.Done()
        defer func() { <-sem }()
        
        // Build dependency graph for this entry point
        depsGraph := buildDepsGraphForMultiple(minimalTree, []string{ep}, ...)
        
        mu.Lock()
        allGraphs = append(allGraphs, depsGraph)
        mu.Unlock()
    }(entryPoint)
}

Config-Based Check Parallelization

When running multiple checks:
// Pseudo-code for config execution
// 1. Build dependency tree ONCE
minimalTree := GetMinimalDepsTreeForCwd(...)

// 2. Process all rules in parallel
var rules sync.WaitGroup
for _, rule := range config.Rules {
    rules.Add(1)
    go func(r Rule) {
        defer rules.Done()

        // 3. Run all checks for this rule in parallel
        var wg sync.WaitGroup
        wg.Add(3)

        go func() { defer wg.Done(); checkCircularDeps(minimalTree) }()
        go func() { defer wg.Done(); checkOrphanFiles(minimalTree) }()
        go func() { defer wg.Done(); checkUnusedExports(minimalTree) }()

        wg.Wait()
    }(rule)
}
rules.Wait()

3. Optimized Parser

Rev-dep includes a custom-built lexer optimized for speed:

Single-Pass Parsing

The parser makes one pass through the file:
// From parseImports.go:1370
for i < n {
    switch code[i] {
    case 'i':
        // Check for 'import' keyword
        if isImportKeywordStart(i) {
            module, next := parseImportStatement(i)
            imports = append(imports, module)
            i = next
            continue
        }
    case 'e':
        // Check for 'export' keyword
        if isExportKeywordStart(i) {
            module, next := parseExportStatement(i)
            imports = append(imports, module)
            i = next
            continue
        }
    }
    i++
}
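The loop above is an excerpt from the real parser. A self-contained simplification of the same single-pass shape (a hypothetical `scanImports` helper, not the actual rev-dep code, handling only static `import` statements) looks like this:

```go
package main

import "fmt"

// scanImports walks the byte slice once and collects module specifiers
// from `import ... from "x"` and `import "x"` statements. The real parser
// also handles exports, comments, and template literals; this sketch only
// illustrates the single-pass loop shape.
func scanImports(code []byte) []string {
	var imports []string
	n := len(code)
	i := 0
	for i < n {
		// Match the 'import' keyword at a line/statement boundary.
		if code[i] == 'i' && i+6 <= n && string(code[i:i+6]) == "import" &&
			(i == 0 || code[i-1] == '\n' || code[i-1] == ';' || code[i-1] == ' ') {
			// Scan forward to the quoted module specifier on this line.
			j := i + 6
			for j < n && code[j] != '\'' && code[j] != '"' && code[j] != '\n' {
				j++
			}
			if j < n && (code[j] == '\'' || code[j] == '"') {
				quote := code[j]
				k := j + 1
				for k < n && code[k] != quote {
					k++
				}
				imports = append(imports, string(code[j+1:k]))
				i = k + 1
				continue
			}
		}
		i++
	}
	return imports
}

func main() {
	src := []byte("import { a } from \"./a\"\nimport \"./side-effect\"\nconst x = 1\n")
	fmt.Println(scanImports(src)) // [./a ./side-effect]
}
```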

Efficient String Skipping

Strings, comments, and template literals are skipped efficiently:
// From parseImports.go:299
func skipToStringEnd(code []byte, start int, quote byte) int {
    i := start + 1
    for i < len(code) {
        if code[i] == quote {
            return i
        }
        if code[i] == '\\' && i+1 < len(code) {
            i += 2  // Skip escape sequence
        } else {
            i++
        }
    }
    return i
}
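Repeated here as a runnable snippet, the skipper steps over escaped quotes in one forward scan and returns the index of the closing quote:

```go
package main

import "fmt"

// Same skipper as above: advances from the opening quote to the matching
// closing quote, jumping two bytes whenever it sees a backslash escape.
func skipToStringEnd(code []byte, start int, quote byte) int {
	i := start + 1
	for i < len(code) {
		if code[i] == quote {
			return i
		}
		if code[i] == '\\' && i+1 < len(code) {
			i += 2 // skip escape sequence
		} else {
			i++
		}
	}
	return i
}

func main() {
	src := []byte(`"he said \"hi\"" rest`)
	end := skipToStringEnd(src, 0, '"')
	// The escaped inner quotes are stepped over, not treated as the end.
	fmt.Println(end, string(src[:end+1]))
}
```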

Depth Tracking Optimization

When inside braces (functions, classes, etc.), only dynamic imports need to be detected:
// From parseImports.go:1374
if depth > 0 {
    // Fast path: only look for dynamic import() and require()
    switch code[i] {
    case '{':
        depth++
    case '}':
        depth--
    case 'i':
        if isImportKeywordStart(i) && i+6 < n && code[i+6] == '(' {
            // Dynamic import
        }
    }
} else {
    // Full keyword scanning at top level
}
This optimization significantly speeds up parsing large files with many function definitions.

4. Minimal Allocations

Rev-dep minimizes memory allocations:

Reuse Slices

// Pre-allocate with capacity
imports := make([]Import, 0, 32)
path := make([]string, 0, 64)

// Reuse the same path slice during DFS
path = append(path, node)
// ...
path = path[:len(path)-1]  // Shrink instead of allocating new
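A minimal sketch of this pattern (illustrative, not rev-dep's actual traversal code): a single path slice grows and shrinks across the whole DFS, so only the snapshot copied at each leaf allocates.

```go
package main

import "fmt"

// collectPaths walks a dependency map depth-first, reusing one path slice:
// append before descending, truncate after returning. Leaves are copied out
// because the slice's backing array is reused on the way back up.
func collectPaths(graph map[string][]string, node string, path []string, out *[][]string) []string {
	path = append(path, node)
	deps := graph[node]
	if len(deps) == 0 {
		leaf := make([]string, len(path))
		copy(leaf, path) // snapshot the current root-to-leaf path
		*out = append(*out, leaf)
	}
	for _, dep := range deps {
		path = collectPaths(graph, dep, path, out)
	}
	return path[:len(path)-1] // shrink instead of allocating a new slice
}

func main() {
	graph := map[string][]string{
		"main.ts": {"a.ts", "b.ts"},
		"a.ts":    {"util.ts"},
	}
	var paths [][]string
	collectPaths(graph, "main.ts", make([]string, 0, 64), &paths)
	fmt.Println(paths) // [[main.ts a.ts util.ts] [main.ts b.ts]]
}
```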

Bytes Instead of Strings

Parser operates on []byte instead of string to avoid allocations:
func ParseImportsByte(code []byte, ...) []Import {
    // Work directly with bytes
    if code[i] == 'i' && code[i+1] == 'm' && code[i+2] == 'p' ...
}

String Interning

File paths are deduplicated to save memory:
visited := make(map[string]bool)
if visited[filePath] {
    return  // Already processed: skip the duplicate and reuse the stored path
}
visited[filePath] = true
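The same idea as an explicit interning helper (a hypothetical sketch, not rev-dep's API): the first occurrence of a path becomes the canonical copy, and every later equal string resolves to it, so duplicate path strings share one backing allocation.

```go
package main

import "fmt"

// interner maps each string value to its first-seen canonical copy.
type interner map[string]string

// intern stores s on first sight and returns the stored copy thereafter,
// letting callers drop their own duplicate allocations.
func (in interner) intern(s string) string {
	if canonical, ok := in[s]; ok {
		return canonical
	}
	in[s] = s
	return s
}

func main() {
	in := interner{}
	a := in.intern("src/app/main.ts")
	b := in.intern("src/app/main.ts") // second occurrence: canonical copy returned
	fmt.Println(a == b, len(in))      // true 1
}
```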

5. Single Dependency Tree Build

When running config-based checks, Rev-dep builds the dependency tree once and reuses it:
// From main.go:1442
func GetMinimalDepsTreeForCwd(cwd string, ...) MinimalDependencyTree {
    // 1. Discover all files (parallel walk)
    files := GetFiles(cwd, []string{}, excludePatterns)
    
    // 2. Parse imports (parallel parsing)
    fileImportsArr, _ := ParseImportsFromFiles(files, ...)
    
    // 3. Resolve imports (single pass)
    fileImportsArr, _, _ = ResolveImports(...)
    
    // 4. Build minimal tree (single pass)
    minimalTree := TransformToMinimalDependencyTreeCustomParser(fileImportsArr)
    
    return minimalTree  // Reused for all checks
}
Compare this to running separate commands:
# Slow: builds tree 3 times
rev-dep circular       # Build tree → detect cycles
rev-dep node-modules unused  # Build tree → check modules
rev-dep entry-points   # Build tree → find entry points

# Fast: builds tree once
rev-dep config run     # Build tree → run all checks in parallel

Benchmark Results

Rev-dep has been benchmarked against popular alternatives on a real-world TypeScript monorepo.

Test environment:
  • Codebase: 6,034 source files, 518,862 lines of code
  • Machine: Intel i9-14900KF @ 2.80GHz (WSL Linux Debian)
  • Method: hyperfine with 8 runs per test, 4 warmup runs

Circular Dependency Detection

| Tool      | Version | Time      | vs Rev-dep    |
| --------- | ------- | --------- | ------------- |
| Rev-dep   | 2.0.0   | 289 ms    | 1x (baseline) |
| dpdm-fast | 1.0.14  | 7,061 ms  | 24x slower    |
| dpdm      | 3.14.0  | 5,030 ms  | 17x slower    |
| skott     | 0.35.6  | 29,575 ms | 102x slower   |
| madge     | 8.0.0   | 69,328 ms | 240x slower   |

Unused Exports Detection

| Tool    | Time     | vs Rev-dep |
| ------- | -------- | ---------- |
| Rev-dep | 303 ms   | 1x         |
| knip    | 6,606 ms | 22x slower |

Unused Files Detection

| Tool    | Time     | vs Rev-dep |
| ------- | -------- | ---------- |
| Rev-dep | 277 ms   | 1x         |
| knip    | 6,596 ms | 23x slower |

Unused Node Modules

| Tool    | Time     | vs Rev-dep |
| ------- | -------- | ---------- |
| Rev-dep | 287 ms   | 1x         |
| knip    | 6,572 ms | 22x slower |

Missing Node Modules

| Tool    | Time     | vs Rev-dep |
| ------- | -------- | ---------- |
| Rev-dep | 270 ms   | 1x         |
| knip    | 6,568 ms | 24x slower |

List Imported Files

| Tool    | Time     | vs Rev-dep |
| ------- | -------- | ---------- |
| Rev-dep | 229 ms   | 1x         |
| madge   | 4,467 ms | 20x slower |

Discover Entry Points

| Tool    | Time      | vs Rev-dep  |
| ------- | --------- | ----------- |
| Rev-dep | 323 ms    | 1x          |
| madge   | 67,000 ms | 207x slower |

Real-World Impact

On a 500k LoC codebase:
  • Rev-dep: 500ms total for all checks
  • Alternative tools: 30+ seconds for equivalent checks
  • CI time saved: 30 seconds per PR × 100 PRs/day = 50 minutes/day

Scalability

Rev-dep scales efficiently with codebase size:

Linear Time Complexity

Most operations are O(n) or O(n log n):
  • File parsing: O(n) - each file parsed once
  • Graph building: O(n + e) - nodes + edges
  • Circular detection: O(n + e) - DFS traversal
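A standard three-color DFS illustrates why circular detection stays O(n + e): each node is entered once and each edge inspected once. This is a sketch of the textbook algorithm; rev-dep's detector may differ in its details.

```go
package main

import "fmt"

// hasCycle runs a three-color DFS: white (unvisited), grey (on the current
// DFS stack), black (fully explored). Reaching a grey node means the current
// path loops back on itself. Every node is visited once and every edge
// scanned once, giving O(n + e) time.
func hasCycle(graph map[string][]string) bool {
	const (
		white = 0 // not yet visited
		grey  = 1 // on the current DFS path
		black = 2 // fully explored
	)
	color := make(map[string]int, len(graph))
	var visit func(node string) bool
	visit = func(node string) bool {
		color[node] = grey
		for _, dep := range graph[node] {
			switch color[dep] {
			case grey:
				return true // back edge: cycle found
			case white:
				if visit(dep) {
					return true
				}
			}
		}
		color[node] = black
		return false
	}
	for node := range graph {
		if color[node] == white && visit(node) {
			return true
		}
	}
	return false
}

func main() {
	acyclic := map[string][]string{"a": {"b"}, "b": {"c"}, "c": nil}
	cyclic := map[string][]string{"a": {"b"}, "b": {"a"}}
	fmt.Println(hasCycle(acyclic), hasCycle(cyclic)) // false true
}
```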

Memory Efficiency

Memory usage scales predictably:
  • ~100 bytes per file in dependency tree
  • ~50 bytes per import relationship
  • For 10,000 files with 50,000 imports: ~3.5MB of tree data, roughly 6MB resident in practice

CPU Utilization

Rev-dep automatically uses all available CPU cores:
# View CPU usage during analysis
rev-dep config run &
PID=$!
top -p $PID

# You'll see CPU usage approaching 800% on an 8-core machine

Performance Tips

1. Use Config Files

Always prefer config-based execution:
# ❌ Slow: builds tree multiple times
rev-dep circular
rev-dep node-modules unused
rev-dep entry-points

# ✅ Fast: builds tree once
rev-dep config run

2. Use .gitignore

Rev-dep respects .gitignore files. Ensure generated files are excluded:
# .gitignore
node_modules/
dist/
build/
.next/
*.generated.ts

3. Limit Entry Points

If you have many entry points, consider filtering:
{
  "prodEntryPoints": [
    "src/main.tsx",
    "src/pages/**/*.tsx"
  ],
  "devEntryPoints": [
    "scripts/**/*.ts",
    "**/*.test.ts"
  ]
}

4. Use Includes/Excludes

Narrow the scope of expensive checks:
{
  "unusedNodeModulesDetection": {
    "enabled": true,
    "excludeModules": ["@types/**"],  // Skip type packages
    "includeModules": ["@myorg/**"]   // Only check our packages
  }
}

5. Run in CI Efficiently

# .github/workflows/checks.yml
- name: Check dependencies
  run: rev-dep config run
  
- name: Cache rev-dep results
  uses: actions/cache@v3
  with:
    path: .rev-dep-cache
    key: ${{ hashFiles('**/*.ts', '**/*.tsx') }}

Future Optimizations

Planned performance improvements:
  • Incremental analysis - Only re-analyze changed files
  • Result caching - Cache analysis between runs
  • Streaming results - Output results as they’re found
  • SIMD parsing - Use CPU vector instructions for byte scanning
For the latest performance benchmarks and optimization tips, see the GitHub repository.
