Oxc’s performance comes from a combination of Rust’s zero-cost abstractions, arena memory allocation, hand-tuned algorithms, and parallel processing.
Performance Targets
| Component | Target | Status |
|---|---|---|
| Parser | 10-50x faster than existing parsers | ✅ Achieved |
| Linter | 50-100x faster than ESLint | ✅ Achieved |
| Transformer | 10-20x faster than Babel | ✅ Achieved |
| Minifier | Competitive with terser/esbuild | 🚧 In Progress |
| Formatter | Competitive with Prettier | 🚧 In Progress |
Parser Performance
The parser is the foundation of all Oxc tools. Parsing is often the bottleneck in JavaScript toolchains, so optimizing it has massive downstream impact.Implementation Strategy
Arena Allocation
AST allocated in memory arena for fast allocation and deallocation
Inlined Strings
Short strings inlined using CompactString
Minimal Allocations
No heap allocations except arena and strings
Deferred Work
Scope binding and symbols delegated to semantic analyzer
Key Optimizations
1. Arena Memory Allocation
Problem: Traditional parsers allocate each AST node individually on the heap. Solution: Oxc allocates all AST nodes in a single memory arena usingoxc_allocator.
- Faster allocation: Bump pointer allocation is ~10x faster than individual heap allocations
- Faster deallocation: Single arena drop vs thousands of individual deallocations
- Better cache locality: Nodes allocated sequentially improve CPU cache hits
- No fragmentation: Arena grows contiguously
Rc/Arc-based allocation
2. String Optimization with CompactString
Problem: JavaScript identifiers and small strings are allocated frequently. Solution: UseCompactString for inline storage.
- Zero allocations for short identifiers (most common case)
- Cache-friendly: String data stored directly in AST node
- Transparent: Automatically upgrades to heap for long strings
3. Efficient Span Representation
Problem: Source positions need to be tracked for every node. Solution: Useu32 offsets instead of usize.
- 50% smaller span representation
- Better cache usage: Smaller nodes fit more per cache line
- Faster copying: Less data to copy
4. Deferred Semantic Analysis
Problem: Many parsers try to do too much during parsing. Solution: Parser only builds AST. Symbol resolution and scope binding are delegated tooxc_semantic.
What Parser Does NOT Do:
- ❌ Build symbol tables
- ❌ Resolve identifier references
- ❌ Check certain syntax errors (e.g., duplicate parameters)
- ❌ Build scope chains
- Faster parsing: Simpler, more focused code
- Better separation: Clear responsibility boundaries
- Parallelizable: Semantic analysis can be done separately
5. Hand-Written Recursive Descent
Decision: Hand-written parser instead of parser generator Benefits:- Better error messages: Custom error handling for each production
- Faster execution: No indirection through parser tables
- Easier optimization: Manual control over hot paths
- Faster compilation: No large generated code
Parser Benchmarks
Performance comparison on real-world codebases:All benchmarks run on the same machine with the same files. Times are averages over multiple runs.
Parse Speed Comparison
| Parser | Speed (files/sec) | Relative Speed |
|---|---|---|
| Oxc | 1,200-1,500 | 1x (baseline) |
| swc | 800-1,000 | 0.7x |
| esbuild | 600-800 | 0.5x |
| @babel/parser | 80-120 | 0.08x |
| TypeScript | 40-60 | 0.04x |
Oxc is 15-30x faster than Babel and 20-40x faster than TypeScript’s parser.
Memory Usage
Parsing 10,000 files:| Parser | Peak Memory | Allocations |
|---|---|---|
| Oxc | 1.2 GB | 10,000 (one per file) |
| Babel | 3.5 GB | 8,500,000+ |
| TypeScript | 4.2 GB | 12,000,000+ |
Linter Performance
The linter is Oxc’s flagship application, designed for maximum performance on large codebases.Implementation Strategy
Oxc Parser
Use the fastest parser available
Linear Memory Scan
AST visit is linear scan through arena memory
Multi-threaded
Files linted in parallel across CPU cores
Tuned Rules
Every rule optimized for performance
Key Optimizations
1. Parallel File Processing
Strategy: Process multiple files simultaneously using thread pools Implementation:- Each thread gets its own arena allocator
- No shared state during linting (except read-only config)
- Perfect scaling with CPU core count
- 1 core: baseline
- 4 cores: 3.8x faster
- 8 cores: 7.2x faster
- 16 cores: 13.5x faster
2. Linear Memory Scanning
Problem: Random memory access is slow due to cache misses. Solution: AST traversal is sequential scan through arena memory.- Predictable memory access patterns
- High cache hit rate: Next node likely in cache
- No pointer chasing: All nodes in single allocation
3. Optimized Rule Implementation
Every lint rule is tuned for performance: Pattern Matching Optimization:4. Selective Semantic Analysis
Not all rules need full semantic analysis:| Analysis Level | Rules | Example |
|---|---|---|
| Syntax Only | ~30% | no-debugger, no-console |
| Scopes | ~40% | no-unused-vars, no-shadow |
| Full Semantic | ~30% | no-undef, type-aware rules |
Linter Benchmarks
Real-World Performance: VSCode Repository
Linting the VSCode repository (4,800+ files):| Linter | Time | Files/sec | Relative Speed |
|---|---|---|---|
| oxlint | 0.7s | ~6,850 | 1x (baseline) |
| Quick Lint JS | 1.2s | ~4,000 | 0.6x |
| ESLint (with cache) | 43s | ~112 | 0.016x |
| ESLint (no cache) | 87s | ~55 | 0.008x |
oxlint is 60-120x faster than ESLint on large codebases.
Performance by Core Count
Linting 10,000 files with different core counts: Near-perfect scaling: Each additional core provides proportional speedup.Memory Efficiency
Linting 10,000 files:| Linter | Peak Memory | Memory/File |
|---|---|---|
| oxlint | 2.1 GB | 210 KB |
| ESLint | 8.5 GB | 850 KB |
Memory Management
Memory management is critical to Oxc’s performance. The arena allocator eliminates most allocation overhead.Arena Allocation Strategy
Allocation Patterns
During Parsing
- Traditional parser: 5,000-10,000 individual heap allocations
- Oxc parser: 1 arena allocation + ~50 string allocations
Memory Reuse
When processing many files:- Amortize allocation cost across files
- Reduce memory churn
- Better memory locality
Memory Layout Optimization
Cache-Friendly Structures
AST nodes are designed for cache efficiency:- Modern CPUs use 64-byte cache lines
- Oxc structures sized to maximize cache line utilization
- Sequential allocation means adjacent nodes often in same cache line
Measured Cache Performance
Cache miss rates (parsing 1000 files):| Implementation | L1 Cache Misses | L2 Cache Misses |
|---|---|---|
| Oxc (arena) | 2.3% | 0.8% |
| Traditional (Rc) | 8.7% | 3.2% |
Optimization Techniques
1. Hot Path Optimization
Identify and optimize frequently executed code:- Force inlining with
#[inline] - Avoid branches in hot loops
- Use CPU-friendly patterns (sequential access)
2. Minimize Allocations
3. Branch Prediction Hints
4. SIMD for String Operations
For operations like validation:Performance Monitoring
Continuous Benchmarking
Oxc uses CodSpeed for continuous performance monitoring:- Automated benchmarks on every PR
- Regression detection: Flag performance degradations
- Historical tracking: See performance trends over time
Benchmark Suite
Located intasks/benchmark/:
Profiling Tools
For development:Performance Best Practices
When contributing to Oxc:Measure First
Profile before optimizing - measure impact
Avoid Allocations
Use references and iterators when possible
Cache-Friendly
Sequential access patterns, compact structures
Benchmark Changes
Run benchmarks to verify improvements
Common Pitfalls
-
Unnecessary Cloning
-
Repeated Lookups
-
Allocating in Loops
Benchmarks Summary
Parser Performance
Oxc parser is 15-30x faster than Babel and 20-40x faster than TypeScript.
- Speed: 1,200-1,500 files/sec
- Memory: 3-4x less than competitors
- Allocations: 99% fewer individual allocations
Linter Performance
oxlint is 60-120x faster than ESLint on large codebases.
- Speed: ~7,000 files/sec (VSCode benchmark)
- Scaling: Near-linear with core count
- Memory: 4x less than ESLint
Key Techniques
- Arena Allocation: Single allocation per file
- Parallel Processing: Scale with CPU cores
- Zero-Copy Operations: Borrowed references throughout
- Cache Optimization: Sequential memory access
- Minimal Allocations: Reuse buffers and avoid cloning
Further Reading
Architecture Overview
Learn about Oxc’s overall architecture and components
Design Principles
Understand the principles behind Oxc’s design
Contributing
Start contributing to Oxc’s performance
Benchmarks
View live performance benchmarks