Arena Allocator

Oxc’s arena allocator is a cornerstone of its performance. By using bump-based arena allocation instead of traditional heap allocation, Oxc achieves 10-50% performance improvements in parsing and analysis operations.

What is Arena Allocation?

Arena allocation (also called region-based memory management) allocates objects in a contiguous memory region called an “arena.” Instead of freeing objects individually, all memory is released at once when the arena is dropped.
Think of it like a notepad: you write notes from top to bottom (bump allocation), and when you’re done with all the notes, you tear out the whole page at once instead of erasing individual notes.

Why Arena Allocation?

Oxc chose arena allocation over reference counting (Rc/Arc) or garbage collection for several key reasons:

Performance

  • 10-50% faster than reference counting
  • Allocation is just incrementing a pointer
  • Deallocation is free (bulk release)
  • Better CPU cache locality

Simplicity

  • No reference counting overhead
  • No cycle detection needed
  • Clear ownership with lifetimes
  • Predictable memory usage

Zero-Copy

  • String slicing without allocation
  • Structural sharing of AST nodes
  • Efficient sub-tree references
  • No deep cloning needed

Memory Efficiency

  • Single allocation per chunk
  • No per-object metadata
  • Compact memory layout
  • Reduced fragmentation

How It Works

Allocator Anatomy

An Allocator consists of one or more memory chunks:
┌─────────────────────────────────────────────────────┐
│                    Allocator                        │
├─────────────────────────────────────────────────────┤
│  Chunk 1 (4KB)      │  Chunk 2 (8KB)   │  Chunk 3  │
│  [▓▓▓▓▓▓▓▓░░░░]    │  [▓▓▓▓▓▓░░░░░░]  │  (16KB)   │
│   ↑ used   ↑ free  │   ↑ used  ↑ free │  [▓░░░░░░]│
└─────────────────────────────────────────────────────┘
     ▓ = allocated memory
     ░ = available memory
  1. Initial state: No chunks allocated
  2. First allocation: Creates initial chunk based on first object size
  3. Growth: When capacity is reached, adds new chunk (exponentially sized)
  4. Lifetime: All allocations live as long as the allocator
  5. Drop: All memory released at once when allocator is dropped

Creating an Allocator

use oxc_allocator::Allocator;

// Create a new allocator
let allocator = Allocator::default();

// Or explicitly
let allocator = Allocator::new();
The allocator starts empty and creates its first chunk lazily on the first allocation.

Allocating Objects

The allocator provides arena-allocated versions of common data structures:

Box

use oxc_allocator::{Allocator, Box};

let allocator = Allocator::default();

// Allocate a single value
let boxed: Box<i32> = Box::new_in(42, &allocator);
assert_eq!(*boxed, 42);

// Box implements Deref, so you can use it like a reference
let value: &i32 = &boxed;
Box does NOT implement Drop. Objects are never individually dropped; memory is released only when the allocator is dropped. For this reason, you cannot allocate Drop types into the arena.

Vec

use oxc_allocator::{Allocator, Vec};

let allocator = Allocator::default();

// Create an empty vector
let mut vec: Vec<i32> = Vec::new_in(&allocator);

// Create with capacity
let mut vec: Vec<i32> = Vec::with_capacity_in(10, &allocator);

vec.push(1);
vec.push(2);
vec.push(3);

assert_eq!(vec.len(), 3);

String

use oxc_allocator::String;

let allocator = Allocator::default();

// Allocate a string
let mut s = String::from_str_in("Hello", &allocator);
s.push_str(" World");

assert_eq!(&*s, "Hello World");

HashMap and HashSet

use oxc_allocator::{Allocator, HashMap, HashSet};

let allocator = Allocator::default();

let mut map: HashMap<&str, i32> = HashMap::new_in(&allocator);
map.insert("answer", 42);

let mut set: HashSet<i32> = HashSet::new_in(&allocator);
set.insert(1);
set.insert(2);

Lifetimes and Ownership

All arena-allocated objects have a lifetime 'a tied to the allocator:
let allocator = Allocator::default();

// AST nodes borrow from the allocator, so their lifetime is tied to it
let ret = Parser::new(&allocator, source_text, source_type).parse();
let program = ret.program;  // Program<'_> borrows from `allocator`

// This won't compile - can't use the AST after the allocator is dropped:
// drop(allocator);
// println!("{:?}", program);  // ❌ Compile error!
Rust’s lifetime system ensures memory safety. You cannot accidentally use arena-allocated data after the arena is freed.

Zero-Copy Architecture

One of the biggest advantages of arena allocation is zero-copy operations:

String Slicing

let source_text = "const x = 42;";
let allocator = Allocator::default();

let ret = Parser::new(&allocator, source_text, source_type).parse();

// Identifier names are string slices, not copies!
// They point directly into the original source_text
for stmt in &ret.program.body {
    if let Statement::VariableDeclaration(decl) = stmt {
        for declarator in &decl.declarations {
            // This is a &str slice into source_text - zero allocation!
            let name: &str = declarator.id.name.as_str();
        }
    }
}

Structural Sharing

// Multiple AST nodes can reference the same data without copying
let allocator = Allocator::default();

let common_type = /* some TypeScript type */;

// These variables can all share references to common_type
let var1_type: &Type = common_type;
let var2_type: &Type = common_type;
let var3_type: &Type = common_type;

// No cloning or copying needed!

Recycling Allocators

For optimal performance, reuse allocators instead of creating new ones:

✅ Good: Reuse with Reset

let mut allocator = Allocator::new();

for source_file in source_files {
    let source_text = read_file(source_file)?;
    
    // Parse using the allocator
    let ret = Parser::new(&allocator, &source_text, source_type).parse();
    
    // Process the AST
    process_ast(&ret.program);
    
    // Reset for next iteration - keeps largest chunk, rewinds cursor
    allocator.reset();
}

❌ Bad: Creating New Allocators

// DON'T DO THIS!
for source_file in source_files {
    let allocator = Allocator::new();  // ❌ Expensive allocation!
    
    let source_text = read_file(source_file)?;
    let ret = Parser::new(&allocator, &source_text, source_type).parse();
    
    process_ast(&ret.program);
    
    // ❌ Expensive deallocation when allocator drops
}

Why Recycling Matters

Avoid System Calls

Creating/dropping allocators involves expensive system calls to the global allocator

CPU Cache Warmth

Reusing the same memory keeps it warm in CPU cache, making accesses much faster

Stable Capacity

After a few iterations, the allocator stabilizes at the right size for your workload

Allocator Pool

For parallel processing, use an AllocatorPool:
use oxc_allocator::AllocatorPool;
use rayon::prelude::*;

// Create a pool of allocators for parallel processing
let pool = AllocatorPool::new();

source_files.par_iter().for_each(|source_file| {
    // Get an allocator from the pool
    let allocator = pool.allocator();
    
    let source_text = read_file(source_file).unwrap();
    let ret = Parser::new(&allocator, &source_text, source_type).parse();
    
    process_ast(&ret.program);
    
    // Allocator is automatically returned to pool when dropped
});

Performance Characteristics

Allocation Speed

// Traditional heap allocation
let boxed = std::boxed::Box::new(value);  // ~20-50 CPU cycles

// Arena allocation
let boxed = oxc_allocator::Box::new_in(value, &allocator);  // ~5-10 CPU cycles
Arena allocation is 2-5x faster because it’s just:
  1. Check if there’s enough space in current chunk
  2. Increment the bump pointer
  3. Write the value

Memory Overhead

| Allocation Type            | Per-Object Overhead         | Cache Locality         |
| -------------------------- | --------------------------- | ---------------------- |
| Heap (Box)                 | 16+ bytes (malloc metadata) | Poor (scattered)       |
| Reference Counting (Rc)    | 16 bytes (ref counts)       | Poor (scattered)       |
| Arena (oxc_allocator::Box) | 0 bytes                     | Excellent (contiguous) |

Benchmark Results

On typical JavaScript files:
  • Parsing: 30-40% faster than tools using reference counting
  • AST traversal: 20-30% faster due to cache locality
  • Memory usage: 15-25% lower due to no per-object metadata

Common Patterns

Building AST Nodes

use oxc_allocator::{Allocator, Box, Vec};
use oxc_ast::ast::*;

let allocator = Allocator::default();

// Create an identifier
let ident = BindingIdentifier {
    node_id: Cell::new(NodeId::DUMMY),
    span: Span::default(),
    name: Atom::from("x"),
    symbol_id: Cell::new(None),
};

// Create a numeric literal
let literal = NumericLiteral {
    span: Span::default(),
    value: 42.0,
    raw: "42",
    base: NumberBase::Decimal,
};

// Build a variable declaration
let declarator = VariableDeclarator {
    span: Span::default(),
    kind: VariableDeclarationKind::Const,
    id: BindingPattern::from(BindingPatternKind::BindingIdentifier(
        Box::new_in(ident, &allocator)
    )),
    init: Some(Expression::NumericLiteral(
        Box::new_in(literal, &allocator)
    )),
    definite: false,
};

Collecting Results

// Build a vector of results in the arena
let mut statements = Vec::new_in(&allocator);

for item in items {
    let stmt = create_statement(item, &allocator);
    statements.push(stmt);
}

// Return the collected statements (zero-copy)
return statements;

Limitations

Cannot allocate Drop types: Types that implement Drop cannot be allocated in the arena because they won’t be dropped when the arena is freed. This is enforced at compile time.
// This won't compile - std::vec::Vec implements Drop:
let boxed = Box::new_in(std::vec::Vec::<i32>::new(), &allocator);  // ❌

// Use the arena Vec instead:
let vec = oxc_allocator::Vec::<i32>::new_in(&allocator);  // ✅
Most types in Oxc are designed to not require Drop, making them arena-compatible.

Technical Details

Chunk Growth Strategy

Each new chunk is at least 2x the size of the previous chunk:
  1. First allocation: Chunk sized to fit object
  2. Second chunk: 2x first chunk size
  3. Third chunk: 2x second chunk size (4x first chunk)
  4. And so on…
This exponential growth ensures:
  • Amortized O(1) allocation time
  • Minimal number of chunks
  • Good memory utilization

Reset Behavior

allocator.reset();
Reset:
  1. Keeps only the largest chunk (highest capacity)
  2. Frees all other chunks
  3. Rewinds the cursor to the start of the kept chunk
  4. Next allocation reuses this chunk
This is more efficient than creating a new allocator because:
  • No system calls needed
  • Memory is already the right size for your workload
  • Memory is warm in CPU cache

Next Steps

AST Structure

Learn how AST nodes use the allocator

Visitor Pattern

Learn how to traverse arena-allocated ASTs

Performance Guide

Learn more performance optimization techniques
