
Overview

Just-in-time (JIT) compilation translates interpreted JavaScript bytecode into optimized machine code at runtime. Modern JavaScript engines use sophisticated profiling and optimization techniques to identify and accelerate hot code paths, achieving near-native performance.
JIT compilation is not mandated by the language specification, but it is critical for production performance. Engines like V8, SpiderMonkey, and JavaScriptCore use multi-tier JIT systems that balance compilation cost against execution speed.

JIT compilation pipeline

Modern engines use a multi-tier approach that progressively optimizes hot code:
┌─────────────────────────────────────────────────────┐
│                Source Code                          │
└──────────────────┬──────────────────────────────────┘
                   v
         ┌─────────────────────┐
         │   Parser + AST      │
         └──────────┬──────────┘
                    v
         ┌─────────────────────┐
         │  Bytecode Generator │
         └──────────┬──────────┘
                    v
    ┌───────────────────────────────────┐
    │      Interpreter (Ignition)       │  <- Fast startup
    │   Execute bytecode + collect      │     No compilation
    │      profiling data               │
    └──────────┬───────────────┬────────┘
               │               │
               │ Hot?          │ Very hot?
               v               v
    ┌──────────────┐  ┌────────────────┐
    │ Baseline JIT │  │ Optimizing JIT │
    │ (Sparkplug)  │  │   (TurboFan)   │
    │              │  │                │
    │ Quick        │  │ Aggressive     │
    │ compilation  │  │ optimization   │
    │ + profiling  │  │ using profile  │
    └──────────────┘  └────────┬───────┘

                        Assumptions
                        violated?

                               v
                      ┌─────────────────┐
                      │  Deoptimization │
                      │  Back to        │
                      │  interpreter    │
                      └─────────────────┘
Ignition (V8’s interpreter)
  • Executes bytecode directly
  • Fast startup (no compilation delay)
  • Collects type feedback for later optimization
  • Low memory footprint
// First execution runs in interpreter
function add(a, b) {
  return a + b;
}

add(1, 2); // Interpreter
Compilation has cost. Engines carefully balance when to trigger JIT compilation based on function hotness (call frequency) and compilation cost.
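The warm-up effect described above can be observed (roughly) with a timing sketch. The helper below is illustrative: absolute numbers vary by engine and machine, and V8's exact hotness thresholds are internal and subject to change.

```javascript
// Sketch: observing JIT warm-up. Timings vary by engine and machine;
// the hotness thresholds that trigger tiering are engine internals.
function square(x) {
  return x * x;
}

// Time n calls to fn and return elapsed ms plus a checksum
// (the checksum keeps the engine from dead-code-eliminating the loop).
function timeCalls(fn, n) {
  const start = performance.now();
  let sum = 0;
  for (let i = 0; i < n; i++) sum += fn(i);
  const ms = performance.now() - start;
  return { ms, sum };
}

const cold = timeCalls(square, 1000);    // mostly interpreted
const warm = timeCalls(square, 1000000); // hot: likely JIT-compiled
console.log(`cold: ${cold.ms.toFixed(2)}ms, warm: ${warm.ms.toFixed(2)}ms`);
```

Comparing per-call cost between the two runs typically shows the warm run is far cheaper per iteration, though the gap depends on the engine's tiering decisions.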

Inline caching

Inline caching (IC) is the most important JIT optimization. It speeds up property access by caching location information from previous lookups.

Property access without IC

Without caching, every property access requires a full lookup:
// Slow: Full prototype chain lookup every time
function getName(obj) {
  return obj.name; // Property lookup algorithm:
                   // 1. Check obj own properties
                   // 2. Check obj.__proto__
                   // 3. Check obj.__proto__.__proto__
                   // ... until found or null
}

Monomorphic inline cache

Monomorphic IC handles one object shape (hidden class):
const user1 = { name: 'Alice', age: 30 };
const user2 = { name: 'Bob', age: 25 };

function getName(obj) {
  return obj.name;
}

// First call: IC miss, perform full lookup
getName(user1); // Cache: { shape: Shape1, offset: 0 }

// Subsequent calls: IC hit (fast path)
getName(user2); // Same shape, use offset 0 directly

// Conceptual inline cache:
// if (obj.shape === Shape1) {
//   return obj[offset: 0]; // Direct memory access!
// } else {
//   fallback_to_full_lookup();
// }
Monomorphic IC can be 10-100x faster than prototype chain traversal by turning property access into a direct memory read at a fixed offset.

Polymorphic inline cache

Polymorphic IC handles 2-4 different shapes:
class User {
  constructor(name) {
    this.name = name;
  }
}

class Admin {
  constructor(name) {
    this.name = name;
    this.role = 'admin';
  }
}

function getName(obj) {
  return obj.name;
}

const user = new User('Alice');
const admin = new Admin('Bob');

// First call: User shape
getName(user); // IC: [{ shape: UserShape, offset: 0 }]

// Second call: Admin shape
getName(admin); // IC: [
                //   { shape: UserShape, offset: 0 },
                //   { shape: AdminShape, offset: 0 }
                // ]

// Conceptual polymorphic IC:
// if (obj.shape === UserShape) {
//   return obj[offset: 0];
// } else if (obj.shape === AdminShape) {
//   return obj[offset: 0];
// } else {
//   fallback_to_full_lookup();
// }

Megamorphic inline cache

When >4 shapes are seen, the IC becomes megamorphic (falls back to generic lookup):
function getName(obj) {
  return obj.name;
}

// Called with many different shapes
for (let i = 0; i < 10; i++) {
  const obj = {};
  obj[`prop${i}`] = i; // Different shape each time!
  obj.name = `name${i}`;
  getName(obj); // IC becomes megamorphic after 4-5 different shapes
}

// Performance cliff: megamorphic lookups can be ~10x slower than monomorphic
Avoid megamorphic ICs:
  • Use consistent object shapes (same properties, same order)
  • Initialize all properties in constructor
  • Avoid dynamically adding/deleting properties
  • Use separate functions for different types
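The last tip above can be made concrete. The sketch below is illustrative (the object shapes are hypothetical): splitting a call site that sees many shapes into per-type functions keeps each inline cache monomorphic.

```javascript
// Sketch: splitting a shared call site into monomorphic ones.
// The shapes below are illustrative, not engine terminology.

// BAD: one function sees every shape, so its IC degrades
function getLabel(obj) {
  return obj.label;
}

// GOOD: each function sees exactly one shape, so each IC stays monomorphic
function getUserLabel(user) {   // always { label, email }
  return user.label;
}
function getOrderLabel(order) { // always { label, total }
  return order.label;
}

console.log(getUserLabel({ label: 'alice', email: 'a@example.com' })); // alice
console.log(getOrderLabel({ label: 'order-1', total: 99 }));           // order-1
```

The trade-off is some code duplication, which is why this pattern is usually reserved for hot paths identified by profiling.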

IC optimization strategies

// BAD: Inconsistent shapes
function createUser(name, age) {
  const user = { name };
  if (age) {
    user.age = age; // Different shape!
  }
  return user;
}

// GOOD: Consistent shapes
function createUser(name, age) {
  return {
    name,
    age: age ?? null // Always same properties
  };
}

Type feedback and speculative optimization

Engines collect type information during execution and use it to generate specialized, fast code.

Type feedback collection

function add(a, b) {
  return a + b;
}

// Interpreter tracks types seen:
add(1, 2);        // Feedback: int + int -> int
add(3, 4);        // Feedback: int + int -> int
add(5, 6);        // Feedback: int + int -> int

// After profiling, optimizing JIT generates:
// Specialized integer addition (single CPU instruction)
// Guards check inputs are integers

Speculative optimization

The JIT compiler makes assumptions based on profiled types and inserts guards:
  1. Collect feedback: the interpreter observes types during execution
  2. Speculate: assume types will remain consistent
  3. Optimize: generate specialized fast code
  4. Guard: insert type checks at function entry
  5. Deoptimize: if guards fail, fall back to the interpreter
function add(a, b) {
  return a + b;
}

// Optimized version (pseudocode):
function add_optimized(a, b) {
  // Guard: Check assumptions
  if (typeof a !== 'number' || typeof b !== 'number') {
    deoptimize(); // Bailout to interpreter
    return add_unoptimized(a, b);
  }
  
  // Fast path: Integer addition
  return int_add(a, b); // Single CPU instruction
}

Type specialization examples

function calculate(x) {
  return x * 2 + 1;
}

// Integer feedback
for (let i = 0; i < 10000; i++) {
  calculate(i); // JIT sees only integers
}

// Optimized to:
// int_multiply(x, 2) + 1
// (2-3 CPU instructions)

// Deoptimizes if called with:
calculate(3.14); // Float
calculate("5");  // String
calculate(5n);   // BigInt (note: 5n * 2 then throws a TypeError,
                 // since BigInt and Number can't be mixed)
Type-stable code can be 10-100x faster than type-unstable code. Keep function inputs and internal types consistent.
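A rough way to see type stability at work is to time the same function fed consistent versus mixed argument types. This is a micro-benchmark sketch, not a rigorous measurement; the exact gap depends on the engine, and the 10-100x figure above is an order-of-magnitude guide, not a guarantee.

```javascript
// Sketch: type-stable vs type-unstable calls to the same function.
function double(x) {
  return x * 2; // '*' coerces strings, so both paths return numbers
}

const N = 1000000;

// Type-stable: only numbers flow through double()
let t0 = performance.now();
let a = 0;
for (let i = 0; i < N; i++) a += double(i);
const stableMs = performance.now() - t0;

// Type-unstable: numbers and strings alternate, polluting type feedback
t0 = performance.now();
let b = 0;
for (let i = 0; i < N; i++) b += double(i % 2 ? i : String(i));
const unstableMs = performance.now() - t0;

console.log({ stableMs, unstableMs }); // same result, different cost
```

Both loops compute the same sum; only the types seen by `double` differ, which is exactly what the optimizer's type feedback records.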

Deoptimization and bailout mechanisms

When assumptions are violated, optimized code must deoptimize back to the interpreter.

What triggers deoptimization

function multiply(x, y) {
  return x * y;
}

// Optimized for integers
for (let i = 0; i < 10000; i++) {
  multiply(i, 2);
}

// Deoptimization triggers:
multiply(3.14, 2);        // 1. Type change (int -> float)
multiply("5", 2);         // 2. Type change (int -> string)
multiply(2, undefined);   // 3. Type change (int -> undefined)

Deoptimization process

  1. Guard failure: a type check or assumption fails in optimized code
  2. State reconstruction: the interpreter state is rebuilt from the optimized frame
  3. Bailout: execution jumps back to the interpreter at the correct position
  4. Continue execution: the interpreter continues with the reconstructed state
  5. Re-mark for optimization: the function may be re-optimized with new feedback
// Deoptimization example
function add(a, b) {
  return a + b; // Line 2
}

// Optimized for integers
add(1, 2);
add(2, 3);

// Deoptimization:
add("hello", "world");

// Engine:
// 1. Detects type mismatch in optimized code
// 2. Reconstructs interpreter state:
//    - PC: line 2
//    - Variables: a="hello", b="world"
// 3. Jumps to interpreter
// 4. Interpreter completes string concatenation
// 5. Function marked for re-optimization with new type info

Deoptimization costs

Deoptimization is expensive:
  • State reconstruction overhead (100-1000 cycles)
  • Lost optimization (back to slow interpreter)
  • Potential re-optimization cost
  • Repeated deoptimization may prevent future optimization
// Soft deopt: Rare path, can re-optimize
function process(x) {
  if (typeof x === 'number') {
    return x * 2; // Hot path (optimized)
  }
  return String(x); // Cold path (rare)
}

// 99.9% called with numbers
for (let i = 0; i < 10000; i++) {
  process(i);
}

// Occasional string doesn't break optimization
process("hello"); // Soft deopt, then continues

Register allocation

Optimizing JIT compilers allocate CPU registers to variables for maximum performance.

Register vs memory access

Performance comparison:

Register access: 1 cycle
L1 cache:        ~4 cycles
L2 cache:        ~12 cycles
L3 cache:        ~40 cycles
RAM:             ~100+ cycles

Goal: Keep hot variables in registers

Register allocation strategies

// Linear scan allocator (fast compilation)
function calculate(a, b, c) {
  const x = a + b;  // x -> register r1
  const y = b + c;  // y -> register r2
  const z = x * y;  // z -> register r3
  return z;
}

// Assembly-like output:
// r1 = add r_a, r_b
// r2 = add r_b, r_c
// r3 = mul r1, r2
// return r3
Characteristics:
  • Fast allocation algorithm
  • Single pass over code
  • Used in baseline JIT
  • May spill to stack under register pressure

Optimizing for register allocation

// BAD: Long variable lifetimes
function processData(data) {
  const temp1 = data.map(x => x * 2);
  const temp2 = data.map(x => x + 1);
  const temp3 = data.map(x => x - 1);
  // All temps live simultaneously - register pressure!
  return temp1.concat(temp2).concat(temp3);
}

// GOOD: Short variable lifetimes
function processData(data) {
  let result = [];
  result = result.concat(data.map(x => x * 2)); // temp1 dies
  result = result.concat(data.map(x => x + 1)); // temp2 dies
  result = result.concat(data.map(x => x - 1)); // temp3 dies
  return result;
}

// BETTER: Eliminate temporaries
function processData(data) {
  return data.flatMap(x => [x * 2, x + 1, x - 1]);
}

JIT optimization best practices

// 1. Use consistent types
function add(a, b) {
  return a + b; // Keep inputs same type
}

// 2. Avoid polymorphism
class Point {
  constructor(x, y) {
    this.x = x; // Always initialize all properties
    this.y = y; // in same order
  }
}

// 3. Use monomorphic call sites
function process(obj) {
  return obj.value; // Call with same object shape
}

// 4. Prefer arrays over arguments
function sum(...args) { // Good: real array
  return args.reduce((a, b) => a + b, 0);
}

// 5. Avoid eval and with
// (prevents many optimizations)
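Tip 5 usually has a JIT-friendly alternative: dynamic behavior that might tempt you toward `eval` can often be expressed as plain data the optimizer can see through. The operation names below are illustrative.

```javascript
// Sketch: replacing eval with a dispatch table.

// BAD: opaque to the optimizer (and a security risk)
// const result = eval(`${a} ${op} ${b}`);

// GOOD: a lookup table of small, monomorphic, inlinable functions
const ops = {
  add: (a, b) => a + b,
  mul: (a, b) => a * b,
};

function apply(op, a, b) {
  return ops[op](a, b);
}

console.log(apply('add', 2, 3)); // 5
console.log(apply('mul', 4, 5)); // 20
```

Each entry in `ops` is an ordinary function the JIT can profile, specialize, and inline, none of which is possible for code hidden inside an `eval` string.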

Advanced optimization techniques

// Before inlining
function add(a, b) {
  return a + b;
}

function calculate(x) {
  return add(x, 5) * 2;
}

// After inlining (by JIT)
function calculate_optimized(x) {
  // add() body inlined
  return (x + 5) * 2;
  // Benefits:
  // - No function call overhead
  // - More optimization opportunities
  // - Better register allocation
}

// Inlining heuristics:
// - Small functions (< ~600 bytecode bytes)
// - Monomorphic call sites
// - Not too deep nesting
Modern JIT compilers apply dozens of optimization passes. Understanding the most impactful ones helps you write faster code.

Performance monitoring

// Check function optimization status (V8)
function isOptimized(fn) {
  // Requires the --allow-natives-syntax flag.
  // Note: these bit positions change between V8 versions;
  // check the matching V8 source before relying on them.
  const status = %GetOptimizationStatus(fn);
  
  return {
    optimized: (status & (1 << 0)) !== 0,
    neverOptimized: (status & (1 << 1)) !== 0,
    alwaysOptimized: (status & (1 << 2)) !== 0,
    maybeDeopted: (status & (1 << 3)) !== 0,
    optimizedButMarked: (status & (1 << 4)) !== 0,
    turbofanned: (status & (1 << 5)) !== 0,
  };
}

// Usage:
// node --allow-natives-syntax script.js
function hotFunction(x) {
  return x * 2;
}

// Warm up
for (let i = 0; i < 10000; i++) hotFunction(i);

console.log(isOptimized(hotFunction));
// { optimized: true, turbofanned: true, ... }

Next steps

Garbage collection

Learn memory management and GC optimization strategies

Event loop

Master asynchronous JavaScript execution and task scheduling
