Memory checking and lookup arguments are fundamental techniques in zkVM design. They allow efficient verification that memory accesses are consistent (reads return previously written values) and that operations correctly reference precomputed tables.

Overview

In a zkVM like Jolt, the prover must convince the verifier of two key properties:
  1. Memory consistency: Every read from memory (RAM or registers) returns the value from the most recent write to that address
  2. Lookup correctness: Instruction outputs correctly reference precomputed lookup tables
Naively checking these properties would require the verifier to track the full memory state, which is prohibitively expensive. Memory checking techniques let the verifier check these properties with only logarithmic overhead.

Memory Consistency Problem

Consider an execution trace with T memory operations (reads and writes) across an address space of size K. The verifier must ensure:
  • For each read at address a at time t, the value read equals the value from the most recent write to a before time t
  • If no prior write exists, the read returns the initial value at address a
A naive approach would track all K memory locations throughout execution, requiring O(K · T) work. Memory checking reduces this to O(T log K).

Offline Memory Checking

Offline memory checking (introduced by Blum et al.) uses a simple observation: if we sort all memory accesses by address and timestamp, consistency can be verified locally.

High-Level Approach

  1. For each memory operation, record a tuple (addr, time, value, is_write)
  2. Sort the tuples by (addr, time)
  3. Check that for consecutive accesses to the same address:
    • A read returns the value from the previous write
    • The first access to an address reads the initial value
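As a concrete (non-SNARK) illustration, the sorting-based check above can be sketched as follows. The trace encoding and zero-initialized memory are assumptions made for this example:

```rust
// Offline memory checking by sorting (Blum et al.), outside a SNARK.
// Each operation is a tuple (addr, time, value, is_write); memory is
// assumed to start zero-initialized.
fn check_trace(ops: &[(u64, u64, u64, bool)]) -> bool {
    let mut sorted: Vec<_> = ops.to_vec();
    // Sort by (addr, time) so all accesses to one address are contiguous
    // and chronologically ordered.
    sorted.sort_by_key(|&(addr, time, _, _)| (addr, time));

    let mut last: Option<(u64, u64)> = None; // (addr, current value)
    for &(addr, _time, value, is_write) in &sorted {
        let expected = match last {
            Some((a, v)) if a == addr => v,
            _ => 0, // first access to this address: initial value
        };
        if !is_write && value != expected {
            return false; // read disagrees with the most recent write
        }
        last = Some((addr, if is_write { value } else { expected }));
    }
    true
}

fn main() {
    // Write 7 to addr 5 at t=0, read it back at t=1, read untouched addr 9 at t=2.
    let good = [(5, 0, 7, true), (5, 1, 7, false), (9, 2, 0, false)];
    assert!(check_trace(&good));
    // Inconsistent: the read at t=1 claims to see 8 instead of 7.
    let bad = [(5, 0, 7, true), (5, 1, 8, false)];
    assert!(!check_trace(&bad));
}
```

This is exactly the local check from steps 1–3; the SNARK difficulty discussed next is that the sort itself is expensive to prove.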

Challenge in SNARKs

Sorting is expensive in a SNARK. Instead, Jolt uses multiset equality checks via polynomial commitments.

Multiset Equality via Grand Product

To prove two multisets A and B are equal without sorting:
  1. Represent each multiset as the roots of a polynomial:
    • A corresponds to ∏_{a∈A} (X − a)
    • B corresponds to ∏_{b∈B} (X − b)
  2. Evaluate both products at a random challenge point γ
  3. If the evaluations are equal, then by the Schwartz–Zippel lemma the multisets are equal, except with negligible probability
This can be computed efficiently using a running product that updates at each step.
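A minimal sketch of this running-product check, using a small Mersenne-prime field and a fixed stand-in for the challenge γ (a real protocol works over a much larger field and samples γ via Fiat–Shamir):

```rust
// Multiset equality via grand products evaluated at a challenge point.
// P is a small Mersenne prime used purely for illustration.
const P: u128 = (1 << 61) - 1;

// Compute prod_{a in set} (gamma - a) mod P as a running product.
fn grand_product(set: &[u128], gamma: u128) -> u128 {
    set.iter()
        .fold(1u128, |acc, &a| acc * ((gamma + P - a % P) % P) % P)
}

fn main() {
    let a = [3u128, 1, 4, 1, 5];
    let b = [1u128, 5, 4, 3, 1]; // same multiset, different order
    let c = [1u128, 5, 4, 3, 2]; // differs in one element

    let gamma = 123_456_789; // stand-in for a random verifier challenge
    assert_eq!(grand_product(&a, gamma), grand_product(&b, gamma));
    assert_ne!(grand_product(&a, gamma), grand_product(&c, gamma));
}
```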

Read-Write Memory Checking in Jolt

Jolt implements memory checking for both RAM and registers using a variant of the offline memory checking technique.

Memory Tuples

For each memory operation, Jolt tracks:
  • Address: Which memory location is accessed
  • Timestamp: When the access occurs (cycle number)
  • Value: The data read or written
  • Operation type: Read or write

Grand Product Argument

The prover constructs two multisets:
  1. Read set: All memory reads, each tagged with (addr, time, value)
  2. Write set: All memory writes (plus initial values), tagged similarly
The prover shows these multisets are equal via a grand product sumcheck.
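The read-set/write-set construction can be sketched as below, using the standard convention that every operation first reads the cell's current (value, timestamp) pair and then writes back a new one; the write set also receives the initial state and the read set the final state. The trace encoding is made up for this example, and Jolt compares the resulting multisets with a grand product rather than by sorting as done here:

```rust
use std::collections::HashMap;

type Tuple = (u64, u64, u64); // (addr, value, timestamp)

// op = (addr, Some(v)) is a write of v; (addr, None) is a read.
fn read_write_sets(ops: &[(u64, Option<u64>)]) -> (Vec<Tuple>, Vec<Tuple>) {
    let mut mem: HashMap<u64, (u64, u64)> = HashMap::new();
    let (mut rs, mut ws) = (Vec::new(), Vec::new());
    for (t, &(addr, wv)) in ops.iter().enumerate() {
        let t = t as u64 + 1;
        let (v, ts) = *mem.entry(addr).or_insert_with(|| {
            ws.push((addr, 0, 0)); // initial value 0 at timestamp 0
            (0, 0)
        });
        rs.push((addr, v, ts)); // every op reads the current state...
        let new_v = wv.unwrap_or(v); // ...and a read writes back the same value
        ws.push((addr, new_v, t));
        mem.insert(addr, (new_v, t));
    }
    for (&addr, &(v, ts)) in &mem {
        rs.push((addr, v, ts)); // final snapshot closes each chain
    }
    (rs, ws)
}

// For a consistent trace, RS and WS are equal as multisets.
fn multisets_equal(ops: &[(u64, Option<u64>)]) -> bool {
    let (mut rs, mut ws) = read_write_sets(ops);
    rs.sort();
    ws.sort();
    rs == ws
}

fn main() {
    let ops = [(5, Some(7)), (5, None), (9, None)];
    assert!(multisets_equal(&ops));
}
```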

Implementation

The memory checking implementation lives in jolt-core/src/zkvm/:
  • ram/read_write_checking.rs: RAM consistency verification
  • registers/: Register read-write checking
  • instruction_lookups/: Instruction table lookups

Lookup Arguments

Lookup arguments prove that values appear in a precomputed table. This is critical for Jolt’s instruction execution model.

Lasso and Jolt’s Lookup Argument

Jolt uses the Lasso lookup argument, which:
  1. Decomposes large table lookups into smaller subtable lookups
  2. Uses offline memory checking on the subtables
  3. Achieves prover time linear in the trace length (not table size)

Twist and Shout

Jolt’s implementation builds on the “Twist and Shout” memory-checking arguments:
  • Twist: a read-write memory-checking argument, used for registers and RAM
  • Shout: a lookup argument for read-only tables, using one-hot encodings of read addresses to avoid materializing full polynomials
Both decompose large address spaces into smaller chunks, reducing memory usage from O(K · T) to roughly O(K^{1/c} · T), where c is the decomposition factor.
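The chunking idea can be illustrated with plain bit arithmetic (the parameters K = 2^16 and c = 4 are chosen for the example):

```rust
// Split an address into c chunks of chunk_bits bits each, most significant
// chunk first. A lookup into a table of size K = 2^16 then becomes c = 4
// one-hot vectors of length 16 instead of one of length 65536.
fn decompose(addr: u64, c: u32, chunk_bits: u32) -> Vec<u64> {
    (0..c)
        .rev()
        .map(|i| (addr >> (i * chunk_bits)) & ((1u64 << chunk_bits) - 1))
        .collect()
}

fn main() {
    // 0xBEEF splits into 4-bit chunks [0xB, 0xE, 0xE, 0xF].
    assert_eq!(decompose(0xBEEF, 4, 4), vec![0xB, 0xE, 0xE, 0xF]);
    // Per-access storage: c * 2^chunk_bits = 64 entries versus K = 65536.
}
```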

Read-After-Write (RAF) Polynomials

Jolt uses RAF polynomials to encode lookup operations:
  • ra: the read-address polynomial, a one-hot encoding of which table entry is accessed
  • rd: the read-data polynomial, the value returned from the lookup
The RAF sumcheck proves that rd correctly retrieves the value at the address specified by ra from the committed table.
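At a single cycle, the relation being proven can be sketched as an inner product between the one-hot read-address vector and the table. This is a plain computation of the relation, not the sumcheck that proves it:

```rust
// With ra one-hot over the table, the read data equals <ra, table>:
// exactly one term of the sum survives.
fn lookup_via_one_hot(table: &[u64], ra: &[u64]) -> u64 {
    table.iter().zip(ra).map(|(t, r)| t * r).sum()
}

fn main() {
    let table = [10, 20, 30, 40];
    let ra = [0, 0, 1, 0]; // one-hot encoding of address 2
    assert_eq!(lookup_via_one_hot(&table, &ra), 30);
}
```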

Increment Counters

Jolt also tracks increment counters for registers and RAM:
  • RdInc: Counts writes to each destination register
  • RamInc: Counts writes to each RAM address
These counters ensure that the read-write multisets are properly balanced and enable efficient virtual polynomial construction.
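As a toy illustration of a write counter (the encoding below is made up for the example; see the Jolt source for the actual polynomial construction):

```rust
// Tally writes per destination register over a trace. Jolt's RdInc/RamInc
// counters play an analogous bookkeeping role for balancing the read-write
// multisets.
fn rd_inc(writes: &[usize], num_regs: usize) -> Vec<u64> {
    let mut counts = vec![0u64; num_regs];
    for &rd in writes {
        counts[rd] += 1;
    }
    counts
}

fn main() {
    // Destination registers written by four hypothetical instructions.
    let counts = rd_inc(&[1, 2, 1, 31], 32);
    assert_eq!(counts[1], 2);
    assert_eq!(counts[31], 1);
}
```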

Polynomial Representations

Jolt uses specialized polynomial representations for memory checking:
  • OneHotPolynomial: Stores only indices of nonzero entries instead of full vectors
  • RaPolynomial: Lazily materializes read address polynomials across sumcheck rounds
  • SharedRaPolynomials: Shares equality tables across multiple polynomials for memory efficiency
These representations keep memory usage proportional to address space size rather than trace length.
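A hypothetical sketch of the sparse idea behind OneHotPolynomial (the names and fields below are illustrative, not Jolt’s actual API):

```rust
// Store only the index of the single nonzero coordinate per cycle:
// T indices instead of a dense K x T matrix of mostly zeros.
struct OneHotCols {
    k: usize,            // address-space size K
    indices: Vec<usize>, // one nonzero index per cycle
}

impl OneHotCols {
    // Recover any dense entry on demand.
    fn get(&self, cycle: usize, addr: usize) -> u64 {
        (self.indices[cycle] == addr) as u64
    }
    fn dense_len(&self) -> usize {
        self.k * self.indices.len()
    }
    fn sparse_len(&self) -> usize {
        self.indices.len()
    }
}

fn main() {
    let p = OneHotCols { k: 1 << 16, indices: vec![3, 3, 7] };
    assert_eq!(p.get(0, 3), 1);
    assert_eq!(p.get(2, 3), 0);
    assert!(p.sparse_len() < p.dense_len());
}
```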

Further Reading

This page provides a high-level overview of memory checking in Jolt. For detailed implementation specifics, refer to the source code in jolt-core/src/zkvm/ram/ and jolt-core/src/zkvm/instruction_lookups/.
