Torus compression is a technique for reducing proof size by compressing elliptic curve points in Dory polynomial commitments. This optimization can achieve an estimated 3x reduction in proof size with minimal computational overhead.
Torus compression is mentioned in Jolt’s profiling benchmarks but may not be fully implemented in the current version. This page describes the theoretical technique and expected performance impact.
Problem: Large Curve Points
Dory commitments use elliptic curve points from BN254:
// G1 point (uncompressed): 64 bytes
// - x-coordinate: 32 bytes (field element)
// - y-coordinate: 32 bytes (field element)
// G2 point (uncompressed): 128 bytes
// - x-coordinate: 64 bytes (Fp2 element)
// - y-coordinate: 64 bytes (Fp2 element)
A typical Jolt proof contains:
- Polynomial commitments: 50-100 G1 points
- Opening proofs: 20-40 G1 points (Dory tier-1/tier-2)
- BlindFold (ZK mode): 100-200 G1 points (Pedersen commitments, Hyrax row commitments)
Total: ~10-20 KB of curve points
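A quick back-of-envelope check of that budget, using midpoints of the component counts above (illustrative numbers only, not measured values):

```rust
// G1 points on BN254 are 64 bytes uncompressed (32-byte x + 32-byte y).
const G1_UNCOMPRESSED: usize = 64;

/// Total uncompressed curve-point bytes for one proof, given the
/// number of G1 points in each component.
fn curve_point_bytes(commitments: usize, openings: usize, blindfold: usize) -> usize {
    (commitments + openings + blindfold) * G1_UNCOMPRESSED
}

fn main() {
    // Midpoint counts: 75 commitments, 30 opening points, 150 BlindFold points.
    let total = curve_point_bytes(75, 30, 150);
    println!("~{} KB of uncompressed curve points", total / 1024);
}
```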
Solution: Torus-Based Compression
Torus compression exploits the algebraic structure of elliptic curves to represent points more compactly.
Mathematical Background
For an elliptic curve point P = (x, y) on a curve y² = x³ + ax + b:
Standard compressed format:
- Store x (32 bytes) plus 1 bit for the sign of y
- Total: 33 bytes
- Decompression: solve y = ±√(x³ + ax + b) for the stored x
Torus compression:
- Represent the point in projective coordinates [X : Y : Z]
- Exploit the torus structure T² ≅ E[r] × E[r] (for r-torsion points)
- Compress to ~21 bytes (estimated)
- Decompression: more complex, but still efficient
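To make the *standard* scheme concrete, here is a toy round-trip on the curve y² = x³ + 3 over F₂₃ (BN254 uses the same curve equation over a 254-bit prime; the tiny prime is purely illustrative). Since 23 ≡ 3 (mod 4), the square root is a single exponentiation, y = s^((p+1)/4):

```rust
const P: u64 = 23; // toy prime, 23 ≡ 3 (mod 4)

/// Modular exponentiation by squaring over F_P.
fn pow_mod(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1;
    base %= P;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % P;
        }
        base = base * base % P;
        exp >>= 1;
    }
    acc
}

/// Compress (x, y) to x plus one bit: the parity of y.
fn compress(x: u64, y: u64) -> (u64, bool) {
    (x, y & 1 == 1)
}

/// Recover y = ±sqrt(x^3 + 3), picking the root with the stored parity.
fn decompress(x: u64, y_odd: bool) -> (u64, u64) {
    let rhs = (pow_mod(x, 3) + 3) % P;
    let y = pow_mod(rhs, (P + 1) / 4); // square root since P ≡ 3 (mod 4)
    if (y & 1 == 1) == y_odd { (x, y) } else { (x, P - y) }
}

fn main() {
    let (x, y) = (1, 2); // on the curve: 2² = 4 = 1³ + 3 (mod 23)
    let (cx, sign) = compress(x, y);
    assert_eq!(decompress(cx, sign), (x, y));
    println!("round-trip ok: ({x}, {y})");
}
```

Torus compression replaces the sign-bit trick with a rational parametrization of the point, which is what buys the additional ~12 bytes at the cost of a slower reconstruction.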
Compression Ratio
| Format | G1 Size | G2 Size | Compression Ratio |
|---|---|---|---|
| Uncompressed | 64 bytes | 128 bytes | 1x (baseline) |
| Standard compressed | 33 bytes | 65 bytes | ~2x |
| Torus compressed | ~21 bytes | ~42 bytes | ~3x |
Implementation Strategy
Compression During Serialization
```rust
impl<C: JoltCurve> JoltProof<C> {
    /// Serialize the proof, torus-compressing every curve point.
    pub fn serialize_compressed(&self) -> Vec<u8> {
        let mut bytes = Vec::new();
        // Compress all curve points
        for commitment in &self.commitments {
            bytes.extend(torus_compress(commitment));
        }
        // Scalar values are already compact; serialize as-is
        for scalar in &self.scalars {
            bytes.extend(scalar.to_bytes());
        }
        bytes
    }

    pub fn deserialize_compressed(bytes: &[u8]) -> Result<Self, DecompressionError> {
        let mut offset = 0;
        let mut commitments = Vec::new();
        // The commitment count is fixed by the proof shape, so the
        // curve-point region has a known length (NUM_COMMITMENTS is a
        // hypothetical constant standing in for that count).
        let commitments_end = NUM_COMMITMENTS * TORUS_G1_SIZE;
        // Decompress curve points
        while offset < commitments_end {
            let point = torus_decompress(&bytes[offset..offset + TORUS_G1_SIZE])?;
            commitments.push(point);
            offset += TORUS_G1_SIZE;
        }
        // ... deserialize scalars from bytes[commitments_end..] ...
        Ok(Self { commitments, scalars, /* ... */ })
    }
}
```
Compression Levels
Different proof components may use different compression:
```rust
pub enum CompressionLevel {
    None,     // Uncompressed (64-byte G1)
    Standard, // Standard compressed (33-byte G1)
    Torus,    // Torus compressed (~21-byte G1, estimated)
}

pub struct CompressionConfig {
    commitments: CompressionLevel,    // Polynomial commitments
    opening_proofs: CompressionLevel, // Dory opening proofs
    blindfold: CompressionLevel,      // BlindFold ZK proofs
}
```
Trade-off: Torus decompression is slower than standard decompression, so use it selectively for components that dominate proof size.
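A hypothetical helper mirroring the `CompressionConfig` sketch above: mapping each level to its G1 encoding size lets a config's serialized footprint be estimated up front. The 21-byte torus figure is the document's estimate, not a measured value.

```rust
#[derive(Clone, Copy)]
enum CompressionLevel {
    None,
    Standard,
    Torus,
}

/// Bytes needed to encode one G1 point at the given compression level.
fn g1_bytes(level: CompressionLevel) -> usize {
    match level {
        CompressionLevel::None => 64,     // x + y, uncompressed
        CompressionLevel::Standard => 33, // x + sign bit
        CompressionLevel::Torus => 21,    // estimated
    }
}

fn main() {
    // On-chain config: every curve point torus-compressed.
    let per_point = g1_bytes(CompressionLevel::Torus);
    println!("260 G1 points => {} bytes", 260 * per_point);
}
```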
Proof Size Reduction
For a typical Jolt proof:
| Component | Count | Uncompressed | Torus Compressed | Savings |
|---|---|---|---|---|
| Polynomial commitments | 80 | 5,120 bytes | 1,680 bytes | 3,440 bytes |
| Dory opening proof | 30 | 1,920 bytes | 630 bytes | 1,290 bytes |
| BlindFold (ZK) | 150 | 9,600 bytes | 3,150 bytes | 6,450 bytes |
| Sumcheck polynomials | — | 4,000 bytes | 4,000 bytes | 0 (no compression) |
| Total | — | 20,640 bytes | 9,460 bytes | ~2.2x reduction |
Note: Actual compression ratio depends on the specific proof components. Estimated ~3x for curve points alone, ~2-2.5x for full proof (including scalars).
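The table's totals follow directly from count × 64 (uncompressed) versus count × 21 (torus estimate), plus 4,000 bytes of incompressible sumcheck data; a quick sanity check:

```rust
/// (uncompressed, torus-compressed) total proof bytes for the
/// component counts in the table above.
fn totals() -> (usize, usize) {
    let counts = [80usize, 30, 150]; // commitments, openings, BlindFold
    let sumcheck = 4_000; // scalars, not compressible
    let uncompressed: usize = counts.iter().map(|c| c * 64).sum::<usize>() + sumcheck;
    let compressed: usize = counts.iter().map(|c| c * 21).sum::<usize>() + sumcheck;
    (uncompressed, compressed)
}

fn main() {
    let (u, c) = totals();
    println!("{u} -> {c} bytes ({:.1}x)", u as f64 / c as f64);
}
```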
Computational Cost
Compression time (per G1 point):
- Standard compression: ~5 μs
- Torus compression: ~10 μs (estimated)
Decompression time (per G1 point):
- Standard decompression: ~20 μs (requires square root)
- Torus decompression: ~50 μs (estimated, more complex reconstruction)
Impact on prover:
- Proof generation: Minimal overhead (~1-2% increase)
- Proof serialization: Slight overhead from compression
Impact on verifier:
- Proof deserialization: ~2-3x slower decompression
- Verification: No change (works with decompressed points)
Net effect: Worthwhile trade-off for bandwidth-constrained applications (blockchain L1 submission, storage).
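Plugging the per-point figures above into a rough verifier-side estimate (260 G1 points is the midpoint of the component counts used earlier; all timings are the estimates above, not benchmarks):

```rust
/// Total decompression time in microseconds for `points` G1 points
/// at `per_point_us` microseconds each.
fn decompress_micros(points: u64, per_point_us: u64) -> u64 {
    points * per_point_us
}

fn main() {
    let points = 260;
    // Standard: ~20 µs/point; torus: ~50 µs/point (estimated).
    println!("standard: {} ms", decompress_micros(points, 20) / 1000);
    println!("torus:    {} ms", decompress_micros(points, 50) / 1000);
}
```

Even the slower torus path adds only ~13 ms per proof, which is why it is a reasonable trade for bandwidth-constrained settings.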
Use Cases
When to Use Torus Compression
✅ Good for:
- On-chain proof submission (minimize calldata costs)
- Proof archival (long-term storage)
- Network transmission over slow connections
- Applications where verification is infrequent
❌ Not ideal for:
- High-throughput verification (latency-sensitive)
- Local proof generation and verification
- Applications that re-verify proofs frequently
Hybrid Approach
Use different compression for different stages:
// Stage 1: Generate proof (no compression)
let proof = jolt_prover.prove();
// Stage 2: Serialize for transmission (torus compression)
let compressed_bytes = proof.serialize_compressed(); // torus-compresses curve points
// Stage 3: Transmit compressed proof
send_to_verifier(compressed_bytes);
// Stage 4: Verifier decompresses and verifies
let proof = JoltProof::deserialize_compressed(&compressed_bytes)?;
verifier.verify(&proof)?;
Implementation Status
Torus compression is not yet fully implemented in the current Jolt codebase. The compression estimates are based on theoretical analysis and benchmarking projections.
Current State
// From jolt-core/benches/e2e_profiling.rs
// Stage 8: Dory opening proof (curve points - benefits from compression)
// println!(" Dory opening: ~{} bytes", dory_bytes);
// Commitments (curve points - benefits from compression)
// println!(" Commitments: ~{} bytes", commitment_bytes);
// Estimate proof size with full Dory compression (assuming ~3x compression ratio)
// let compressed_size = (dory_bytes + commitment_bytes) / 3 + other_bytes;
These comments indicate planned compression support.
Roadmap
Phase 1: Standard Compression (likely implemented)
- Use arkworks’ built-in point compression
- Serialize G1 points as 33 bytes (x-coordinate + sign bit)
- Serialize G2 points as 65 bytes
Phase 2: Torus Compression (planned)
- Implement torus-based compression algorithm
- Integrate with Dory commitment scheme
- Benchmarking and optimization
Phase 3: Adaptive Compression (future)
- Automatically choose compression level based on use case
- Profile-guided compression (compress hot proof components less aggressively)
Other Proof Compression Methods
1. Recursive SNARKs:
- Compress proof by proving its verification in another SNARK
- Can achieve constant-size proofs (~1-2 KB)
- Much higher prover cost (10-100x)
2. Batch Verification:
- Amortize verification cost across multiple proofs
- Doesn’t reduce individual proof size
- Complementary to compression
3. Aggregation:
- Combine multiple proofs into one
- Useful for blockchain rollups
- Can be combined with torus compression
Best Practices
Compression Configuration
```rust
// For on-chain submission (minimize size)
let config = CompressionConfig {
    commitments: CompressionLevel::Torus,
    opening_proofs: CompressionLevel::Torus,
    blindfold: CompressionLevel::Torus,
};

// For local/fast verification (minimize latency)
let config = CompressionConfig {
    commitments: CompressionLevel::Standard,
    opening_proofs: CompressionLevel::Standard,
    blindfold: CompressionLevel::None,
};
```
Error Handling
Torus decompression can fail if the compressed data is invalid:
```rust
match JoltProof::deserialize_compressed(&bytes) {
    Ok(proof) => verifier.verify(&proof)?,
    Err(err) => {
        // Decompression failed on an invalid compressed point.
        // This could indicate:
        // - corrupted proof data
        // - a malicious proof
        // - the wrong compression format
        return Err(err.into());
    }
}
```