Effective queue management is critical for maintaining Light Protocol’s tree liveness and ensuring timely processing of compressed account operations.
Queue Architecture
Light Protocol uses separate queues for different tree operations:
Queue Types
State Tree Queues
| Queue | Operation | Items | Purpose |
| --- | --- | --- | --- |
| Input (V2) | Nullify | Nullifier hashes | Spend compressed accounts |
| Output (V2) | Append | Leaf hashes | Create compressed accounts |
| V1 Queue | Mixed | Nullifiers + leaves | Legacy single-item processing |
Address Tree Queues
| Queue | Operation | Items | Purpose |
| --- | --- | --- | --- |
| Address (V2) | Append | Addresses + proofs | Ensure address uniqueness |
| V1 Queue | Append | Addresses | Legacy single-item processing |
V2 Queue Structure
V2 queues use a batched structure for efficient processing:
```rust
pub struct BatchMetadata {
    pub next_index: u64,          // Total items ever added
    pub pending_batch_index: u64, // Next batch to process
    pub zkp_batch_size: u64,      // Items per batch
    pub batches: Vec<Batch>,      // Active batches
}

pub struct Batch {
    pub state: BatchState,   // Fill, Inserted, Proven, Full
    pub num_inserted: u64,   // Completed ZK batches
    pub current_index: u64,  // Current ZK batch index
    pub zkp_batch_size: u64, // Items per ZK proof
}
```
Batch States
Fill
Batch is actively receiving new items from applications
Inserted
Batch is full and ready for ZK proof generation
Proven
ZK proof generated but not yet submitted on-chain
Full
Batch is complete, all proofs submitted successfully
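The lifecycle above can be modeled as a small state machine. This is an illustrative sketch only; the actual `BatchState` enum in the Light Protocol crates may differ in detail:

```rust
// Illustrative model of the batch lifecycle described above.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum BatchState {
    Fill,     // actively receiving items
    Inserted, // full, ready for proof generation
    Proven,   // proof generated, not yet on-chain
    Full,     // all proofs submitted
}

impl BatchState {
    /// Advance to the next lifecycle state, if any.
    fn next(self) -> Option<BatchState> {
        match self {
            BatchState::Fill => Some(BatchState::Inserted),
            BatchState::Inserted => Some(BatchState::Proven),
            BatchState::Proven => Some(BatchState::Full),
            BatchState::Full => None, // terminal state
        }
    }
}

fn main() {
    // Walk a batch through its full lifecycle.
    let mut state = BatchState::Fill;
    while let Some(next) = state.next() {
        state = next;
    }
    assert_eq!(state, BatchState::Full);
}
```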
Queue Processing Strategy
V2 State Tree Processing
The forester processes state trees with separate strategies for input and output queues:
Output Queue (Append)
Fetch Batches
Build Circuit Inputs
Submit Transaction
```rust
// Query pending batches from the output queue
let pending_batches = indexer
    .get_pending_batches(output_queue_pubkey)
    .await?;

// Process up to max_batches_per_tree
for batch in pending_batches.iter().take(max_batches_per_tree) {
    process_append_batch(batch).await?;
}
```
Input Queue (Nullify)
Fetch Batches
Query Merkle Proofs
Build and Submit
```rust
// Query pending batches from the input queue (stored on the tree account)
let merkle_tree = fetch_merkle_tree_account(tree_pubkey).await?;
let pending_batches: Vec<_> = merkle_tree.queue_batches.batches
    .iter()
    .filter(|b| b.state == BatchState::Inserted)
    .collect();
```
V2 Address Tree Processing
Fetch and Process
Build Inputs
```rust
// Query pending address batches
let merkle_tree = fetch_address_tree_account(tree_pubkey).await?;
let pending_batches: Vec<_> = merkle_tree.queue_batches.batches
    .iter()
    .filter(|b| b.state == BatchState::Inserted)
    .collect();

for batch in pending_batches.iter().take(max_batches_per_tree) {
    process_address_batch(batch).await?;
}
```
Batch Optimization
Batch Size Selection
Choose batch sizes based on workload characteristics:
| Batch Size | Proof Time | Throughput | Best For |
| --- | --- | --- | --- |
| 10 | ~1-2s | Low | Testing, development |
| 100 | ~3-5s | Medium | Moderate load |
| 500 | ~10-15s | High | High volume |
Larger batches are more efficient but have higher latency. Choose based on your application’s latency vs throughput requirements.
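As a sketch, batch size selection can be driven by a latency budget. The thresholds below simply mirror the table above, and the `choose_batch_size` helper is hypothetical:

```rust
// Hypothetical helper: pick a batch size from the table above,
// given a proof-latency budget in seconds.
fn choose_batch_size(max_latency_secs: u64) -> u64 {
    if max_latency_secs >= 15 {
        500 // high throughput, ~10-15s proof time
    } else if max_latency_secs >= 5 {
        100 // moderate load, ~3-5s proof time
    } else {
        10 // testing / low latency, ~1-2s proof time
    }
}

fn main() {
    assert_eq!(choose_batch_size(2), 10);
    assert_eq!(choose_batch_size(8), 100);
    assert_eq!(choose_batch_size(30), 500);
}
```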
Concurrent Batch Processing
```shell
# Process multiple batches per tree in parallel
forester start \
  --max-batches-per-tree 4 \
  --transaction-max-concurrent-batches 20
```
Recommendations:
max-batches-per-tree: 2-4 for most workloads
transaction-max-concurrent-batches: 10-20 for balanced throughput
Higher values increase resource usage (memory, RPC load)
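The effect of a concurrency limit like the flags above can be illustrated with plain threads. The real forester uses async tasks, and `process_batch` is a stand-in, so this is only a sketch of bounding in-flight work:

```rust
use std::thread;

// Stand-in for proof generation plus transaction submission.
fn process_batch(batch_id: usize) -> usize {
    batch_id * 2
}

// Process batches with at most `max_concurrent` workers in flight at once.
fn process_with_limit(batches: Vec<usize>, max_concurrent: usize) -> Vec<usize> {
    let mut results = Vec::new();
    for chunk in batches.chunks(max_concurrent) {
        // Spawn one worker per batch in the chunk, bounded by chunk size.
        let handles: Vec<_> = chunk
            .iter()
            .map(|&b| thread::spawn(move || process_batch(b)))
            .collect();
        for h in handles {
            results.push(h.join().expect("worker panicked"));
        }
    }
    results
}

fn main() {
    let out = process_with_limit(vec![1, 2, 3, 4, 5], 2);
    assert_eq!(out, vec![2, 4, 6, 8, 10]);
}
```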
Proof Generation Pipeline
Optimize proof generation workflow:
Parallel Proof Requests
Submit multiple proof requests to the prover concurrently
Pipeline Stages
While proofs are generating, fetch data for next batches
Transaction Batching
Submit transactions as soon as proofs are ready (don’t wait for all)
Async Confirmation
Poll transaction confirmations in parallel
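The pipelining idea in the steps above can be sketched with a channel between a fetch stage and a prove stage; `fetch_batch` and `prove_batch` are hypothetical stand-ins:

```rust
use std::sync::mpsc;
use std::thread;

fn fetch_batch(i: u64) -> u64 { i }       // stage 1: fetch batch data (stand-in)
fn prove_batch(b: u64) -> u64 { b + 100 } // stage 2: generate proof (stand-in)

// The fetch stage runs ahead of the prover through a channel, so data for
// batch N+1 is being fetched while batch N is being proven.
fn run_pipeline(num_batches: u64) -> Vec<u64> {
    let (tx, rx) = mpsc::channel();
    let fetcher = thread::spawn(move || {
        for i in 0..num_batches {
            tx.send(fetch_batch(i)).unwrap();
        }
        // tx is dropped here, which closes the channel for the consumer.
    });
    let proofs: Vec<u64> = rx.iter().map(prove_batch).collect();
    fetcher.join().unwrap();
    proofs
}

fn main() {
    assert_eq!(run_pipeline(4), vec![100, 101, 102, 103]);
}
```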
Liveness Monitoring
Queue Depth Tracking
Monitor pending items across all queues:
```rust
// Fetch queue status
let state_output_info = get_state_v2_output_queue_info(
    &mut rpc,
    &output_queue_pubkey,
).await?;

let state_input_info = parse_state_v2_queue_info(
    &merkle_tree,
    &mut output_queue_data,
).await?;

let address_info = get_address_v2_queue_info(
    &mut rpc,
    &address_tree_pubkey,
).await?;

// Calculate pending items
let total_pending =
    state_output_info.output_pending_batches * batch_size +
    state_input_info.input_pending_batches * batch_size +
    address_info.input_pending_batches * batch_size;
```
Liveness Metrics
Queue Processing Rate
```
rate(forester_queue_items_processed_total[5m])
```
Queue Depth
```
forester_queue_pending_items{tree_type="state", tree_version="v2"}
```
Processing Lag
```
(
  forester_queue_next_index -
  forester_queue_completed_index
) / forester_queue_processing_rate
```
Target Metrics:
Queue depth: < 1000 items
Processing rate: > 100 items/sec
Processing lag: < 60 seconds
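A minimal health check against these targets might look like the following; the `QueueHealth` struct is illustrative and the thresholds are the targets listed above:

```rust
// Illustrative liveness check against the target metrics above.
struct QueueHealth {
    depth: u64,        // pending items in the queue
    rate_per_sec: f64, // items processed per second
    lag_secs: f64,     // processing lag in seconds
}

fn is_healthy(h: &QueueHealth) -> bool {
    h.depth < 1000 && h.rate_per_sec > 100.0 && h.lag_secs < 60.0
}

fn main() {
    let ok = QueueHealth { depth: 250, rate_per_sec: 150.0, lag_secs: 12.0 };
    let backed_up = QueueHealth { depth: 5000, rate_per_sec: 150.0, lag_secs: 12.0 };
    assert!(is_healthy(&ok));
    assert!(!is_healthy(&backed_up));
}
```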
Alert Rules
```yaml
groups:
  - name: forester_liveness
    rules:
      - alert: QueueDepthHigh
        expr: forester_queue_pending_items > 5000
        for: 5m
        annotations:
          summary: "High queue depth on {{ $labels.tree_type }}"
      - alert: ProcessingStalled
        expr: rate(forester_queue_items_processed_total[5m]) < 10
        for: 10m
        annotations:
          summary: "Forester processing rate dropped"
      - alert: LowSolBalance
        expr: forester_sol_balance < 0.1
        for: 1m
        annotations:
          summary: "Forester SOL balance critically low"
```
Cache Management
The forester uses caching to prevent duplicate processing:
Transaction Deduplication Cache
```shell
forester start --tx-cache-ttl-seconds 180
```
Purpose: Prevent re-processing the same transaction signature
TTL: 180 seconds (3 minutes) default
Use Case: Multiple foresters processing the same queues
Operations Cache
```shell
forester start --ops-cache-ttl-seconds 180
```
Purpose: Cache batch operation status to avoid redundant queries
TTL: 180 seconds default
Use Case: Reduce indexer load for frequently checked batches
Caches must expire before epoch transitions to prevent stale data. Keep TTL below epoch duration.
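That constraint reduces to a simple comparison. A sketch, with an assumed epoch duration (the real epoch length comes from the protocol configuration):

```rust
use std::time::Duration;

// Sanity check from the note above: cache entries must expire before
// the epoch rolls over, so the TTL must stay below the epoch duration.
fn ttl_is_safe(cache_ttl: Duration, epoch_duration: Duration) -> bool {
    cache_ttl < epoch_duration
}

fn main() {
    let ttl = Duration::from_secs(180); // the default TTL above
    assert!(ttl_is_safe(ttl, Duration::from_secs(3600)));  // 1h epoch: fine
    assert!(!ttl_is_safe(ttl, Duration::from_secs(120)));  // 2m epoch: unsafe
}
```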
Proof Result Caching
The forester implements proof caching to avoid regenerating identical proofs:
```rust
// Shared proof cache across processors
let proof_cache = Arc::new(DashMap::new());

// Before generating a proof, check the cache
let cache_key = hash(&circuit_inputs);
if let Some(cached_proof) = proof_cache.get(&cache_key) {
    return Ok(cached_proof.clone());
}

// Generate the proof and cache the result
let proof = prover.generate_proof(circuit_inputs).await?;
proof_cache.insert(cache_key, proof.clone());
```
Benefits:
Avoid duplicate proof generation
Reduce prover load
Faster processing for common patterns
Error Handling
Transient Errors
Handle temporary failures with retries:
```rust
// RPC errors
match error {
    RpcError::Timeout(_) |
    RpcError::ConnectionError(_) => {
        // Retry with exponential backoff
        retry_with_backoff(operation).await?
    },
    _ => return Err(error),
}

// Prover errors
match error {
    ProverError::JobNotFound(_) => {
        // Resubmit the proof request
        submit_proof_request(inputs).await?
    },
    ProverError::Timeout(_) => {
        // Increase the timeout and retry
        retry_with_longer_timeout(inputs).await?
    },
    _ => return Err(error),
}
```
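`retry_with_backoff` is not shown above; one minimal synchronous version, illustrative only, could look like this:

```rust
// Illustrative retry helper: doubles the delay after each failure.
fn retry_with_backoff<T, E>(
    mut op: impl FnMut() -> Result<T, E>,
    max_attempts: u32,
    base_delay_ms: u64,
) -> Result<T, E> {
    let mut attempt = 0;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            // Out of attempts: surface the last error.
            Err(e) if attempt + 1 >= max_attempts => return Err(e),
            Err(_) => {
                // Exponential backoff: base, 2x, 4x, ...
                let delay = base_delay_ms << attempt;
                std::thread::sleep(std::time::Duration::from_millis(delay));
                attempt += 1;
            }
        }
    }
}

fn main() {
    // Operation that fails twice, then succeeds.
    let mut calls = 0;
    let result: Result<u32, &str> = retry_with_backoff(
        || {
            calls += 1;
            if calls < 3 { Err("transient") } else { Ok(42) }
        },
        5,
        1,
    );
    assert_eq!(result, Ok(42));
    assert_eq!(calls, 3);
}
```

A production version would also cap the maximum delay and add jitter to avoid retry storms.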
Permanent Errors
Skip invalid operations and log for investigation:
```rust
match error {
    // Constraint errors indicate invalid inputs
    ProverError::ConstraintError(_) => {
        error!("Invalid circuit inputs for batch {}: {}", batch_index, error);
        // Skip this batch, don't retry
        return Ok(());
    },
    // Tree state errors
    ForesterError::TreeFull |
    ForesterError::TreeNeedsRollover => {
        warn!("Tree {} needs rollover", tree_pubkey);
        // Attempt rollover or skip the tree
        attempt_rollover(tree_pubkey).await?;
    },
    _ => return Err(error),
}
```
Tree Rollover
When trees reach capacity, they must be rolled over:
Rollover Detection
```rust
if is_tree_ready_for_rollover(&tree_account, current_slot) {
    info!("Tree {} ready for rollover", tree_pubkey);
    perform_tree_rollover(tree_pubkey).await?;
}
```
Rollover Process
Detect Rollover
Check if tree has reached capacity or rollover threshold
Create New Tree
Initialize new tree account with same parameters
Update Registry
Register new tree in the protocol registry
Migrate Queue
Point queue processing to new tree
Mark Old Tree
Set old tree as read-only, prevent new insertions
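The "Detect Rollover" step usually reduces to a fill-ratio check. The helper below is hypothetical, with an illustrative capacity and threshold; the actual threshold logic lives in the protocol:

```rust
// Hypothetical capacity check behind the "Detect Rollover" step:
// a tree is ready once its fill ratio crosses a percentage threshold.
fn is_ready_for_rollover(next_index: u64, capacity: u64, threshold_pct: u64) -> bool {
    // Compare the fill ratio against the threshold without floating point.
    next_index * 100 >= capacity * threshold_pct
}

fn main() {
    // 95% threshold on a tree with 2^16 leaves (values illustrative).
    let capacity = 1u64 << 16;
    assert!(!is_ready_for_rollover(capacity / 2, capacity, 95));
    assert!(is_ready_for_rollover(capacity - 100, capacity, 95));
}
```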
Priority Fee Management
Dynamic priority fees ensure transactions land during congestion:
```shell
forester start --enable-priority-fees true
```
Fee Calculation Strategy
```rust
// Query recent prioritization fees
let recent_fees = rpc.get_recent_prioritization_fees(&[]).await?;

// Calculate percentile-based fees
let p75_fee = calculate_percentile(&recent_fees, 75);
let p90_fee = calculate_percentile(&recent_fees, 90);

// Use the higher fee for critical transactions
let priority_fee = if is_critical { p90_fee } else { p75_fee };

// Add to the transaction
transaction.add_priority_fee(priority_fee);
```
Enable priority fees in production to ensure timely transaction processing during network congestion.
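`calculate_percentile` is referenced but not defined above. One plausible nearest-rank implementation over raw fee values (the real forester may compute this differently):

```rust
// Nearest-rank percentile over a slice of fee values (illustrative).
fn calculate_percentile(fees: &[u64], pct: u64) -> u64 {
    if fees.is_empty() {
        return 0; // no recent fee data: fall back to zero
    }
    let mut sorted = fees.to_vec();
    sorted.sort_unstable();
    // Nearest-rank: smallest index covering pct% of the sorted values.
    let rank = (pct as usize * sorted.len() + 99) / 100;
    sorted[rank.saturating_sub(1)]
}

fn main() {
    let fees = vec![100, 200, 300, 400, 500, 600, 700, 800, 900, 1000];
    assert_eq!(calculate_percentile(&fees, 75), 800);
    assert_eq!(calculate_percentile(&fees, 90), 900);
    assert_eq!(calculate_percentile(&[], 75), 0);
}
```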
Multi-Forester Coordination
Run multiple foresters for redundancy and load distribution:
Strategies
1. Tree-Based Sharding
```shell
# Forester A: process the first half of the trees
forester start \
  --tree-id TREE_1 \
  --tree-id TREE_2

# Forester B: process the second half
forester start \
  --tree-id TREE_3 \
  --tree-id TREE_4
```
2. Authority-Based Sharding
```shell
# Forester A: process authority 1 trees
forester start --group-authority AUTHORITY_1

# Forester B: process authority 2 trees
forester start --group-authority AUTHORITY_2
```
3. Redundant Processing
```shell
# Both foresters process all trees;
# transaction deduplication prevents conflicts
forester start --tx-cache-ttl-seconds 300  # Forester A
forester start --tx-cache-ttl-seconds 300  # Forester B
```
Coordination Mechanisms
Transaction Deduplication
Each forester checks cache before submitting
Recent transaction signatures stored in shared cache
Prevents duplicate submissions
Epoch Slot Assignment
Foresters assigned specific time slots in epoch
Only process trees during assigned slots
Natural coordination through protocol design
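Slot-based coordination can be sketched as a pure mapping from the slot within an epoch to a forester index; the function below is illustrative, not the protocol's actual assignment scheme:

```rust
// Illustrative slot assignment: the epoch is divided into fixed-size
// slices, and slices rotate round-robin among foresters.
fn assigned_forester(slot_in_epoch: u64, slots_per_slice: u64, num_foresters: u64) -> u64 {
    (slot_in_epoch / slots_per_slice) % num_foresters
}

fn main() {
    // With 2 foresters and 10-slot slices, the assignment alternates.
    assert_eq!(assigned_forester(5, 10, 2), 0);
    assert_eq!(assigned_forester(15, 10, 2), 1);
    assert_eq!(assigned_forester(25, 10, 2), 0);
}
```

Each forester then only submits transactions for trees whose current slice index matches its own identity, which avoids duplicate work without any direct communication.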
Best Practices
Monitoring
Track queue depth continuously
Alert on processing rate drops
Monitor proof generation times
Watch SOL balance closely
Performance
Use appropriate batch sizes
Enable priority fees in production
Tune concurrent batch processing
Cache proof results
Reliability
Run multiple forester instances
Handle transient errors gracefully
Implement transaction deduplication
Auto-recover from failures
Resource Management
Monitor memory usage
Tune RPC pool size
Limit concurrent operations
Use cache TTLs appropriately
Next Steps
Prover Setup: Optimize prover configuration for your workload
Monitoring: Set up comprehensive monitoring and alerting