The `Options` struct is used to tune your Tashi Vertex consensus engine.
## Creating options
Initialize with sensible defaults:
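Since the real constructor isn’t reproduced in this page, the sketch below mocks a minimal `Options` type, assuming a standard `Default` implementation; the field names are assumptions that mirror the documented options, and only the documented default values are used.

```rust
// Minimal mock of the documented defaults; the real `Options` type and its
// field names are not shown on this page, so these are assumptions.
#[derive(Debug)]
struct Options {
    heartbeat_interval_us: u64, // documented default: 500 ms (500,000 µs)
    dynamic_epoch_sizing: bool, // documented default: true
    enable_state_sharing: bool, // documented default: false
}

impl Default for Options {
    fn default() -> Self {
        Self {
            heartbeat_interval_us: 500_000,
            dynamic_epoch_sizing: true,
            enable_state_sharing: false,
        }
    }
}

fn main() {
    let options = Options::default();
    assert_eq!(options.heartbeat_interval_us, 500_000);
    assert!(options.dynamic_epoch_sizing);
    assert!(!options.enable_state_sharing);
}
```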
## Event timing

### Base minimum event interval
Controls the minimum time between events, in microseconds.

#### When to adjust
Increase this value to reduce event frequency in high-latency networks. Decrease for faster consensus in low-latency environments.
### Heartbeat interval
When there’s no data to finalize, empty events are created at this interval to keep the session alive. Default: 500 milliseconds (500,000 microseconds)

Heartbeats ensure the network stays synchronized even when there are no transactions to process.
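As a quick sanity check of the documented unit conversion, using the standard library’s `Duration`:

```rust
use std::time::Duration;

fn main() {
    // The documented heartbeat default: 500 milliseconds is 500,000 microseconds.
    let heartbeat = Duration::from_millis(500);
    assert_eq!(heartbeat.as_micros(), 500_000);
}
```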
## Latency thresholds
These settings control how the engine responds to increasing acknowledgment latency.

### Target acknowledgment latency
When ack latency rises above this threshold, nodes vote that throughput should not increase further. Default: 400 milliseconds

### Maximum acknowledgment latency
When ack latency rises above this threshold, nodes vote to gradually reduce throughput. Default: 600 milliseconds

### Throttle acknowledgment latency
When ack latency rises above this threshold, nodes vote to drastically restrict throughput as an emergency measure. Default: 900 milliseconds

### Reset acknowledgment latency
When ack latency rises above this threshold, nodes vote to reset throughput restriction to its initial value as a last-ditch recovery effort. Default: 2000 milliseconds

#### Latency threshold hierarchy

The thresholds form an escalating hierarchy: target (400 ms) < maximum (600 ms) < throttle (900 ms) < reset (2000 ms), with each level triggering a progressively stronger response.
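The escalation can be sketched as a pure function. The real engine’s voting logic is not shown on this page, so the function below is only an illustration of how the four documented thresholds (with their default values) stack:

```rust
// Illustrative only: demonstrates the documented threshold hierarchy, not the
// engine's actual voting implementation.
#[derive(Debug, PartialEq)]
enum ThroughputVote {
    NoChange,         // below all thresholds
    HoldThroughput,   // above target (400 ms): don't increase further
    ReduceGradually,  // above maximum (600 ms): gradually reduce
    ThrottleHard,     // above throttle (900 ms): emergency restriction
    ResetRestriction, // above reset (2000 ms): last-ditch recovery
}

fn vote_for_ack_latency(latency_ms: u64) -> ThroughputVote {
    match latency_ms {
        l if l > 2000 => ThroughputVote::ResetRestriction,
        l if l > 900 => ThroughputVote::ThrottleHard,
        l if l > 600 => ThroughputVote::ReduceGradually,
        l if l > 400 => ThroughputVote::HoldThroughput,
        _ => ThroughputVote::NoChange,
    }
}

fn main() {
    assert_eq!(vote_for_ack_latency(350), ThroughputVote::NoChange);
    assert_eq!(vote_for_ack_latency(450), ThroughputVote::HoldThroughput);
    assert_eq!(vote_for_ack_latency(2500), ThroughputVote::ResetRestriction);
}
```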
## Epoch configuration

### Dynamic epoch sizing

When enabled, nodes vote to resize epochs based on network conditions to keep epoch lengths between 1 and 3 seconds. Default: `true`
#### Why dynamic epochs matter
- Rounds may pass at varying speeds depending on network conditions
- Joining/leaving creators must wait out the epoch before address book changes take effect
- Leaving creators don’t want to wait too long
- Joining creators need sufficient time to join successfully
- Dynamic sizing balances these competing needs automatically
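The target band above can be expressed as a simple clamp. The engine’s actual resizing vote is not shown on this page; this only illustrates the documented 1–3 second bounds:

```rust
use std::time::Duration;

// Illustrative sketch: clamp a proposed epoch length into the documented
// 1-3 second target band.
fn clamp_epoch_length(proposed: Duration) -> Duration {
    proposed.clamp(Duration::from_secs(1), Duration::from_secs(3))
}

fn main() {
    assert_eq!(clamp_epoch_length(Duration::from_millis(200)), Duration::from_secs(1));
    assert_eq!(clamp_epoch_length(Duration::from_secs(2)), Duration::from_secs(2));
    assert_eq!(clamp_epoch_length(Duration::from_secs(10)), Duration::from_secs(3));
}
```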
## Peer management

### Fallen behind kick threshold
Number of seconds a creator can fall behind before the node votes to kick them. Set to a negative value to never vote to kick. Default: not explicitly set (implementation-dependent)

This prevents the network from being slowed down by persistently lagging nodes.
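The negative-means-never semantics can be sketched as follows; the function name is an assumption, not part of the real API:

```rust
// Illustrative sketch of the documented semantics: a negative threshold
// disables kick votes entirely.
fn should_vote_to_kick(seconds_behind: f64, kick_threshold_secs: f64) -> bool {
    kick_threshold_secs >= 0.0 && seconds_behind > kick_threshold_secs
}

fn main() {
    assert!(should_vote_to_kick(30.0, 10.0));  // lagging past the threshold
    assert!(!should_vote_to_kick(5.0, 10.0));  // within tolerance
    assert!(!should_vote_to_kick(30.0, -1.0)); // negative: never vote to kick
}
```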
## Transaction handling

### Transaction channel size
Maximum number of transactions to buffer before applying backpressure. Default: 32
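A bounded standard-library channel demonstrates what backpressure at the documented default capacity looks like; this stands in for the engine’s internal channel, which is not shown here:

```rust
use std::sync::mpsc::sync_channel;

fn main() {
    // A bounded channel of capacity 32 (the documented default): try_send
    // fails once the buffer is full, which is the backpressure signal.
    let (tx, _rx) = sync_channel::<u64>(32);
    for i in 0..32 {
        tx.try_send(i).expect("buffer has room");
    }
    // The 33rd transaction cannot be buffered until the consumer drains some.
    assert!(tx.try_send(32).is_err());
}
```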
### Maximum unacknowledged bytes

Maximum number of bytes’ worth of transactions that have not yet been seen by the network that may be pulled from the transaction buffer. Default: 500 MiB (524,288,000 bytes)

## Threading and performance
### Maximum blocking verify threads
Maximum number of threads to spawn for blocking signature verifications. Above a constant size threshold, verifications are offloaded to this thread pool instead of using Tokio’s core threads. Default: the number of available CPU cores

#### Thread pool behavior
- Below threshold: verifications use spare compute time in Tokio’s core thread pool
- Above threshold: verifications are sent to this dedicated blocking thread pool
- Cannot be zero, or events larger than the threshold could never be verified
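The dispatch decision can be sketched like this; the actual threshold constant is internal to the engine and not given on this page, so the value below is a placeholder:

```rust
// Assumption: 1024 bytes is a placeholder, not the engine's real constant.
const VERIFY_OFFLOAD_THRESHOLD: usize = 1024;

#[derive(Debug, PartialEq)]
enum VerifyPool {
    TokioCore,    // small events: spare time on Tokio's core threads
    BlockingPool, // large events: dedicated blocking verify threads
}

fn pick_pool(event_size: usize) -> VerifyPool {
    if event_size > VERIFY_OFFLOAD_THRESHOLD {
        VerifyPool::BlockingPool
    } else {
        VerifyPool::TokioCore
    }
}

fn main() {
    assert_eq!(pick_pool(128), VerifyPool::TokioCore);
    assert_eq!(pick_pool(1 << 20), VerifyPool::BlockingPool);
}
```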
## State sharing

### Enable state sharing
Enables sharing of epoch states with fallen-behind creators. Default: `false`
### Epoch states to cache
Number of epoch states to cache when state sharing is enabled. Default: 3

If a fallen-behind creator fails to download an epoch’s state before it expires from the cache, they’ll have to restart the download.
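The expiry behavior can be sketched with a fixed-size queue; the real cache type is not shown on this page, so this is only an illustration of the documented default of 3:

```rust
use std::collections::VecDeque;

// Illustrative sketch: keep only the N most recent epoch states. When an
// epoch is evicted, late downloaders of that epoch must restart.
fn cache_epoch(cache: &mut VecDeque<u64>, epoch: u64, capacity: usize) {
    if cache.len() == capacity {
        cache.pop_front(); // oldest epoch state expires
    }
    cache.push_back(epoch);
}

fn main() {
    let mut cache = VecDeque::new();
    for epoch in 1u64..=5 {
        cache_epoch(&mut cache, epoch, 3);
    }
    // Only the three most recent epoch states remain available.
    let expected: VecDeque<u64> = VecDeque::from([3, 4, 5]);
    assert_eq!(cache, expected);
}
```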
## Network features

### Hole punching

Attempts to use UDP hole punching to establish direct connections between creators behind NATs. Default: `true`
#### How hole punching works
When enabled, nodes behind NATs can establish direct peer-to-peer connections by coordinating through public nodes. This improves latency and reduces load on relay nodes.
### Report gossip events

Enables reporting of gossip events through the message channel. Default: `false`
Enable this if you need visibility into gossip-level events for monitoring or debugging.
## Configuration examples
### High-performance local network
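The original example code is missing from this extract, so the fragment below is a hedged reconstruction for a fast LAN following the tuning guidance above (tighter heartbeat, thresholds well below the defaults). The builder-style setter names and every value are assumptions, not recommendations:

```rust
// Hypothetical: setter names mirror the documented options; values are
// illustrative for a low-latency local network.
let options = Options::default()
    .heartbeat_interval_us(100_000)  // tighter 100 ms heartbeat
    .target_ack_latency_ms(100)      // thresholds well below the defaults,
    .max_ack_latency_ms(200)         // suited to sub-millisecond LAN pings
    .throttle_ack_latency_ms(300)
    .reset_ack_latency_ms(500);
```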
### High-latency WAN network
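Again a hedged reconstruction, this time applying the guidance for high-latency links (larger minimum event interval, thresholds above uncongested ping times); names and values are assumptions:

```rust
// Hypothetical sketch; values are illustrative, not recommendations.
let options = Options::default()
    .base_min_event_interval_us(50_000) // fewer events on high-latency links
    .target_ack_latency_ms(800)         // thresholds raised above the
    .max_ack_latency_ms(1_200)          // network's uncongested ping times
    .throttle_ack_latency_ms(1_800)
    .reset_ack_latency_ms(4_000);
```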
### Resource-constrained environment
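A hedged reconstruction that trims the thread, buffer, and cache options documented above to reduce CPU and memory pressure; setter names and values are assumptions:

```rust
// Hypothetical sketch; values are illustrative assumptions.
let options = Options::default()
    .max_blocking_verify_threads(2)             // cap CPU use (default: core count)
    .transaction_channel_size(16)               // smaller buffer than the default 32
    .max_unacknowledged_bytes(64 * 1024 * 1024) // 64 MiB, well under the 500 MiB default
    .epoch_states_to_cache(1);                  // hold fewer epoch states in memory
```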
### Development and debugging
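A hedged reconstruction that turns on the observability and recovery options documented above; setter names are assumptions:

```rust
// Hypothetical sketch; setter names are assumptions based on the options above.
let options = Options::default()
    .report_gossip_events(true)  // surface gossip-level events for debugging
    .enable_state_sharing(true); // help restarted dev nodes catch back up
```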
## Getter methods reference
Every setter has a corresponding getter.

## Best practices
### Start with defaults
The default values work well for most networks. Only adjust after measuring performance.
### Match latency thresholds to your network

Set latency thresholds based on your actual network ping times, using values above your uncongested round-trip latency.
### Test configuration changes
Use a development environment to test configuration changes before deploying to production.