
Overview

Avail provides comprehensive benchmarking capabilities for measuring pallet performance and Kate RPC operations. Benchmarks help determine runtime weights and identify performance bottlenecks.

Pallet Benchmarking

Prerequisites

Build the node with runtime benchmarking features enabled:
```shell
cargo build --release --locked --features=runtime-benchmarks
```

Running Pallet Benchmarks

1. Benchmark a specific pallet

```shell
./target/release/avail-node benchmark pallet \
  --chain=dev \
  --steps=50 \
  --repeat=20 \
  --pallet="pallet_name" \
  --extrinsic="*" \
  --heap-pages=4096 \
  --output ./output/pallet_name.rs
```
2. Benchmark all pallets

Use the automated benchmark script:

```shell
cd ~/workspace/source
./scripts/run_benchmarks.sh
```
3. Review results

Benchmark outputs are saved to the ./output directory by default.

Benchmark Script Configuration

The run_benchmarks.sh script supports environment variables for customization:
| Variable | Default | Description |
| --- | --- | --- |
| `STEPS` | `50` | Number of steps for weight calculation |
| `REPEAT` | `20` | Number of times to repeat each benchmark |
| `TEMPLATE_PATH` | `./.maintain/frame-weight-template.hbs` | Weight template file |
| `OUTPUT_PATH` | `./output` | Output directory for benchmark results |
| `PALLETS` | `*` | Specific pallets to benchmark (or `*` for all) |
| `OUR_PALLETS` | - | Set to benchmark only custom Avail pallets |
Example:

```shell
export STEPS=100
export REPEAT=50
export PALLETS="da_control pallet_vector"
./scripts/run_benchmarks.sh
```
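The defaults in the table above follow the usual shell idiom of parameter expansion with fallbacks. A minimal sketch of how such a script might read its configuration (illustrative only; the actual `run_benchmarks.sh` may implement this differently):

```shell
# Illustrative sketch: read benchmark settings from the environment,
# falling back to the documented defaults. Variable names mirror the
# table above; the real run_benchmarks.sh may differ.
STEPS="${STEPS:-50}"
REPEAT="${REPEAT:-20}"
PALLETS="${PALLETS:-*}"
OUTPUT_PATH="${OUTPUT_PATH:-./output}"
TEMPLATE_PATH="${TEMPLATE_PATH:-./.maintain/frame-weight-template.hbs}"

echo "steps=$STEPS repeat=$REPEAT pallets=$PALLETS output=$OUTPUT_PATH"
```

With no variables exported, this prints the documented defaults; exporting any of them before the run overrides the corresponding value.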

Excluded Pallets

The following pallets are excluded from automatic benchmarking:
  • pallet_election_provider_support_benchmarking (helper pallet)
  • pallet_babe (no automatic benchmarking)
  • pallet_grandpa (no automatic benchmarking)
  • pallet_mmr (no automatic benchmarking)
  • pallet_offences (no automatic benchmarking)

Header Kate Commitment Benchmarks

Avail includes specialized benchmarks for the Kate commitment header builder:

Time Measurement Benchmarks

```shell
cargo bench --bench header_kate_commitment_cri
```

Uses the Criterion framework for detailed time measurements.

```shell
cargo bench --bench header_kate_commitment_divan
```

Uses the Divan framework for performance measurement.

Low-Level Performance Benchmarks

```shell
cargo bench --bench header_kate_commitment_iai_callgrind
```

Measures instructions, cache hits/misses, and main memory accesses using Valgrind's Callgrind.

```shell
cargo bench --bench header_kate_commitment_iai
```

Alternative IAI benchmark for instruction counts and cache analysis.

Kate RPC Benchmarks

Running Kate RPC Benchmarks

These benchmarks require a running development node and the Deno runtime.
1. Start development node

```shell
./target/release/avail-node --dev
```
2. Run RPC benchmarks

```shell
deno run -A ./examples/deno/benchmarks/query_proof.ts
deno run -A ./examples/deno/benchmarks/query_rows.ts
deno run -A ./examples/deno/benchmarks/query_block_length.ts
deno run -A ./examples/deno/benchmarks/query_data_proof.ts
```
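Each script exercises one Kate RPC endpoint and reports its own numbers. To compare whole-script runs across configurations, a small timing wrapper can help; this is a sketch, and `time_ms` is a hypothetical helper, not part of the Avail repository:

```shell
# Sketch: a generic millisecond timer for wrapping benchmark commands.
# time_ms is a hypothetical helper, not part of the Avail tooling.
# Relies on GNU date's %N (nanoseconds since epoch).
time_ms() {
  start=$(date +%s%N)
  "$@" > /dev/null 2>&1
  end=$(date +%s%N)
  echo $(( (end - start) / 1000000 ))
}

# Usage (requires a running dev node):
#   time_ms deno run -A ./examples/deno/benchmarks/query_proof.ts
elapsed=$(time_ms sleep 0.2)
echo "elapsed_ms=$elapsed"
```

Running the same wrapper before and after a change gives a quick, coarse comparison; for rigorous numbers, prefer the per-query metrics described below.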

Available Kate RPC Metrics

When Kate RPC metrics are enabled (--enable-kate-rpc-metrics), the following metrics are exposed:
  • avail_kate_rpc_query_rows_execution_time - Query rows execution time (μs)
  • avail_kate_rpc_query_proof_execution_time - Query proof execution time (μs)
  • avail_kate_rpc_query_block_length_execution_time - Query block length execution time (μs)
  • avail_kate_rpc_query_data_proof_execution_time - Query data proof execution time (μs)
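These metrics appear on the node's Prometheus endpoint (Substrate-based nodes default to port 9615; confirm with your node's Prometheus settings, which are an assumption here). A quick way to filter the scrape output for just the Kate RPC series, sketched against sample lines:

```shell
# Live query (requires a running node started with --enable-kate-rpc-metrics;
# port 9615 is the usual Substrate Prometheus default -- verify for your setup):
#   curl -s http://localhost:9615/metrics | grep '^avail_kate_rpc_'

# Offline illustration of the same filter against sample scrape lines:
kate_metrics=$(printf '%s\n' \
  'avail_kate_rpc_query_rows_execution_time 120' \
  'substrate_block_height 42' \
  'avail_kate_rpc_query_proof_execution_time 95' \
  | grep '^avail_kate_rpc_')
echo "$kate_metrics"
```

The filter keeps only the `avail_kate_rpc_*` series and drops unrelated node metrics.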

Benchmark Subcommands

The avail-node benchmark command supports multiple subcommands:
```shell
avail-node benchmark --help
```

Common Options

| Option | Description |
| --- | --- |
| `--chain=<CHAIN>` | Chain specification (dev, turing, mainnet) |
| `--steps=<N>` | Number of benchmark steps |
| `--repeat=<N>` | Number of repetitions |
| `--pallet=<NAME>` | Pallet to benchmark |
| `--extrinsic=<NAME>` | Specific extrinsic or `*` for all |
| `--heap-pages=<N>` | Heap pages for runtime execution |
| `--output=<PATH>` | Output file path |
| `--template=<PATH>` | Weight template file |
| `--header=<PATH>` | License header file |

Troubleshooting

If benchmarks fail, check the error log at ./output/benchmarking_errors.txt.

Common Issues

Build failures:

```shell
# Ensure runtime-benchmarks feature is enabled
cargo build --release --locked --features=runtime-benchmarks
```

Out of memory errors:

```shell
# Reduce the number of heap pages or steps
--heap-pages=2048 --steps=25
```

Pallet not found:

```shell
# List all available pallets
./target/release/avail-node benchmark pallet --list --chain=dev
```

Best Practices

  1. Consistent environment - Run benchmarks on the same hardware for comparable results
  2. Minimal load - Close other applications to avoid interference
  3. Multiple runs - Use --repeat=20 or higher for statistical significance
  4. Sufficient steps - Use --steps=50 minimum for accurate weight curves
  5. Documentation - Record hardware specs and benchmark parameters for reproducibility
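Practice 5 is easy to automate by writing a small provenance record next to each run's output. A sketch; the `bench_env.txt` filename and field names are illustrative, not part of the Avail tooling:

```shell
# Sketch: capture hardware and parameter details alongside benchmark
# output for reproducibility. Fields and filename are illustrative.
record_env() {
  echo "date:   $(date -u +%Y-%m-%dT%H:%M:%SZ)"
  echo "kernel: $(uname -srm)"
  echo "steps:  ${STEPS:-50}"
  echo "repeat: ${REPEAT:-20}"
}

mkdir -p ./output
record_env > ./output/bench_env.txt
cat ./output/bench_env.txt
```

Committing or archiving this file together with the generated weight files makes it possible to tell later on which machine and with which parameters a set of weights was produced.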
