This guide covers the testing infrastructure for the Avail blockchain node, including unit tests, integration tests, and end-to-end testing.

Quick Start

Run all workspace tests:
cargo test --release --workspace

Unit Tests

Unit tests are located alongside the code they test throughout the workspace.

Running All Unit Tests

1. Run Tests

cargo test --release --workspace
The --release flag enables optimizations, which is recommended for tests involving cryptographic operations.
2. View Test Output

For verbose output showing all test names:
cargo test --release --workspace -- --nocapture

Running Tests for Specific Packages

Test a specific package:
# Test the runtime
cargo test --release -p da-runtime

# Test a specific pallet
cargo test --release -p da-control
cargo test --release -p pallet-mandate
cargo test --release -p pallet-vector

# Test the node
cargo test --release -p avail-node

Running Specific Tests

Run tests matching a pattern:
cargo test --release test_name_pattern
Run a single test in a package by its exact name:
cargo test --release --package da-runtime specific_test_name -- --exact
Note that the --test flag selects an integration test target (a file under tests/), not an individual test function.
Tests run in parallel by default. Use -- --test-threads=1 to run tests sequentially if you encounter issues.
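As an alternative to forcing a single thread globally, tests that touch shared state can serialize themselves with a lock. A minimal std-only sketch (the `SHARED` static and both tests are illustrative, not from the Avail codebase):

```rust
use std::sync::Mutex;

// Hypothetical shared resource that tests must not touch concurrently.
static SHARED: Mutex<Vec<u32>> = Mutex::new(Vec::new());

#[test]
fn writes_then_reads() {
    // Holding the lock for the whole test body serializes it
    // against any other test that takes the same lock.
    let mut guard = SHARED.lock().unwrap();
    guard.push(1);
    assert_eq!(guard.last(), Some(&1));
    guard.clear();
}

#[test]
fn starts_empty() {
    // Runs before or after writes_then_reads, but never concurrently.
    let guard = SHARED.lock().unwrap();
    assert!(guard.is_empty());
}
```

One caveat of this pattern: if a test panics while holding the lock, the mutex is poisoned and later tests fail with a poisoning error rather than the original assertion.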

Integration Tests

Integration tests verify interactions between components.

Running Integration Tests

cargo test --release --workspace --tests

End-to-End Tests

E2E tests verify the complete node functionality in a live environment.

Prerequisites

1. Build Node with Fast Runtime

E2E tests require the node built with the fast-runtime feature:
cargo build --release --features fast-runtime
This enables faster block times for efficient testing.
2. Start the Development Node

In a terminal, start the node in development mode:
./target/release/avail-node --dev
Keep this running while executing E2E tests.

Running E2E Tests

In a new terminal:
cd e2e
cargo test -- --test-threads 1
E2E tests must run sequentially (--test-threads 1) to avoid conflicts between test scenarios.

E2E Test Structure

The e2e/ directory contains:
  • Test scenarios for transaction submission
  • Data availability verification tests
  • RPC endpoint tests
  • Chain state validation

Benchmarks

Avail includes several benchmark suites for performance testing.

Runtime Benchmarks

Benchmark runtime operations using Criterion:
cargo bench --bench header_kate_commitment_cri
Benchmark using Divan:
cargo bench --bench header_kate_commitment_divan
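Both harnesses follow the same idea: time a hot operation over many iterations and report per-iteration statistics. A dependency-free sketch of that idea (the `workload` function is a stand-in, not the actual commitment code measured by these benches):

```rust
use std::time::Instant;

// Stand-in workload; the real suites measure Kate commitment construction.
fn workload() -> u64 {
    (1..10_000u64).fold(0, |acc, x| acc.wrapping_mul(31).wrapping_add(x))
}

fn main() {
    const ITERS: u32 = 1_000;
    let start = Instant::now();
    let mut sink = 0u64;
    for _ in 0..ITERS {
        // black_box keeps the optimizer from deleting the loop entirely.
        sink = sink.wrapping_add(std::hint::black_box(workload()));
    }
    let per_iter = start.elapsed() / ITERS;
    println!("~{per_iter:?} per iteration (sink={sink})");
}
```

Criterion and Divan add what this sketch lacks: warm-up runs, outlier detection, and statistical comparison against previous runs.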

Instruction-Level Benchmarks

Benchmark with IAI for CPU instruction counts:
cargo bench --bench header_kate_commitment_iai
Benchmark with IAI Callgrind:
cargo bench --bench header_kate_commitment_iai_callgrind
Instruction-level benchmarks require valgrind to be installed:
sudo apt-get install valgrind

Kate RPC Benchmarks

For Kate RPC benchmarking, you’ll need Deno installed:
1. Start Development Node

./target/release/avail-node --dev
2. Run Benchmark Scripts

deno run -A ./examples/deno/benchmarks/query_proof.ts
deno run -A ./examples/deno/benchmarks/query_rows.ts
deno run -A ./examples/deno/benchmarks/query_block_length.ts
deno run -A ./examples/deno/benchmarks/query_data_proof.ts

Code Quality Checks

Formatting

Check code formatting:
cargo fmt --check
Fix formatting issues:
cargo fmt
Format E2E tests:
cargo fmt --manifest-path e2e/Cargo.toml

Linting with Clippy

Run Clippy linter:
cargo clippy
Run Clippy with all warnings as errors:
cargo clippy -- -D warnings

Feature Validation

Verify compilation with specific features:
export SKIP_WASM_BUILD=true
cargo check --release --workspace --features "runtime-benchmarks try-runtime" -p avail-node

Test Coverage

Generate test coverage reports using instrumentation:
RUSTFLAGS="-C instrument-coverage" \
LLVM_PROFILE_FILE="profile-%p-%m.profraw" \
cargo test --release --workspace
Clean up coverage files:
find . -name \*.profraw -type f -exec rm -f {} +
Coverage instrumentation may slow down test execution.

Continuous Integration

The project uses GitHub Actions for automated testing.

CI Test Workflows

# .github/workflows/unit_tests.yml
# Runs on: push to main/develop, pull requests
cargo test --release --workspace

Docker-Based Testing

Run tests in a containerized environment:

Build Test Container

docker build -t availnode -f ./dockerfiles/avail-node.Dockerfile .

Run Tests in Container

docker run --rm availnode cargo test --release --workspace

Common Test Scenarios

Testing a New Feature

1. Write Tests

Add unit tests in the same module or in a tests submodule:
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_new_feature() {
        // Test implementation
    }
}
2. Run Tests

cargo test --release -p your-package test_new_feature
3. Verify All Tests Pass

cargo test --release --workspace

Testing a Bug Fix

1. Write Regression Test

Create a test that reproduces the bug.
2. Verify Test Fails

Confirm the test fails before applying the fix.
3. Apply Fix

Implement the bug fix.
4. Verify Test Passes

Run the test to confirm the fix works.
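The cycle above can be sketched with a plain unit test. Here `parse_amount` and its whitespace bug are hypothetical, stand-ins for whatever code the real fix touches:

```rust
// Hypothetical function that had a bug: it used to reject input with
// surrounding whitespace. The fix is the .trim() call.
fn parse_amount(s: &str) -> Option<u64> {
    s.trim().parse().ok()
}

#[cfg(test)]
mod tests {
    use super::*;

    // Regression test: written first, it fails without .trim()
    // and passes once the fix is in place.
    #[test]
    fn parses_amount_with_surrounding_whitespace() {
        assert_eq!(parse_amount(" 42 "), Some(42));
    }
}
```

Keeping the regression test in the tree ensures the bug cannot silently reappear in a later refactor.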

Troubleshooting

Tests Timing Out

The built-in test harness has no stable per-test timeout flag; wrap the whole run with the coreutils timeout command instead (300 seconds here):
timeout 300 cargo test --release

Flaky Tests

Run tests multiple times to identify flakiness:
for i in {1..10}; do cargo test --release test_name || break; done

Memory Issues

Run tests with fewer parallel threads:
cargo test --release -- --test-threads=2

E2E Test Failures

Common issues:
  1. Node not running: Ensure the dev node is running with the --dev flag
  2. Wrong features: Rebuild node with --features fast-runtime
  3. Port conflicts: Make sure ports 9944 and 30333 are available
  4. Stale state: Stop the node, remove the /tmp/substrate* directory, and restart

Debug Output

Enable detailed logging:
RUST_LOG=debug cargo test --release test_name -- --nocapture
For Substrate-specific logs:
RUST_LOG=runtime=debug,sc_service=debug cargo test --release

Best Practices

Always Use --release

Run tests with --release flag for realistic performance

Test in Isolation

Use --test-threads=1 for tests that modify shared state

Clean Builds

Periodically clean build artifacts with cargo clean

CI Parity

Run the same test commands locally as used in CI

Next Steps

Contributing Guidelines

Learn how to contribute your tests and code

Building from Source

Build the node for development and testing
