Graph Node has a comprehensive test suite covering unit tests, runner tests (integration-style tests), and full integration tests. This guide explains how to run each type of test and when to use them.
Use unit tests for regular development. Only run integration tests when explicitly needed or when making changes to integration/end-to-end functionality.
Test Types Overview
Graph Node uses three types of tests:
Unit Tests: Fast, focused tests inlined with the source code
Runner Tests: Medium-speed integration-style tests for subgraph execution
Integration Tests: Full end-to-end tests with real services
Unit Tests
Unit tests are inlined with the source code and test individual functions and modules in isolation.
Prerequisites
Start PostgreSQL
PostgreSQL must be running on localhost:5432 with an initialized graph-test database.

Using Process Compose (recommended):

```shell
nix run .#unit
```

Or manually:

```shell
psql -U postgres << EOF
create user graph with password 'graph';
create database "graph-test" with owner=graph template=template0 encoding='UTF8' locale='C';
\connect "graph-test"
create extension pg_trgm;
create extension btree_gist;
create extension postgres_fdw;
EOF
```

Note that the extensions must be created in the graph-test database itself, hence the `\connect` before the `create extension` statements.
Start IPFS
IPFS must be running on localhost:5001.
Install Additional Tools
Install PNPM and Foundry:

```shell
# PNPM
npm install -g pnpm

# Foundry (for smart contract compilation)
curl -L https://foundry.paradigm.xyz | bash
foundryup
```
Set Environment Variable
```shell
export THEGRAPH_STORE_POSTGRES_DIESEL_URL=postgresql://graph:graph@127.0.0.1:5432/graph-test
```
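Before running any tests, it is worth confirming the variable is actually exported in the current shell; the `:?` parameter expansion makes the shell abort with an error message if it is unset or empty:

```shell
# Fails fast with an error if the variable is unset or empty
export THEGRAPH_STORE_POSTGRES_DIESEL_URL=postgresql://graph:graph@127.0.0.1:5432/graph-test
echo "${THEGRAPH_STORE_POSTGRES_DIESEL_URL:?THEGRAPH_STORE_POSTGRES_DIESEL_URL is not set}"
```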
Running Unit Tests
```shell
# Run all unit tests
just test-unit

# Run specific tests by name filter
just test-unit test_name

# Show test output
just test-unit test_name -- --nocapture
```
Test Verification Requirement: When filtering for specific tests, ensure the intended test name(s) appear in the output. Cargo can exit successfully even when no tests matched your filter.
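One way to enforce this mechanically is to parse cargo's summary line rather than trusting the exit code. The snippet below is a sketch: the `output=` assignment simulates captured output from a real `just test-unit <filter>` run, and the `sed` expression extracts the passed count:

```shell
# Simulated cargo summary line -- substitute the captured output of a real run
output='test result: ok. 3 passed; 0 failed; 0 ignored; 0 measured; 2 filtered out'
# Extract the number before "passed"
passed=$(printf '%s\n' "$output" | sed -n 's/.*ok\. \([0-9][0-9]*\) passed.*/\1/p')
if [ "${passed:-0}" -gt 0 ]; then
  echo "tests ran: $passed"
else
  echo "NO TESTS MATCHED THE FILTER"
fi
```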
Unit Test Best Practices
Write unit tests for all new functions and modules
Keep tests focused on a single behavior
Use descriptive test names that explain what is being tested
Mock external dependencies when possible
Tests should be fast (< 1 second each)
Runner Tests
Runner tests are integration-style tests that test subgraph execution with real services but in a controlled environment.
Prerequisites
Runner tests use the same prerequisites as unit tests:
PostgreSQL running on localhost:5432 (with initialized graph-test database)
IPFS running on localhost:5001
PNPM installed
Foundry installed
Environment variable THEGRAPH_STORE_POSTGRES_DIESEL_URL set
Runner tests use the same Nix services stack as unit tests:

```shell
nix run .#unit
```
Running Runner Tests
```shell
# Run all runner tests
just test-runner
```
Runner Test Characteristics
Take moderate time (10-20 seconds)
Automatically reset the database between runs
Some tests can pass without IPFS, but tests involving file data sources require it
Test real subgraph execution with compiled WASM
Test Verification Requirement: When filtering for specific tests, ensure the intended test name(s) appear in the output.
Integration Tests
Only run integration tests when explicitly needed:
Making changes to integration/end-to-end functionality
Debugging issues requiring full system testing
Preparing releases or major changes
Integration tests take several minutes to complete.
Integration tests run Graph Node with real blockchain nodes and test the complete indexing pipeline.
Prerequisites
Start PostgreSQL
PostgreSQL must be running on localhost:3011 with an initialized graph-node database. The recommended way to start it is the Process Compose setup described under Process Compose Services below.
Start IPFS
IPFS must be running on localhost:3001. Included in Process Compose setup above.
Start Anvil
Anvil (Ethereum test chain) must be running on localhost:3021. Included in Process Compose setup above.
Install Tools
Install PNPM and Foundry as described in the unit tests section.
Running Integration Tests
```shell
# Run all integration tests
# Automatically builds graph-node and gnd
just test-integration

# Run a single test case (TEST_CASE selects the test by name)
TEST_CASE=test_name just test-integration
```
Integration Test Verification
Critical Verification Requirements:
ALWAYS verify tests actually ran: check the output for "test result: ok. X passed" where X > 0
If the output shows "0 passed" or "0 tests run": the TEST_CASE variable or filter was wrong; fix and re-run
Never trust exit code 0 alone: Cargo can exit successfully even when no tests matched your filter
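In a script or CI step, the "0 passed" case can be turned into an explicit failure. This is a sketch: the `out=` assignment simulates captured integration-test output from a run where the TEST_CASE filter matched nothing; substitute the real captured output:

```shell
# Simulated output of a run whose filter matched no tests
out='test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 12 filtered out'
# The leading space in the pattern avoids false matches on e.g. "10 passed"
case "$out" in
  *' 0 passed'*) echo 'FAIL: no integration tests matched the filter' ;;
  *)             echo 'OK: tests ran' ;;
esac
```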
Integration Test Logs
Logs are written to tests/integration-tests/graph-node.log for debugging:
```shell
# View logs during a test run
tail -f tests/integration-tests/graph-node.log

# Search for errors
grep ERROR tests/integration-tests/graph-node.log
```
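When an error alone is not informative, `grep -A` also prints the lines that follow each match. The `printf` below fabricates a tiny sample log so the example is self-contained; point grep at `tests/integration-tests/graph-node.log` in a real run:

```shell
# Fabricated sample log -- use tests/integration-tests/graph-node.log in practice
printf 'INFO  starting subgraph\nERROR deployment failed\nINFO  shutting down\n' > /tmp/sample-graph-node.log

# -A 1 prints one line of trailing context after each matching line
grep -A 1 ERROR /tmp/sample-graph-node.log
```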
Service Configuration
Port Mapping
| Service | Unit Tests Port | Integration Tests Port | Database/Config |
| --- | --- | --- | --- |
| PostgreSQL | 5432 | 3011 | graph-test / graph-node |
| IPFS | 5001 | 3001 | Data in ./.data/unit or ./.data/integration |
| Anvil (Ethereum) | - | 3021 | Deterministic test chain |
Process Compose Services
The repository includes Process Compose configurations for managing test services:
For unit and runner tests:

```shell
# Start PostgreSQL + IPFS for unit/runner tests
nix run .#unit
```

A separate Process Compose configuration starts the integration-test services (PostgreSQL, IPFS, and Anvil) on the integration ports listed above.
Code Quality Checks
Mandatory before ANY commit:
cargo fmt --all MUST be run
just lint MUST show zero warnings
cargo check --release MUST complete successfully
Unit test suite MUST pass
Running Quality Checks
Format Code
```shell
# 🚨 MANDATORY: Format all code after any .rs file edit
just format
```
Lint Code
```shell
# 🚨 MANDATORY: Check for warnings and errors
just lint
```
This must show zero warnings before committing.
Check Release Build
```shell
# 🚨 MANDATORY: Catch linking/optimization issues
just check --release
```
This catches issues that cargo check alone might miss.
Run Tests
```shell
# 🚨 MANDATORY: Ensure tests pass
just test-unit
```
Development Workflow
Continuous Testing During Development
Use cargo-watch to automatically run checks during development:
```shell
# Install cargo-watch
cargo install cargo-watch

# Run continuous testing
cargo watch \
    -x "fmt --all" \
    -x check \
    -x "test -- --test-threads=1" \
    -x "doc --no-deps"
```
This will continuously:
Format all source files
Check for compilation errors
Run tests
Generate documentation
Test-Driven Development
Write Test First
Write a failing test that describes the desired behavior:

```rust
#[test]
fn test_new_feature() {
    let result = new_feature();
    assert_eq!(result, expected_value);
}
```
Run Test to Verify Failure
```shell
just test-unit new_feature
```
Implement Feature
Write the minimum code needed to make the test pass.
Run Test to Verify Success
```shell
just test-unit new_feature
```
Run All Quality Checks
```shell
just format
just lint
just check --release
just test-unit
```
Testing Specific Components
Testing Store Changes
```shell
# Test store-related functionality
just test-unit store::
```
Testing Chain Adapters
```shell
# Test Ethereum chain adapter
just test-unit ethereum::
```
Testing GraphQL
```shell
# Test GraphQL query execution
just test-unit graphql::
```
Debugging Tests
View Test Output
```shell
# Show println! and dbg! output
just test-unit test_name -- --nocapture

# Show detailed test information
just test-unit test_name -- --show-output
```
Run Tests in Serial
```shell
# Run tests one at a time (useful for database tests)
just test-unit -- --test-threads=1
```
Debug with RUST_LOG
```shell
# Enable debug logging
RUST_LOG=debug just test-unit test_name

# Enable trace logging for a specific module
RUST_LOG=graph::store=trace just test-unit test_name
```
Common Test Issues
Database Connection Errors
If you see database connection errors, ensure:
PostgreSQL is running on the correct port
The database exists and has the required extensions
The THEGRAPH_STORE_POSTGRES_DIESEL_URL environment variable is set correctly
```shell
# Verify database connection
psql "$THEGRAPH_STORE_POSTGRES_DIESEL_URL" -c "SELECT 1;"
```
IPFS Connection Errors
If you see IPFS errors:
Ensure IPFS daemon is running: ipfs daemon
Check IPFS is accessible: curl http://localhost:5001/api/v0/version
Test Timeout Issues

The default cargo test harness has no `--timeout` flag, so bound slow runs externally (for example with coreutils `timeout`) and run tests serially to reduce database contention:

```shell
# Abort the run after 300 seconds
timeout 300 just test-unit test_name -- --test-threads=1
```
Best Practices
Write Tests First
Use test-driven development: write tests before implementation.
Keep Tests Fast
Unit tests should be fast. Move slow tests to runner or integration tests.
Test Edge Cases
Test boundary conditions, error cases, and unusual inputs.
Use Descriptive Names
Test names should clearly describe what is being tested.
Clean Up Resources
Ensure tests clean up after themselves (database, files, etc.).
Verify Test Coverage
Use cargo tarpaulin or similar tools to check test coverage.