Graph Node has a comprehensive test suite covering unit tests, runner tests (integration-style tests), and full integration tests. This guide explains how to run each type of test and when to use them.
Use unit tests for regular development. Only run integration tests when explicitly needed or when making changes to integration/end-to-end functionality.

Test Types Overview

Graph Node uses three types of tests:
  1. Unit Tests: Fast, focused tests inlined with source code
  2. Runner Tests: Medium-speed integration-style tests for subgraph execution
  3. Integration Tests: Full end-to-end tests with real services

Unit Tests

Unit tests are inlined with the source code and test individual functions and modules in isolation.

Prerequisites

1. Start PostgreSQL

PostgreSQL must be running on localhost:5432 with an initialized graph-test database.
Using Process Compose (recommended):
nix run .#unit
Or manually:
psql -U postgres <<EOF
create user graph with password 'graph';
create database "graph-test" with owner=graph template=template0 encoding='UTF8' locale='C';
\c graph-test
create extension pg_trgm;
create extension btree_gist;
create extension postgres_fdw;
EOF
2. Start IPFS

IPFS must be running on localhost:5001.
ipfs daemon
3. Install Additional Tools

Install PNPM and Foundry:
# PNPM
npm install -g pnpm

# Foundry (for smart contract compilation)
curl -L https://foundry.paradigm.xyz | bash
foundryup
4. Set Environment Variable

export THEGRAPH_STORE_POSTGRES_DIESEL_URL=postgresql://graph:graph@127.0.0.1:5432/graph-test
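A misconfigured URL is a common source of confusing test failures, so it can be worth a quick sanity check that the exported URL actually targets the graph-test database. A minimal sketch (the URL literal mirrors the export above; adapt it to your setup):

```shell
# Optional sanity check: confirm the Diesel URL targets the graph-test
# database before running the suite. The literal below mirrors the export
# in this guide; substitute your own value.
url="postgresql://graph:graph@127.0.0.1:5432/graph-test"
case "$url" in
    postgresql://*/graph-test) db_ok=1; echo "URL targets graph-test" ;;
    *) db_ok=0; echo "warning: URL does not target graph-test" >&2 ;;
esac
```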

Running Unit Tests

# Run all unit tests
just test-unit
Test Verification Requirement: When filtering for specific tests, ensure the intended test name(s) appear in the output. Cargo can exit successfully even when no tests matched your filter.
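One way to automate that check is to grep the captured run output for the intended test name. A sketch under the assumption that you save the output of a filtered run; the sample line stands in for real output from something like `just test-unit my_module::my_test`:

```shell
# Sketch: after a filtered run, confirm the intended test name actually
# appears in the output. The sample line stands in for captured output
# from a real filtered `just test-unit` invocation.
test_name="my_module::my_test"
output="test my_module::my_test ... ok"
if printf '%s\n' "$output" | grep -q "$test_name"; then
    matched=1
    echo "test matched"
else
    matched=0
    echo "filter matched nothing; check the test name" >&2
fi
```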

Unit Test Best Practices

  • Write unit tests for all new functions and modules
  • Keep tests focused on a single behavior
  • Use descriptive test names that explain what is being tested
  • Mock external dependencies when possible
  • Tests should be fast (< 1 second each)

Runner Tests

Runner tests are integration-style tests that test subgraph execution with real services but in a controlled environment.

Prerequisites

Runner tests use the same prerequisites as unit tests:
  1. PostgreSQL running on localhost:5432 (with initialized graph-test database)
  2. IPFS running on localhost:5001
  3. PNPM installed
  4. Foundry installed
  5. Environment variable THEGRAPH_STORE_POSTGRES_DIESEL_URL set
Runner tests use the same Nix services stack as unit tests:
nix run .#unit

Running Runner Tests

# Run all runner tests
just test-runner

Runner Test Characteristics

  • Take moderate time (10-20 seconds)
  • Automatically reset the database between runs
  • Some tests can pass without IPFS, but tests involving file data sources require it
  • Test real subgraph execution with compiled WASM
Test Verification Requirement: When filtering for specific tests, ensure the intended test name(s) appear in the output.

Integration Tests

Only run integration tests when explicitly needed:
  • Making changes to integration/end-to-end functionality
  • Debugging issues requiring full system testing
  • Preparing releases or major changes
Integration tests take several minutes to complete.
Integration tests run Graph Node with real blockchain nodes and test the complete indexing pipeline.

Prerequisites

1. Start PostgreSQL

PostgreSQL must be running on localhost:3011 with an initialized graph-node database.
Using Process Compose (recommended):
nix run .#integration
2. Start IPFS

IPFS must be running on localhost:3001.
Included in Process Compose setup above.
3. Start Anvil

Anvil (Ethereum test chain) must be running on localhost:3021.
Included in Process Compose setup above.
4. Install Tools

Install PNPM and Foundry as described in the unit tests section.

Running Integration Tests

# Run all integration tests
# Automatically builds graph-node and gnd
just test-integration

Integration Test Verification

Critical Verification Requirements:
  • ALWAYS verify tests actually ran: Check the output for “test result: ok. X passed” where X > 0
  • If output shows “0 passed” or “0 tests run”: The TEST_CASE variable or filter was wrong - fix and re-run
  • Never trust exit code 0 alone: Cargo can exit successfully even when no tests matched your filter
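The "X passed" check can itself be scripted by parsing the summary line of the run. A hedged sketch, using a hard-coded sample line in place of live `just test-integration` output:

```shell
# Sketch: extract the passed count from a cargo test summary line and
# treat zero as a failed verification. The sample line stands in for the
# tail of real test output.
summary='test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 12 filtered out'
passed=$(printf '%s\n' "$summary" | sed -n 's/.*ok\. \([0-9][0-9]*\) passed.*/\1/p')
if [ "${passed:-0}" -gt 0 ]; then
    echo "verified: $passed test(s) ran"
else
    echo "no tests ran; fix the TEST_CASE filter" >&2
fi
```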

Integration Test Logs

Logs are written to tests/integration-tests/graph-node.log for debugging:
# View logs during test run
tail -f tests/integration-tests/graph-node.log

# Search for errors
grep ERROR tests/integration-tests/graph-node.log

Service Configuration

Port Mapping

| Service | Unit Tests Port | Integration Tests Port | Database/Config |
| --- | --- | --- | --- |
| PostgreSQL | 5432 | 3011 | graph-test / graph-node |
| IPFS | 5001 | 3001 | Data in ./.data/unit or ./.data/integration |
| Anvil (Ethereum) | - | 3021 | Deterministic test chain |
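The port mapping above can be written out as connection endpoints. A sketch only: the unit-test URL matches the export earlier in this guide, but the integration-mode credentials ("graph"/"graph") are an assumption, so check your Process Compose configuration before relying on them:

```shell
# Sketch mapping the port table to endpoints per test mode.
# NOTE: the integration credentials here are assumed, not confirmed.
mode="unit"   # or "integration"
if [ "$mode" = "unit" ]; then
    pg_url="postgresql://graph:graph@127.0.0.1:5432/graph-test"
    ipfs_api="http://localhost:5001"
else
    pg_url="postgresql://graph:graph@127.0.0.1:3011/graph-node"
    ipfs_api="http://localhost:3001"
fi
echo "$pg_url"
echo "$ipfs_api"
```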

Process Compose Services

The repository includes Process Compose configurations for managing test services:
# Start PostgreSQL + IPFS for unit/runner tests
nix run .#unit

Code Quality Checks

Mandatory before ANY commit:
  • cargo fmt --all MUST be run
  • just lint MUST show zero warnings
  • cargo check --release MUST complete successfully
  • Unit test suite MUST pass

Running Quality Checks

1. Format Code

# 🚨 MANDATORY: Format all code after any .rs file edit
just format
2. Lint Code

# 🚨 MANDATORY: Check for warnings and errors
just lint
This must show zero warnings before committing.
3. Check Release Build

# 🚨 MANDATORY: Catch linking/optimization issues
just check --release
This catches issues that cargo check alone might miss.
4. Run Tests

# 🚨 MANDATORY: Ensure tests pass
just test-unit

Development Workflow

Continuous Testing During Development

Use cargo-watch to automatically run checks during development:
# Install cargo-watch
cargo install cargo-watch

# Run continuous testing
cargo watch \
    -x "fmt --all" \
    -x check \
    -x "test -- --test-threads=1" \
    -x "doc --no-deps"
This will continuously:
  1. Format all source files
  2. Check for compilation errors
  3. Run tests
  4. Generate documentation

Test-Driven Development

1. Write Test First

Write a failing test that describes the desired behavior:
#[test]
fn test_new_feature() {
    let result = new_feature();
    assert_eq!(result, expected_value);
}
2. Run Test to Verify Failure

just test-unit new_feature
3. Implement Feature

Write the minimum code needed to make the test pass.
4. Run Test to Verify Success

just test-unit new_feature
5. Run All Quality Checks

just format
just lint
just check --release
just test-unit

Testing Specific Components

Testing Store Changes

# Test store-related functionality
just test-unit store::

Testing Chain Adapters

# Test Ethereum chain adapter
just test-unit ethereum::

Testing GraphQL

# Test GraphQL query execution
just test-unit graphql::

Debugging Tests

View Test Output

# Show println! and dbg! output
just test-unit test_name -- --nocapture

# Show detailed test information
just test-unit test_name -- --show-output

Run Tests in Serial

# Run tests one at a time (useful for database tests)
just test-unit -- --test-threads=1

Debug with RUST_LOG

# Enable debug logging
RUST_LOG=debug just test-unit test_name

# Enable trace logging for specific module
RUST_LOG=graph::store=trace just test-unit test_name

Common Test Issues

Database Connection Errors

If you see database connection errors, ensure:
  • PostgreSQL is running on the correct port
  • The database exists and has the required extensions
  • The THEGRAPH_STORE_POSTGRES_DIESEL_URL environment variable is set correctly
# Verify database connection
psql $THEGRAPH_STORE_POSTGRES_DIESEL_URL -c "SELECT 1;"

IPFS Connection Errors

If you see IPFS errors:
  • Ensure IPFS daemon is running: ipfs daemon
  • Check IPFS is accessible: curl http://localhost:5001/api/v0/version

Test Timeout Issues

# Increase test timeout
just test-unit test_name -- --test-threads=1 --timeout=300
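If the test runner does not honor a timeout flag (stable libtest has no per-run timeout option), an outer bound with coreutils `timeout` is an alternative. A sketch in which `sleep 1` stands in for a real invocation such as `timeout 300 just test-unit test_name`:

```shell
# Sketch: bound any long-running test command with coreutils `timeout`.
# `sleep 1` is a placeholder for the real test invocation.
if timeout 5 sleep 1; then
    result="finished within the limit"
else
    result="timed out"   # timeout exits with status 124
fi
echo "$result"
```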

Best Practices

1. Write Tests First

Use test-driven development: write tests before implementation.
2. Keep Tests Fast

Unit tests should be fast. Move slow tests to runner or integration tests.
3. Test Edge Cases

Test boundary conditions, error cases, and unusual inputs.
4. Use Descriptive Names

Test names should clearly describe what is being tested.
5. Clean Up Resources

Ensure tests clean up after themselves (database, files, etc.).
6. Verify Test Coverage

Use cargo tarpaulin or similar tools to check test coverage.
