Graph Node supports indexing data from multiple blockchain networks simultaneously. This guide covers configuration and best practices for multi-network deployments.
Overview
Multi-network support allows you to:
Index the same contract deployed across different networks
Aggregate data from multiple chains in a single subgraph
Run one Graph Node instance serving multiple networks
Scale across networks with different sharding strategies
Configuration
Basic Multi-Network Setup
Configure multiple chains in your Graph Node configuration file:
[store]

[store.primary]
connection = "postgresql://graph:password@localhost:5432/graph-node"
pool_size = 10

[chains]
ingestor = "block_ingestor_node"

[chains.mainnet]
shard = "primary"
protocol = "ethereum"
provider = [
  { label = "mainnet-1", url = "https://eth-mainnet.g.alchemy.com/v2/YOUR_KEY", features = ["archive", "traces"] },
  { label = "mainnet-2", url = "https://mainnet.infura.io/v3/YOUR_KEY", features = ["archive"] }
]

[chains.polygon]
shard = "primary"
protocol = "ethereum"
provider = [
  { label = "polygon-1", url = "https://polygon-mainnet.g.alchemy.com/v2/YOUR_KEY", features = ["archive", "traces"] }
]

[chains.arbitrum]
shard = "primary"
protocol = "ethereum"
provider = [
  { label = "arbitrum-1", url = "https://arb-mainnet.g.alchemy.com/v2/YOUR_KEY", features = ["archive", "traces"] }
]

[chains.optimism]
shard = "primary"
protocol = "ethereum"
provider = [
  { label = "optimism-1", url = "https://opt-mainnet.g.alchemy.com/v2/YOUR_KEY", features = ["archive"] }
]

[deployment]
[[deployment.rule]]
# Deploy to the primary shard and both index nodes
shard = "primary"
indexers = ["index_node_1", "index_node_2"]
Chain Configuration Options
shard: Specifies which database shard stores this chain's data. Must reference a shard defined in [store].
protocol: The protocol type being indexed. Options: ethereum, near, cosmos, arweave, starknet.
polling_interval: The polling interval for the block ingestor, in milliseconds.
amp: The network name used by AMP for this chain. Set this when AMP uses a different name than graph-node. Example: amp = "ethereum-mainnet" on a chain named mainnet.
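As a sketch, a chain entry combining these options might look like the following (the chain name, URL, and interval value here are illustrative, not from a real deployment):

```toml
# Hypothetical chain entry; key names as described above
[chains.base]
shard = "primary"
protocol = "ethereum"
polling_interval = 500  # ms; tune to the network's block time
provider = [
  { label = "base-1", url = "https://rpc.example.com", features = ["archive"] }
]
```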
Provider Configuration
label: The name of the provider, which will appear in logs.
url: The URL of the provider (RPC, WebSocket, or IPC endpoint).
features: Array of features that the provider supports:
archive: Provider is an archive node with full historical state
traces: Provider supports the trace_filter API needed for call tracing
no_eip1898: Provider doesn't support EIP-1898 (querying state by block hash)
no_eip2718: Provider doesn't return the type field in transaction receipts
compression/gzip, compression/brotli, compression/deflate: Provider supports request compression
headers: HTTP headers to be added on every request. Example: headers = { Authorization = "Bearer token" }
limit: The maximum number of subgraphs that can use this provider. Defaults to unlimited.
transport: Transport type. Options: rpc, ws, ipc.
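Put together, a provider entry exercising these options could look like this sketch (the label, URL, and token are placeholders):

```toml
provider = [
  { label = "ws-backup", url = "wss://rpc.example.com/ws", transport = "ws", features = ["archive"], headers = { Authorization = "Bearer YOUR_TOKEN" }, limit = 10 }
]
```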
Subgraph Manifest for Multiple Networks
Single Network Deployment
specVersion: 0.0.8
schema:
  file: ./schema.graphql
dataSources:
  - kind: ethereum/contract
    name: Token
    network: mainnet
    source:
      address: "0x1234567890123456789012345678901234567890"
      abi: ERC20
      startBlock: 12345678
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      entities:
        - Transfer
      abis:
        - name: ERC20
          file: ./abis/ERC20.json
      eventHandlers:
        - event: Transfer(indexed address,indexed address,uint256)
          handler: handleTransfer
      file: ./src/mapping.ts
Multi-Network Deployment
You can create separate manifests for each network:
# Mainnet manifest (e.g. subgraph.mainnet.yaml)
specVersion: 0.0.8
schema:
  file: ./schema.graphql
dataSources:
  - kind: ethereum/contract
    name: Token
    network: mainnet
    source:
      address: "0x1234567890123456789012345678901234567890"
      abi: ERC20
      startBlock: 12345678
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      entities:
        - Transfer
      abis:
        - name: ERC20
          file: ./abis/ERC20.json
      eventHandlers:
        - event: Transfer(indexed address,indexed address,uint256)
          handler: handleTransfer
      file: ./src/mapping.ts
# Polygon manifest (e.g. subgraph.polygon.yaml)
specVersion: 0.0.8
schema:
  file: ./schema.graphql
dataSources:
  - kind: ethereum/contract
    name: Token
    network: polygon
    source:
      address: "0xabcdefabcdefabcdefabcdefabcdefabcdefabcd"
      abi: ERC20
      startBlock: 23456789
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      entities:
        - Transfer
      abis:
        - name: ERC20
          file: ./abis/ERC20.json
      eventHandlers:
        - event: Transfer(indexed address,indexed address,uint256)
          handler: handleTransfer
      file: ./src/mapping.ts
# Arbitrum manifest (e.g. subgraph.arbitrum.yaml)
specVersion: 0.0.8
schema:
  file: ./schema.graphql
dataSources:
  - kind: ethereum/contract
    name: Token
    network: arbitrum-one
    source:
      address: "0xfedcbafedcbafedcbafedcbafedcbafedcbafedc"
      abi: ERC20
      startBlock: 34567890
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      entities:
        - Transfer
      abis:
        - name: ERC20
          file: ./abis/ERC20.json
      eventHandlers:
        - event: Transfer(indexed address,indexed address,uint256)
          handler: handleTransfer
      file: ./src/mapping.ts
Using Mustache Templates
Use templating to manage network-specific values:
subgraph.template.yaml
specVersion: 0.0.8
schema:
  file: ./schema.graphql
dataSources:
  - kind: ethereum/contract
    name: Token
    network: {{network}}
    source:
      address: "{{address}}"
      abi: ERC20
      startBlock: {{startBlock}}
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      entities:
        - Transfer
      abis:
        - name: ERC20
          file: ./abis/ERC20.json
      eventHandlers:
        - event: Transfer(indexed address,indexed address,uint256)
          handler: handleTransfer
      file: ./src/mapping.ts
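The accompanying networks.json typically maps each network name to its template values. A minimal rendering step can be sketched in plain Python; this is an assumption for illustration (the mustache CLI, or graph-cli's own networks.json support, would normally do the substitution), and the render helper and file names here are hypothetical:

```python
import json
import re

def render(template: str, values: dict) -> str:
    """Substitute {{key}} / {{ key }} placeholders with the given values."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda m: str(values[m.group(1)]), template)

# Contents of a networks.json: one entry per network, with the values the
# template needs (addresses and start blocks taken from the examples above)
networks = json.loads("""
{
  "mainnet": { "address": "0x1234567890123456789012345678901234567890", "startBlock": 12345678 },
  "polygon": { "address": "0xabcdefabcdefabcdefabcdefabcdefabcdefabcd", "startBlock": 23456789 }
}
""")

# A deploy script would read subgraph.template.yaml; a short stand-in here:
template = 'network: {{network}}\nstartBlock: {{startBlock}}\naddress: "{{address}}"'

for name, values in networks.items():
    manifest = render(template, {"network": name, **values})
    # Write `manifest` to subgraph.yaml, then run `graph deploy` per network
    print(manifest.splitlines()[0])
```

Recent versions of graph-cli can also consume a networks.json directly via the --network flag on graph build, which avoids a custom render step; check your graph-cli version for support.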
Database Sharding for Multiple Networks
For large-scale deployments, distribute networks across multiple database shards:
config.toml with Sharding
[store]

# Primary shard for Ethereum mainnet
[store.primary]
connection = "postgresql://graph:password@mainnet-db:5432/graph-node"
pool_size = 20

# Shard for Layer 2 networks
[store.layer2]
connection = "postgresql://graph:password@layer2-db:5432/graph-node"
pool_size = 15

# Shard for other networks
[store.other]
connection = "postgresql://graph:password@other-db:5432/graph-node"
pool_size = 10

[chains]
ingestor = "block_ingestor_node"

# Ethereum mainnet on primary shard
[chains.mainnet]
shard = "primary"
protocol = "ethereum"
provider = [
  { label = "mainnet-1", url = "https://eth-mainnet.g.alchemy.com/v2/KEY", features = ["archive", "traces"] }
]

# Layer 2 networks on layer2 shard
[chains.polygon]
shard = "layer2"
protocol = "ethereum"
provider = [
  { label = "polygon-1", url = "https://polygon-mainnet.g.alchemy.com/v2/KEY", features = ["archive"] }
]

[chains.arbitrum]
shard = "layer2"
protocol = "ethereum"
provider = [
  { label = "arbitrum-1", url = "https://arb-mainnet.g.alchemy.com/v2/KEY", features = ["archive"] }
]

[chains.optimism]
shard = "layer2"
protocol = "ethereum"
provider = [
  { label = "optimism-1", url = "https://opt-mainnet.g.alchemy.com/v2/KEY", features = ["archive"] }
]

# Other networks on other shard
[chains.bsc]
shard = "other"
protocol = "ethereum"
provider = [
  { label = "bsc-1", url = "https://bsc-dataseed.binance.org", features = [] }
]

[chains.avalanche]
shard = "other"
protocol = "ethereum"
provider = [
  { label = "avalanche-1", url = "https://api.avax.network/ext/bc/C/rpc", features = [] }
]

[deployment]
# Network-specific deployment rules
[[deployment.rule]]
match = { network = "mainnet" }
shard = "primary"
indexers = ["index_node_mainnet_1", "index_node_mainnet_2"]

[[deployment.rule]]
match = { network = ["polygon", "arbitrum", "optimism"] }
shard = "layer2"
indexers = ["index_node_layer2_1", "index_node_layer2_2"]

[[deployment.rule]]
match = { network = ["bsc", "avalanche"] }
shard = "other"
indexers = ["index_node_other_1"]

# Default rule
[[deployment.rule]]
indexers = ["index_node_default"]
Running Graph Node with Configuration
# Using Docker
docker run -it \
  -v $(pwd)/config.toml:/config.toml \
  -p 8000:8000 \
  -p 8020:8020 \
  graphprotocol/graph-node:latest \
  --config /config.toml

# From source
cargo run -p graph-node --release -- --config config.toml
When using a configuration file, you cannot use the command-line options --postgres-url, --postgres-secondary-hosts, or --postgres-host-weights.
Provider Management
Multiple Providers per Network
Configure redundant providers for reliability:
[chains.mainnet]
shard = "primary"
provider = [
  # Primary provider with all features
  { label = "alchemy-mainnet", url = "https://eth-mainnet.g.alchemy.com/v2/YOUR_KEY", features = ["archive", "traces"], headers = { "X-Custom-Header" = "value" } },
  # Backup provider
  { label = "infura-mainnet", url = "https://mainnet.infura.io/v3/YOUR_KEY", features = ["archive"] },
  # Local node, limited to 5 subgraphs
  { label = "local-mainnet", url = "http://localhost:8545", features = ["archive", "traces"], limit = 5 }
]
Provider Limits
Control how many subgraphs can use each provider:
[chains.mainnet]
shard = "primary"
provider = [
  # Unlimited (default)
  { label = "primary", url = "https://primary.com/rpc", features = ["archive", "traces"] },
  # Limited per index node
  { label = "secondary", url = "https://secondary.com/rpc", features = ["archive"], match = [ { name = "index_node_1", limit = 10 }, { name = "index_node_2", limit = 5 } ] }
]
Network-Specific Mappings
Handle network-specific logic in your mappings:
import { dataSource } from '@graphprotocol/graph-ts'
// The event class and the entity share a name, so alias the event on import
import { Transfer as TransferEvent } from '../generated/Token/ERC20'
import { Transfer } from '../generated/schema'

export function handleTransfer(event: TransferEvent): void {
  let network = dataSource.network()

  // Network-specific handling
  if (network == 'mainnet') {
    // Ethereum mainnet logic
    handleMainnetTransfer(event)
  } else if (network == 'polygon') {
    // Polygon-specific logic
    handlePolygonTransfer(event)
  } else if (network == 'arbitrum-one') {
    // Arbitrum-specific logic
    handleArbitrumTransfer(event)
  }
}

function handleMainnetTransfer(event: TransferEvent): void {
  // Higher gas costs, different transaction patterns
  let entity = new Transfer(event.transaction.hash.toHex())
  entity.gasPrice = event.transaction.gasPrice
  entity.save()
}

function handlePolygonTransfer(event: TransferEvent): void {
  // Lower gas costs, higher throughput
  let entity = new Transfer(event.transaction.hash.toHex())
  entity.network = 'polygon'
  entity.save()
}
Monitoring Multiple Networks
Query indexing status across all networks:
query {
  indexingStatuses {
    subgraph
    synced
    health
    chains {
      network
      chainHeadBlock {
        number
      }
      latestBlock {
        number
      }
      earliestBlock {
        number
      }
    }
  }
}
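One way to run this query is against graph-node's index-node status endpoint, which listens on port 8030 by default (adjust the host and port to your deployment):

```shell
curl -s http://localhost:8030/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query":"{ indexingStatuses { subgraph synced health chains { network chainHeadBlock { number } latestBlock { number } } } }"}'
```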
Best Practices
Configuration
Separate shards by network load - Put high-volume networks on dedicated shards
Use multiple providers - Configure redundant RPC endpoints for each network
Set appropriate pool sizes - Allocate connection pools based on expected load
Use deployment rules - Route subgraphs to appropriate nodes and shards
Name chains consistently - Use standard network names that match graph-cli
Performance
Optimize startBlock per network - Different networks have different relevant start blocks
Use archive nodes - Required for historical queries on all networks
Enable tracing selectively - Only enable on providers/networks that support it
Monitor polling intervals - Adjust based on network block times
Scale resources per network - Allocate more resources to high-throughput networks
Reliability
Test each network separately - Deploy and verify on one network before expanding
Handle network-specific quirks - Account for differences in block times and reorg frequency
Set appropriate reorg thresholds - Different networks have different finality times
Monitor provider health - Track RPC endpoint availability and latency
Use provider limits wisely - Prevent overloading shared infrastructure
Maintenance
Version control configurations - Track changes to network configurations
Document network-specific behavior - Note any special handling per network
Plan for network upgrades - Be aware of hard forks and protocol changes
Test provider failover - Verify backup providers work correctly
Monitor costs - Track RPC usage across all networks
Environment Variables for Multi-Network
Key environment variables when running multiple networks:
ETHEREUM_REORG_THRESHOLD
Maximum expected reorg size, in blocks. May need to be set differently per network.
ETHEREUM_POLLING_INTERVAL
Polling interval in milliseconds. Consider network block times.
GRAPH_ETHEREUM_MAX_BLOCK_RANGE_SIZE
Maximum blocks to scan per request. Adjust based on network throughput.
GRAPH_ETHEREUM_TARGET_TRIGGERS_PER_BLOCK_RANGE
Target triggers per batch. Optimize based on event density.
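As a sketch, a multi-network node might start from settings like the following (the values shown are the commonly cited defaults; tune them per network):

```shell
export ETHEREUM_REORG_THRESHOLD=250              # blocks; raise on chains with deeper reorgs
export ETHEREUM_POLLING_INTERVAL=500             # ms; lower for fast-block networks
export GRAPH_ETHEREUM_MAX_BLOCK_RANGE_SIZE=2000  # shrink if providers rate-limit large scans
export GRAPH_ETHEREUM_TARGET_TRIGGERS_PER_BLOCK_RANGE=100
```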
Troubleshooting
Network Not Found
Problem: Error says the network is not configured.
Solutions:
Verify the network name in subgraph.yaml matches config.toml
Check that the chain section exists: [chains.network-name]
Ensure Graph Node was restarted after config changes
Validate the config file: graphman config check

Provider Issues
Problem: RPC provider errors or failures.
Solutions:
Test the provider URL directly with curl
Verify API keys are valid and not rate-limited
Check provider features match requirements (archive, traces)
Try a different transport (rpc vs ws)
Review provider logs for specific errors

Sync Delays
Problem: One network is syncing slower than others.
Solutions:
Check whether the provider is rate-limiting
Verify the provider has archive capability
Reduce GRAPH_ETHEREUM_MAX_BLOCK_RANGE_SIZE for that network
Allocate more database connections via pool_size
Consider moving the network to a dedicated shard

Deployment Routing
Problem: Subgraph deployed to the wrong shard or node.
Solutions:
Check that deployment rules match the subgraph name/network
Verify rule ordering (first match wins)
Test with: graphman config place myorg/subgraph network
Ensure indexer nodes are running with the correct node IDs
Review the [deployment] section in the config file
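For the "test the provider URL directly with curl" step, a standard JSON-RPC request works (substitute your own endpoint and key):

```shell
curl -s -X POST -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  https://eth-mainnet.g.alchemy.com/v2/YOUR_KEY
```

A healthy endpoint returns a JSON body whose result field is the latest block number in hex; an auth or rate-limit problem usually returns an error object instead.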
Next Steps