This guide covers the complete workflow for deploying subgraphs to a Graph Node instance, from development to production.
Prerequisites
Before deploying a subgraph, ensure you have:
- A running Graph Node instance
- PostgreSQL database configured and accessible
- IPFS node running and accessible
- An Ethereum node or provider endpoint
- Graph CLI or GND installed
Installation
```shell
# Install the Graph CLI
npm install -g @graphprotocol/graph-cli
# or
yarn global add @graphprotocol/graph-cli

# Optionally, install gnd from source
cargo install --git https://github.com/graphprotocol/graph-node gnd
```
Setting Up Graph Node
Using Docker Compose
The quickest way to get started is using Docker Compose:
```shell
# Clone the repository
git clone https://github.com/graphprotocol/graph-node
cd graph-node/docker

# Start all services
docker-compose up
```
This starts:
- Graph Node on `http://localhost:8000` (GraphQL HTTP)
- IPFS on `http://localhost:5001`
- PostgreSQL on `localhost:5432`
- Admin API on `http://localhost:8020`
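The compose file ships with a default `ethereum` endpoint; to index a real network you typically edit the `graph-node` service's environment in `docker-compose.yml`. A minimal sketch (the endpoint URL and key are placeholders):

```yaml
# docker-compose.yml (excerpt); the RPC URL below is a placeholder
services:
  graph-node:
    environment:
      ipfs: 'ipfs:5001'
      ethereum: 'mainnet:https://eth-mainnet.g.alchemy.com/v2/YOUR_KEY'
      GRAPH_LOG: info
```

Restart with `docker-compose up -d` after editing so the new endpoint takes effect.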
Running from Source
```shell
# Install dependencies
sudo apt-get install -y postgresql postgresql-contrib libpq-dev

# Create database
psql -U postgres <<EOF
create user graph with password 'password';
create database "graph-node" with owner=graph template=template0 encoding='UTF8' locale='C';
\c graph-node
create extension pg_trgm;
create extension btree_gist;
create extension postgres_fdw;
grant usage on foreign data wrapper postgres_fdw to graph;
EOF

# Set environment variable
export POSTGRES_URL=postgresql://graph:password@localhost:5432/graph-node

# Build and run
export GRAPH_LOG=debug
cargo run -p graph-node --release -- \
  --postgres-url $POSTGRES_URL \
  --ethereum-rpc mainnet:archive,traces:https://eth-mainnet.g.alchemy.com/v2/YOUR_KEY \
  --ipfs 127.0.0.1:5001
```
Using Existing Services
If you already have IPFS and PostgreSQL running:
```shell
docker run -it \
  -e postgres_host=<HOST> \
  -e postgres_port=<PORT> \
  -e postgres_user=<USER> \
  -e postgres_pass=<PASSWORD> \
  -e postgres_db=<DBNAME> \
  -e ipfs=<HOST>:<PORT> \
  -e ethereum=<NETWORK_NAME>:<ETHEREUM_RPC_URL> \
  graphprotocol/graph-node:latest
```
Creating a Subgraph
Initialize Project
```shell
# Initialize a new subgraph
graph init --product hosted-service myorg/my-subgraph

# Or from an existing contract
graph init \
  --product hosted-service \
  --from-contract 0x1234567890123456789012345678901234567890 \
  --network mainnet \
  myorg/my-subgraph
```
This creates a project structure:
```text
my-subgraph/
├── abis/
│   └── Contract.json
├── src/
│   └── mapping.ts
├── schema.graphql
├── subgraph.yaml
└── package.json
```
Define Schema
Edit `schema.graphql` to define your data model:

```graphql
type User @entity {
  id: ID!
  address: Bytes!
  transactions: [Transaction!]! @derivedFrom(field: "user")
  totalVolume: BigInt!
  createdAt: BigInt!
}

type Transaction @entity {
  id: ID!
  user: User!
  amount: BigInt!
  timestamp: BigInt!
  blockNumber: BigInt!
}
```
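Note that `@derivedFrom(field: "user")` makes `User.transactions` a virtual field: it is resolved at query time by following `Transaction.user`, so mappings never write to it directly. Once deployed, it can still be queried like a stored field; a sketch (the `id` is a placeholder):

```graphql
{
  user(id: "0x1234567890123456789012345678901234567890") {
    totalVolume
    transactions(first: 5) {
      amount
      timestamp
    }
  }
}
```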
Edit `subgraph.yaml` to configure your data sources:

```yaml
specVersion: 0.0.8
schema:
  file: ./schema.graphql
dataSources:
  - kind: ethereum/contract
    name: MyContract
    network: mainnet
    source:
      address: "0x1234567890123456789012345678901234567890"
      abi: MyContract
      startBlock: 12345678
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      entities:
        - User
        - Transaction
      abis:
        - name: MyContract
          file: ./abis/MyContract.json
      eventHandlers:
        - event: Transfer(indexed address,indexed address,uint256)
          handler: handleTransfer
      file: ./src/mapping.ts
```
Write Mappings
Implement event handlers in `src/mapping.ts`:

```typescript
import { Transfer } from '../generated/MyContract/MyContract'
import { User, Transaction } from '../generated/schema'
import { BigInt } from '@graphprotocol/graph-ts'

export function handleTransfer(event: Transfer): void {
  // Load or create user
  let user = User.load(event.params.to.toHex())
  if (!user) {
    user = new User(event.params.to.toHex())
    user.address = event.params.to
    user.totalVolume = BigInt.fromI32(0)
    user.createdAt = event.block.timestamp
  }
  user.totalVolume = user.totalVolume.plus(event.params.value)
  user.save()

  // Create transaction; include the log index in the ID so that
  // several Transfer events in one transaction don't overwrite each other
  let transaction = new Transaction(
    event.transaction.hash.toHex() + '-' + event.logIndex.toString()
  )
  transaction.user = user.id
  transaction.amount = event.params.value
  transaction.timestamp = event.block.timestamp
  transaction.blockNumber = event.block.number
  transaction.save()
}
```
Building the Subgraph
Generate types and compile the subgraph:
```shell
# Install dependencies
npm install

# Generate types from schema and ABIs
graph codegen

# Build the subgraph
graph build
```

`graph codegen` generates AssemblyScript types from the GraphQL schema and contract ABIs. `graph build` compiles the mappings to WebAssembly and prepares the subgraph for deployment.
Deploying the Subgraph
Create Subgraph
First, create the subgraph on your Graph Node:
```shell
# Create the subgraph
graph create --node http://localhost:8020 myorg/my-subgraph
```
Deploy
Deploy your subgraph to the Graph Node:
```shell
# Deploy to local node
graph deploy --node http://localhost:8020 --ipfs http://localhost:5001 myorg/my-subgraph

# With version label
graph deploy --node http://localhost:8020 --ipfs http://localhost:5001 myorg/my-subgraph --version-label v0.1.0
```
The deployment process:
- Uploads files to IPFS
- Creates a deployment with a unique ID
- Starts indexing from `startBlock`
- Makes the subgraph available for querying
Querying the Subgraph
Once deployed, you can query your subgraph:
GraphQL Playground

Navigate to `http://localhost:8000/subgraphs/name/myorg/my-subgraph` to access the GraphQL playground.

HTTP Query

```shell
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"query": "{ users(first: 5) { id address totalVolume } }"}' \
  http://localhost:8000/subgraphs/name/myorg/my-subgraph
```

JavaScript

```javascript
const query = `
  query {
    users(first: 5, orderBy: totalVolume, orderDirection: desc) {
      id
      address
      totalVolume
      transactions(first: 10) {
        amount
        timestamp
      }
    }
  }
`

const response = await fetch('http://localhost:8000/subgraphs/name/myorg/my-subgraph', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query })
})
const data = await response.json()
console.log(data.data.users)
```
Managing Deployments
Check Status
```shell
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"query": "{ indexingStatuses { subgraph synced health fatalError { message } chains { network latestBlock { number } chainHeadBlock { number } } } }"}' \
  http://localhost:8000/graphql
```
Remove Subgraph
```shell
graph remove --node http://localhost:8020 myorg/my-subgraph
```
Reassign Subgraph
Move a subgraph to a different node:
```shell
graphman --config config.toml reassign myorg/my-subgraph node-id-2
```
Environment Variables
Key environment variables for deployment:

- `GRAPH_LOG` - Controls the log level. Options: `error`, `warn`, `info`, `debug`, `trace`
- `ETHEREUM_REORG_THRESHOLD` - Maximum expected reorg size; larger reorgs may cause inconsistent data
- `ETHEREUM_POLLING_INTERVAL` - How often to poll Ethereum for new blocks, in milliseconds
- `GRAPH_ETHEREUM_TARGET_TRIGGERS_PER_BLOCK_RANGE` - The ideal number of triggers to process in one batch
- `GRAPH_IPFS_TIMEOUT` - Timeout for IPFS requests, in seconds
- `GRAPH_GRAPHQL_QUERY_TIMEOUT` - Maximum execution time for a GraphQL query, in seconds; unlimited by default
- `GRAPH_KILL_IF_UNRESPONSIVE` - If set, the process is killed if it becomes unresponsive
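These variables are set in the environment of the Graph Node process (shell, systemd unit, or Docker `environment` block) before it starts. A sketch with illustrative values, not tuned recommendations:

```shell
# Illustrative values only; tune for your workload
export GRAPH_LOG=info
export ETHEREUM_POLLING_INTERVAL=1000    # poll for new blocks every 1000 ms
export GRAPH_GRAPHQL_QUERY_TIMEOUT=30    # abort queries running longer than 30 s
```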
Troubleshooting
Connection Issues

Problem: Graph Node can't connect to Ethereum.

Solutions:
- Verify the RPC endpoint is accessible
- Check that the network name matches (e.g., `mainnet`, not `ethereum`)
- Ensure the provider has the required capabilities (archive, traces)
- Test the connection:

```shell
curl -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  YOUR_RPC_URL
```

Sync Issues

Problem: Subgraph not syncing, or syncing slowly.

Solutions:
- Check that `startBlock` is set appropriately
- Verify `ETHEREUM_POLLING_INTERVAL` isn't too high
- Check the logs for errors: `docker logs graph-node`
- Ensure the database has sufficient resources
- Monitor block processing by querying the `indexingStatuses` endpoint

Deployment Failures

Problem: Deployment fails or shows errors.

Solutions:
- Run `graph build` to check for compilation errors
- Verify all ABIs are valid JSON
- Check that event signatures match the contract exactly
- Ensure `specVersion` matches the features used
- Review mapping handler logic for runtime errors

Query Errors

Problem: GraphQL queries fail or time out.

Solutions:
- Simplify the query to isolate the issue
- Add pagination with `first` and `skip`
- Check that query complexity isn't too high
- Set `GRAPH_GRAPHQL_MAX_COMPLEXITY` if needed
- Review entity relationships for N+1 issues
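For instance, the earlier `users` query can be fetched a page at a time instead of in one shot; a sketch using `first`/`skip`:

```graphql
{
  users(first: 100, skip: 200, orderBy: createdAt) {
    id
    totalVolume
  }
}
```

Increase `skip` by `first` for each page. For large collections, offset pagination gets slow; filtering on the last seen ID (e.g., `where: { id_gt: $lastId }`) scales better.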
Best Practices
Development

- Test locally first - Always test with a local Graph Node before deploying to production
- Use version labels - Tag deployments with semantic versions for tracking
- Start small - Begin with a limited block range, then expand
- Monitor resources - Watch CPU, memory, and disk usage during indexing
- Handle errors gracefully - Check `load()` results for null before use; AssemblyScript mappings have no try-catch

Performance

- Set an appropriate `startBlock` - Don't index from genesis if not needed
- Optimize queries - Avoid deeply nested queries and use pagination
- Use entity loading efficiently - Load entities once and reuse them
- Batch operations - Group related operations together
- Monitor indexing speed - Track blocks per second and adjust resources

Production

- Use archive nodes - Required for historical state queries
- Enable tracing - Needed if using call handlers
- Set up monitoring - Track indexing status and query performance
- Plan for reorgs - Set an appropriate `ETHEREUM_REORG_THRESHOLD`
- Back up data - Regularly back up the PostgreSQL database
- Use redundant providers - Configure multiple RPC endpoints
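Multiple providers per chain are configured through Graph Node's TOML configuration file rather than the `--ethereum-rpc` flag. A sketch, with placeholder labels and URLs (a full `config.toml` also needs store and deployment sections):

```toml
# config.toml (excerpt): two mainnet providers; Graph Node falls
# back to the second when the first fails
[chains.mainnet]
shard = "primary"
provider = [
  { label = "primary-rpc", url = "https://eth-mainnet.example.com", features = ["archive", "traces"] },
  { label = "backup-rpc", url = "https://eth-backup.example.com", features = ["archive"] }
]
```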