Overview
DynamicGraphNeuralNetwork extends static GNN architectures to handle temporal sequences of graphs. It includes built-in temporal aggregation methods (mean, max, GRU) and end-to-end classification.
Class Signature
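The signature itself did not survive on this page. A plausible reconstruction from the parameter list below might look as follows — parameter names other than input_dim, hidden_dim, output_dim, and num_classes (which appear elsewhere in this document) and all defaults are assumptions, not the library's exact API:

```python
# Hypothetical signature sketch; the real class is a torch.nn.Module
# subclass. Names and defaults marked "assumed" are guesses.
class DynamicGraphNeuralNetwork:
    def __init__(
        self,
        input_dim,               # raw node feature dimensionality
        hidden_dim,              # hidden dim for GNN layers after projection
        output_dim,              # output dim of the final GNN layer
        num_classes,             # number of classification classes
        dropout=0.5,             # assumed default
        use_pooling=True,        # assumed name: enable TopK pooling
        pool_ratio=0.8,          # assumed name: TopK retention ratio
        gnn_type="GCN",          # "GCN", "GAT", or "GraphSAGE"
        temporal_agg="mean",     # assumed name: "mean", "max", or "gru"
        num_layers=2,            # assumed default
        activation="relu",       # "relu", "leaky_relu", "elu", or "gelu"
    ):
        self.input_dim = input_dim
        self.hidden_dim = hidden_dim
        self.output_dim = output_dim
        self.num_classes = num_classes
```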
Parameters
- Raw node feature dimensionality (e.g., number of brain regions)
- Hidden dimension for GNN layers after input projection
- Output dimension of the final GNN layer (before pooling)
- Number of classification classes
- Dropout probability
- Whether to use TopK pooling layers
- Ratio of nodes to retain in TopK pooling
- GNN layer type: "GCN", "GAT", or "GraphSAGE"
- Temporal aggregation method: "mean", "max", or "gru"
- Number of GNN layers
- Activation function: "relu", "leaky_relu", "elu", or "gelu"

Forward Method (Single Graph)
Encodes a single graph — for example, one snapshot yielded by a TemporalDataLoader — into a graph-level embedding.
Parameters
- Node features of shape [num_nodes, input_dim]
- Edge indices of shape [2, num_edges]
- Batch assignment for nodes
- Optional time features (not used in this encoder)
Returns
Graph embeddings of shape [batch_size, 2 * output_dim]

Forward Sequence Method
Parameters
- List of node feature tensors, each of shape [num_nodes_t, input_dim]
- List of edge index tensors, one per time step
- List of batch tensors, one per time step
Returns
Classification logits of shape [batch_size, num_classes]

Architecture Details
Spatial Processing
- Input projection: input_dim → hidden_dim
- GNN layers with GraphNorm
- Optional TopK pooling after each layer
- Global mean + max pooling → 2 * output_dim
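As a shape check for the last step above, the concatenated global mean and max pooling can be sketched in dependency-free Python (the node embeddings are illustrative toy data, not real GNN outputs):

```python
# Toy node embeddings after the final GNN layer: 4 nodes, output_dim = 3.
output_dim = 3
node_emb = [
    [0.1, 0.5, 0.2],
    [0.4, 0.3, 0.9],
    [0.7, 0.1, 0.6],
    [0.2, 0.8, 0.3],
]

# Global mean pooling and global max pooling over the node dimension.
mean_pool = [sum(col) / len(col) for col in zip(*node_emb)]
max_pool = [max(col) for col in zip(*node_emb)]

# Concatenating the two yields the graph embedding of size 2 * output_dim.
graph_emb = mean_pool + max_pool
assert len(graph_emb) == 2 * output_dim
```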
Temporal Aggregation
Mean aggregation: average embeddings across time steps

Classification

Final linear layer: temporal_dim → num_classes
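A minimal sketch of mean temporal aggregation followed by the final linear layer, shapes only — the embeddings and weights are random placeholders, and temporal_dim = 2 * output_dim is an assumption based on the pooling output size described above:

```python
import random

temporal_dim = 4   # assumed equal to 2 * output_dim (the pooled embedding size)
num_classes = 3
num_steps = 5

# One graph embedding per time step (illustrative random data).
embeddings = [[random.random() for _ in range(temporal_dim)]
              for _ in range(num_steps)]

# Mean aggregation: average embeddings across time steps.
agg = [sum(col) / num_steps for col in zip(*embeddings)]

# Final linear layer: temporal_dim -> num_classes (random weights, no bias).
W = [[random.random() for _ in range(temporal_dim)]
     for _ in range(num_classes)]
logits = [sum(w * a for w, a in zip(row, agg)) for row in W]
assert len(logits) == num_classes
```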
Methods
load_state_dict_flexible
Parameters

- State dictionary to load
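The document does not show the implementation, but a common pattern for a flexible loader keeps only the keys whose names and shapes match the current model and skips the rest. A sketch of that idea on plain dicts (not the library's actual code, which would operate on tensors):

```python
def load_state_dict_flexible(model_state, incoming_state):
    """Merge incoming_state into model_state, skipping keys that are
    absent from the model or whose value sizes do not match.
    Hypothetical sketch using plain dicts of lists."""
    loaded, skipped = {}, []
    for key, value in incoming_state.items():
        if key in model_state and len(value) == len(model_state[key]):
            loaded[key] = value
        else:
            skipped.append(key)
    model_state.update(loaded)
    return skipped  # keys that could not be loaded

# Example: "w" matches and is loaded; "extra" has no counterpart.
model = {"w": [0.0, 0.0], "b": [0.0]}
skipped = load_state_dict_flexible(model, {"w": [1.0, 2.0], "extra": [3.0]})
```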
Notes
- The forward() method processes single graphs, while forward_sequence() handles temporal sequences
- TopK pooling ratio is clamped to a minimum of 30% for stability
- When using GRU aggregation, the hidden size equals 2 * output_dim
- Compatible with both single-graph and multi-graph temporal workflows
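To make the GRU note concrete, here is a dependency-free sketch of a single-layer GRU run over a sequence of per-time-step graph embeddings, with both input and hidden size set to 2 * output_dim; the weights are random and the cell itself is a textbook GRU, not the library's implementation:

```python
import math
import random

random.seed(0)

output_dim = 2
dim = 2 * output_dim  # per the notes, GRU hidden size equals 2 * output_dim

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def rand_mat(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)]
            for _ in range(rows)]

# Random GRU weights; inputs and hidden state both have size dim.
Wz, Uz = rand_mat(dim, dim), rand_mat(dim, dim)
Wr, Ur = rand_mat(dim, dim), rand_mat(dim, dim)
Wh, Uh = rand_mat(dim, dim), rand_mat(dim, dim)

def gru_step(x, h):
    # Update gate, reset gate, candidate state, then the blended new state.
    z = [sigmoid(a + b) for a, b in zip(matvec(Wz, x), matvec(Uz, h))]
    r = [sigmoid(a + b) for a, b in zip(matvec(Wr, x), matvec(Ur, h))]
    rh = [ri * hi for ri, hi in zip(r, h)]
    h_new = [math.tanh(a + b) for a, b in zip(matvec(Wh, x), matvec(Uh, rh))]
    return [(1 - zi) * hi + zi * hn for zi, hi, hn in zip(z, h, h_new)]

# Run the GRU over three time steps of (random) graph embeddings.
h = [0.0] * dim
for _ in range(3):
    x = [random.random() for _ in range(dim)]
    h = gru_step(x, h)

assert len(h) == 2 * output_dim  # final temporal embedding size
```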