
Nodes

Nodes are the fundamental building blocks of Hypergraph computation graphs. Each node represents a unit of computation with defined inputs and outputs.

Node Decorators

@node

Decorator to wrap a function as a FunctionNode.

from hypergraph import node

@node(output_name="result")
def process(x: int) -> int:
    return x * 2
Parameters:
  • source (Callable | None): The function to wrap (when used without parentheses)
  • output_name (str | tuple[str, ...] | None): Name(s) for the output value(s). If None, creates a side-effect-only node with outputs = ()
  • rename_inputs (dict[str, str] | None): Mapping to rename inputs, {old: new}
  • cache (bool, default: False): Whether to cache results. Requires a cache backend on the runner
  • hide (bool, default: False): Whether to hide the node from visualization
  • emit (str | tuple[str, ...] | None): Ordering-only output name(s). Auto-produced with a sentinel value when the node runs
  • wait_for (str | tuple[str, ...] | None): Ordering-only input name(s). The node won't run until these values exist and are fresh

Returns:
  • FunctionNode: a FunctionNode instance that wraps the function

If the function has a return type annotation but no output_name is provided, a warning is emitted to help catch mistakes.
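Because source is optional, @node can evidently be applied both bare and with keyword arguments. A minimal plain-Python sketch of that dual-use decorator pattern (the names here are illustrative, not Hypergraph's internals):

```python
from functools import wraps

def node(source=None, *, output_name=None):
    """Decorator usable as @node or @node(output_name=...)."""
    def wrap(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)
        wrapper.output_name = output_name  # metadata a graph builder would read
        return wrapper

    if source is not None:     # used bare: @node
        return wrap(source)
    return wrap                # used with arguments: @node(output_name="result")

@node
def plain(x):
    return x + 1

@node(output_name="result")
def named(x):
    return x * 2
```

The branch on source is what lets one decorator serve both call styles: called bare it receives the function directly, called with arguments it returns the real decorator.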

@interrupt

Decorator to create a pause point for human-in-the-loop workflows.

from hypergraph import interrupt

@interrupt(output_name="decision")
def approval(draft: str) -> str:
    return "auto-approved"  # returns a value -> auto-resolve
    # return None           # returns None -> pause
Parameters:
  • output_name (str | tuple[str, ...], required): Name(s) for the output value(s). Required, since it defines where human responses are written
  • rename_inputs (dict[str, str] | None): Mapping to rename inputs, {old: new}
  • cache (bool, default: False): Whether to cache results
  • emit (str | tuple[str, ...] | None): Ordering-only output name(s)
  • wait_for (str | tuple[str, ...] | None): Ordering-only input name(s)
  • hide (bool, default: False): Whether to hide the node from visualization

Returns:
  • InterruptNode: an InterruptNode instance
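The pause/auto-resolve rule can be sketched in plain Python (run_interrupt is a hypothetical helper for illustration, not Hypergraph's runner):

```python
def run_interrupt(handler, **inputs):
    """Illustrative dispatch: a returned value auto-resolves, None pauses."""
    result = handler(**inputs)
    if result is None:
        return ("paused", None)       # wait for a human to supply the output
    return ("resolved", result)       # the handler auto-resolved the interrupt

def auto(draft: str):
    return "auto-approved"

def manual(draft: str):
    return None  # always defer to a human

status, value = run_interrupt(auto, draft="d1")
status2, _ = run_interrupt(manual, draft="d2")
```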

Node Classes

HyperNode

Abstract base class for all node types with shared rename functionality.
Attributes:
  • name (str): Public node name
  • inputs (tuple[str, ...]): Input parameter names
  • outputs (tuple[str, ...]): Output value names

Properties

  • definition_hash (str): SHA256 hash of the node's definition, used for caching and change detection
  • is_async (bool): Whether this node requires async execution (default: False)
  • is_generator (bool): Whether this node yields multiple values (default: False)
  • is_interrupt (bool): Whether this is a pause point for human-in-the-loop (default: False)
  • cache (bool): Whether results should be cached (default: False)
  • hide (bool): Whether this node is hidden from visualization (default: False)
  • wait_for (tuple[str, ...]): Ordering-only inputs this node waits for
  • data_outputs (tuple[str, ...]): Outputs that carry data (excludes emit-only outputs)
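To illustrate why a definition hash is useful for cache invalidation, here is a simplified sketch that hashes a few stable parts of a node's definition (Hypergraph's actual hash may cover more, such as source code or bytecode):

```python
import hashlib

def definition_hash(func, outputs=()):
    """Illustrative: hash stable parts of a node's definition so any
    change to the definition produces a different key."""
    payload = repr((func.__name__, func.__annotations__, tuple(outputs)))
    return hashlib.sha256(payload.encode()).hexdigest()

def double(x: int) -> int:
    return x * 2

h1 = definition_hash(double, outputs=("doubled",))
h2 = definition_hash(double, outputs=("doubled",))
```

The hash is deterministic for an unchanged definition, so cached results keyed by it stay valid until the node changes.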

Methods

  • with_name(name: str) -> HyperNode: Return a new node with a different name
  • with_inputs(mapping: dict[str, str] | None = None, **kwargs: str) -> HyperNode: Return a new node with renamed inputs
  • with_outputs(mapping: dict[str, str] | None = None, **kwargs: str) -> HyperNode: Return a new node with renamed outputs
  • has_default_for(param: str) -> bool: Check whether the node has a fallback value for an input parameter
  • get_default_for(param: str) -> Any: Get the fallback value for an input parameter. Raises KeyError if none exists
  • get_input_type(param: str) -> type | None: Get the expected type of an input parameter from annotations
  • get_output_type(output: str) -> type | None: Get the type of an output value from annotations
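The with_* methods return new nodes rather than mutating in place. A minimal sketch of that copy-on-rename pattern (SketchNode is illustrative, not Hypergraph's implementation):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SketchNode:
    name: str
    inputs: tuple

    def with_name(self, name):
        return replace(self, name=name)   # new instance; original untouched

    def with_inputs(self, mapping=None, **kwargs):
        mapping = {**(mapping or {}), **kwargs}
        renamed = tuple(mapping.get(i, i) for i in self.inputs)
        return replace(self, inputs=renamed)

n = SketchNode("double", ("x",))
m = n.with_name("triple").with_inputs(x="value")
```

Immutability makes renames safe to chain and lets the same node be reused in several graphs under different names.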

FunctionNode

Wraps a Python function as a graph node. Created via @node decorator or FunctionNode() constructor.
from hypergraph import FunctionNode

def double(x: int) -> int:
    return x * 2

node = FunctionNode(double, output_name="doubled")
Supports all four execution modes:
  • Sync functions: def func(...)
  • Async functions: async def func(...)
  • Sync generators: def func(...): yield ...
  • Async generators: async def func(...): yield ...
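The standard library can distinguish these four function kinds, which is presumably how a node wrapper would classify them (execution_mode is an illustrative helper):

```python
import inspect

def execution_mode(func):
    """Illustrative classification of the four supported function kinds."""
    if inspect.isasyncgenfunction(func):
        return "async generator"
    if inspect.iscoroutinefunction(func):
        return "async"
    if inspect.isgeneratorfunction(func):
        return "sync generator"
    return "sync"

def a(): ...
async def b(): ...
def c(): yield 1
async def d(): yield 1
```

Note the order of checks matters: async generators also look async, so they must be tested first.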
Parameters:
  • source (Callable | FunctionNode, required): Function to wrap, or an existing FunctionNode (extracts .func)
  • name (str | None): Public node name (default: func.__name__)
  • output_name (str | tuple[str, ...] | None): Name(s) for the output value(s). If None, outputs = () (side-effect-only node)
  • rename_inputs (dict[str, str] | None): Mapping to rename inputs, {old: new}
  • cache (bool, default: False): Whether to cache results
  • hide (bool, default: False): Whether to hide from visualization
  • emit (str | tuple[str, ...] | None): Ordering-only output name(s)
  • wait_for (str | tuple[str, ...] | None): Ordering-only input name(s)

Properties

  • func (Callable): The wrapped function
  • is_async (bool): True for async def functions and async generators
  • is_generator (bool): True if the function yields multiple values
  • output_annotation (dict[str, Any]): Type annotations for output values. For a single output, maps output_name to the return type; for multiple outputs with a tuple return type, maps each output to the corresponding tuple element type
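That single-versus-tuple mapping can be reproduced with the typing module. A sketch of the idea (output_annotation here is an illustrative standalone function, not the property itself):

```python
from typing import get_args, get_origin, get_type_hints

def output_annotation(func, output_names):
    """Illustrative: map output names to return-type annotations."""
    ret = get_type_hints(func).get("return")
    if ret is None:
        return {}
    if len(output_names) == 1:
        return {output_names[0]: ret}          # single output: whole return type
    if get_origin(ret) is tuple:
        return dict(zip(output_names, get_args(ret)))  # one element type per output
    return {}

def double(x: int) -> int:
    return x * 2

def split(s: str) -> tuple[str, int]:
    return s, len(s)
```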

GraphNode

Wraps a Graph for use as a node in another graph, enabling hierarchical composition.
from hypergraph import Graph

inner = Graph([...], name="preprocess")
outer = Graph([inner.as_node(), ...])
Create via Graph.as_node() rather than directly.
Parameters:
  • graph (Graph, required): The graph to wrap
  • name (str | None): Node name (default: graph.name, if set)

Properties

  • graph (Graph): The wrapped graph
  • definition_hash (str): Hash of the nested graph (delegates to graph.definition_hash)
  • is_async (bool): True if the nested graph contains any async nodes
  • map_config (tuple[list[str], Literal['zip', 'product'], ErrorHandling] | None): Map configuration if set via map_over(), else None

Methods

map_over(*params: str, mode: Literal['zip', 'product'] = 'zip', error_handling: Literal['raise', 'continue'] = 'raise', clone: bool | list[str] = False) -> GraphNode
Configures this GraphNode to iterate over input parameters. When configured, the runner executes the inner graph once per combination of values, and its outputs become lists of results.
  • params: Input parameter names to iterate over (these should receive list values at runtime)
  • mode: "zip" for parallel iteration (default) or "product" for the Cartesian product
  • error_handling: "raise" (default) stops on the first failure; "continue" collects partial results
  • clone: Controls per-iteration deep-copying of broadcast values. False (default) shares by reference, True deep-copies everything, and a list of parameter names deep-copies selectively
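The two modes behave like Python's built-in zip and itertools.product. A plain-Python sketch of the mapping semantics (map_over_values is a hypothetical helper, not the runner itself):

```python
from itertools import product

def map_over_values(inner, param_values, mode="zip", error_handling="raise"):
    """Run `inner` once per combination of list-valued params, collecting results."""
    names = list(param_values)
    combos = (zip(*param_values.values()) if mode == "zip"
              else product(*param_values.values()))
    results = []
    for combo in combos:
        try:
            results.append(inner(**dict(zip(names, combo))))
        except Exception:
            if error_handling == "raise":
                raise
            results.append(None)  # "continue": keep partial results
    return results

def add(a, b):
    return a + b
```

With "zip", the i-th values of each parameter are paired; with "product", every combination runs, so two lists of length 2 yield four results.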

InterruptNode

Pause point for human-in-the-loop workflows. Identical to FunctionNode except:
  • output_name is required (must define where responses go)
  • is_interrupt property returns True
  • Handler returning None pauses for human input
  • Handler returning a value auto-resolves
from hypergraph import InterruptNode

def approval(draft: str) -> str:
    return "auto-approved"  # auto-resolve
    # return None         # pause

node = InterruptNode(approval, output_name="decision")
Parameters:
  • source (Callable | FunctionNode, required): Function to wrap
  • name (str | None): Public node name (default: func.__name__)
  • output_name (str | tuple[str, ...], required): Name(s) for the output value(s). Required for interrupts
  • rename_inputs (dict[str, str] | None): Mapping to rename inputs, {old: new}
  • cache (bool, default: False): Whether to cache results
  • hide (bool, default: False): Whether to hide from visualization
  • emit (str | tuple[str, ...] | None): Ordering-only output name(s)
  • wait_for (str | tuple[str, ...] | None): Ordering-only input name(s)
