This module provides functions for computing Heat Kernel Signatures (HKS) on meshes. HKS is a shape descriptor that captures geometric features at multiple spatial scales, making it well suited to analyzing complex neural morphologies.
Overview
The HKS pipeline processes large neuron meshes through several stages:
- Mesh simplification - Reduces vertex count while preserving shape
- Mesh splitting - Divides large meshes into overlapping chunks for efficient computation
- HKS computation - Calculates heat kernel signatures at multiple timescales
- Agglomeration - Groups vertices into local domains based on HKS similarity
- Feature aggregation - Computes summary statistics for each domain
Result Types
Result
A NamedTuple containing the full pipeline output. Its fields hold:
- The simplified mesh as (vertices, faces).
- An array mapping original mesh vertices to simplified mesh vertices; its length equals the original vertex count.
- An object containing information about mesh chunks and overlaps.
- HKS features for each vertex in the simplified mesh.
- Domain labels for each vertex in the simplified mesh.
- Aggregated features for each domain.
- Domain labels for each vertex in the original mesh.
- The edge table for the domain graph.
- Timing information for each pipeline stage.
CondensedHKSResult
A NamedTuple containing condensed pipeline output with domain-level features. Its fields hold:
- The simplified mesh as (vertices, faces).
- An array mapping original to simplified mesh vertices.
- Mesh chunk information.
- Domain labels for simplified mesh vertices.
- Domain labels for original mesh vertices.
- Condensed HKS features computed per chunk, then aggregated.
- A node table with domain centroids and auxiliary features.
- The edge table for the domain graph.
- Timing information for each pipeline stage.
Functions
chunked_hks_pipeline
Compute HKS on a mesh using a chunked approach for memory efficiency.

Mesh Parameters
- Input mesh as a (vertices, faces) tuple or an object with vertices and faces attributes.
- query_indices - Vertex indices to compute HKS for. If None, computes for all vertices. Use this to focus computation on specific regions.
Simplification Parameters
- Decimation aggressiveness (0-10). Higher values = faster but lower quality. 0 preserves geometry, 10 is fastest.
- simplify_target_reduction - Fraction of triangles to remove. 0.7 means remove 70% of faces, keeping 30%.
Splitting Parameters
- overlap_distance - Geodesic distance (nm) for overlapping chunks. Larger values reduce distortion but increase computation.
- max_vertex_threshold - Maximum vertices per chunk before splitting.
- Minimum vertices for a chunk to be included. Filters out small disconnected pieces.
- Maximum neighbors when overlapping chunks. Overrides overlap_distance if reached first.
HKS Parameters
- Number of HKS timescales. Determines feature dimensionality.
- Minimum timescale for HKS. Captures fine geometric details.
- Maximum timescale for HKS. Captures coarse geometric features.
- max_eigenvalue - Maximum eigenvalue for computation. Larger values = more detail but slower.
- Use a robust Laplacian for better handling of degenerate meshes. Recommended.
- Mollification factor for the robust Laplacian.
- Truncate extra eigenpairs computed beyond max_eigenvalue.
- Drop the first eigenpair (proportional to vertex areas).
- decomposition_dtype - Data type for eigendecomposition. float64 is more accurate; float32 is faster.
- Additional keyword arguments for HKS computation.
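To make the timescale parameters concrete, here is a minimal sketch of the HKS itself, computed from an eigendecomposition. The pipeline uses a robust mesh Laplacian; the tiny path-graph Laplacian below is an assumption made only so the example is self-contained.

```python
import numpy as np

# Minimal HKS sketch. A small path-graph Laplacian stands in for the
# robust mesh Laplacian so the example runs on its own.
n = 20
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1  # endpoint vertices have a single neighbor

evals, evecs = np.linalg.eigh(L)  # eigenvalues in ascending order

# Log-spaced timescales between the min and max timescale parameters;
# the number of timescales sets the feature dimensionality.
n_scales, t_min, t_max = 8, 1e-2, 1e2
ts = np.geomspace(t_min, t_max, n_scales)

# HKS(x, t) = sum_i exp(-lambda_i * t) * phi_i(x)^2  (Sun et al., 2008)
hks = (np.exp(-evals[None, :, None] * ts[None, None, :])
       * (evecs ** 2)[:, :, None]).sum(axis=1)  # shape: (n, n_scales)
```

Small timescales weight high-frequency eigenpairs (fine detail); large timescales are dominated by the low end of the spectrum (coarse shape), which is why the HKS value at each vertex decays as t grows.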
Agglomeration Parameters
- Threshold for agglomerating vertices based on HKS feature distance.
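The effect of the threshold can be sketched with plain Ward linkage on HKS-like feature vectors. Note this is an assumption-laden stand-in: the pipeline's agglomeration additionally constrains merges by mesh connectivity (see Pipeline Steps), which unconstrained scipy linkage does not do.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Two well-separated groups of synthetic per-vertex feature vectors.
rng = np.random.default_rng(0)
features = np.vstack([
    rng.normal(0.0, 0.05, size=(30, 4)),  # one tight cluster
    rng.normal(1.0, 0.05, size=(30, 4)),  # a distant second cluster
])

Z = linkage(features, method="ward")
# Merges whose Ward cost exceeds the threshold are not applied, so
# each resulting label corresponds to one local domain.
labels = fcluster(Z, t=2.0, criterion="distance")
```

Lowering the threshold yields more, smaller domains; raising it merges vertices with increasingly dissimilar HKS features.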
Auxiliary Parameters
- Nucleus coordinates (x, y, z). If provided, computes distance from each vertex to the nucleus.
- Whether to compute additional features like distance to nucleus and component features.
Execution Parameters
- n_jobs - Number of parallel jobs. -1 uses all available CPUs.
- Print progress information.

Returns: A Result NamedTuple containing all pipeline outputs.
Example:
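A hedged usage sketch follows. The import path is hypothetical (the module's name is not given above), only parameters documented in this section are shown, and the argument values are illustrative rather than defaults.

```python
# Hypothetical import path -- the actual module name is not documented here.
from hks_pipeline import chunked_hks_pipeline

# mesh is a (vertices, faces) tuple, e.g. loaded from a mesh file.
result = chunked_hks_pipeline(
    mesh,
    query_indices=None,             # compute HKS for all vertices
    simplify_target_reduction=0.7,  # remove 70% of faces before HKS
    max_vertex_threshold=50_000,    # illustrative chunk-size limit
    n_jobs=-1,                      # use all available CPUs
)
```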
condensed_hks_pipeline
Compute condensed HKS features, where agglomeration happens within each chunk before stitching. Parameters match chunked_hks_pipeline. The key difference is that this function:
- Computes HKS and agglomeration per-chunk
- Stitches domain labels across chunks
- Returns condensed features computed within each chunk
Returns: A CondensedHKSResult NamedTuple.
Example:
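A hedged usage sketch; the import path is hypothetical and the argument values are illustrative.

```python
# Hypothetical import path -- the actual module name is not documented here.
from hks_pipeline import condensed_hks_pipeline

result = condensed_hks_pipeline(
    mesh,                           # (vertices, faces) tuple
    simplify_target_reduction=0.7,
    n_jobs=-1,
)
# result is a CondensedHKSResult with domain labels, condensed HKS
# features, and node/edge tables for the domain graph.
```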
compute_condensed_hks
Lower-level function to compute HKS and agglomerate a single mesh, without chunking.
- Input mesh as (vertices, faces).
- Other parameters are as described for chunked_hks_pipeline.
Returns: A tuple of (condensed_features, labels) where:
- condensed_features: DataFrame of aggregated HKS features per domain
- labels: Array of domain labels for each vertex
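The aggregation behind condensed_features can be sketched as an area-weighted mean per domain, an assumption consistent with the Feature Aggregation step described under Pipeline Steps below.

```python
import numpy as np

# Sketch of per-domain aggregation: an area-weighted mean of
# per-vertex HKS features, grouped by domain label.
hks = np.array([[1.0, 2.0],
                [3.0, 4.0],
                [5.0, 6.0]])       # per-vertex HKS features
areas = np.array([1.0, 1.0, 2.0])  # per-vertex surface areas
labels = np.array([0, 0, 1])       # domain label per vertex

total_area = np.bincount(labels, weights=areas)
domain_features = np.stack([
    np.bincount(labels, weights=areas * hks[:, j]) / total_area
    for j in range(hks.shape[1])
], axis=1)  # shape: (n_domains, n_scales)
```

Area weighting keeps large, flat regions from being swamped by many tiny triangles in highly curved areas.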
Pipeline Steps
The HKS pipeline implements the following steps:
- Mesh Simplification - Uses fast-simplification to reduce mesh complexity
- Mesh Splitting - Spectral bisection recursively splits the mesh until all chunks are below the vertex threshold, then grows overlaps
- HKS Computation - Implements the Heat Kernel Signature from Sun et al. (2008) using:
  - Robust Laplacian from Crane et al. (2020)
  - Band-by-band eigensolver from Vallet and Levy (2008)
- Agglomeration - Uses Ward’s method with connectivity constraints to group vertices into domains
- Feature Aggregation - Computes area-weighted mean features for each domain
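One spectral bisection step from the splitting stage above can be sketched with the Fiedler vector of a vertex-graph Laplacian; the path graph below is an assumption made to keep the example self-contained.

```python
import numpy as np

# Spectral bisection: split a vertex graph at the sign of the Fiedler
# vector (the Laplacian eigenvector with the second-smallest
# eigenvalue). The pipeline recurses until every chunk is under the
# vertex threshold, then grows geodesic overlaps between chunks.
n = 10
A = np.eye(n, k=1) + np.eye(n, k=-1)  # adjacency of a path graph
L = np.diag(A.sum(axis=1)) - A        # combinatorial graph Laplacian

evals, evecs = np.linalg.eigh(L)
fiedler = evecs[:, 1]  # eigenvector for the second-smallest eigenvalue
part = fiedler >= 0    # boolean chunk assignment for each vertex
```

For a path graph the Fiedler vector is monotone, so the cut lands at the middle, yielding two contiguous halves; on a mesh it similarly tends to cut across the "thinnest" part of the shape.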
Performance Tips
- Use query_indices to focus computation on regions of interest
- Increase simplify_target_reduction for faster processing of very large meshes
- Adjust max_vertex_threshold based on available memory
- Use n_jobs=-1 to leverage all CPU cores
- Set decomposition_dtype=np.float32 for faster computation with slightly lower precision
References
- Sun et al. (2008) - Heat Kernel Signature
- Crane et al. (2020) - Robust Laplacian
- Vallet and Levy (2008) - Band-by-band eigensolver