Overview
Discretization functions convert continuous-time state space models to discrete-time representations. These functions are essential for implementing recurrent neural networks based on continuous-time dynamical systems.
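As a concrete illustration (a minimal sketch, not the lrnnx implementation): discretizing a scalar system x'(t) = A x(t) + B u(t) with Zero-Order Hold yields a linear recurrence that can be unrolled step by step like an RNN:

```python
import torch

# Minimal sketch (not the lrnnx implementation): discretize a scalar
# continuous-time system x'(t) = A*x(t) + B*u(t) with Zero-Order Hold,
# then unroll the resulting linear recurrence.
A = torch.tensor(-1.0)      # stable continuous-time state matrix
B = torch.tensor(1.0)       # input matrix
delta = torch.tensor(0.1)   # discretization step size

A_bar = torch.exp(delta * A)       # A_bar = exp(delta * A)
gamma_bar = (A_bar - 1.0) / A      # gamma_bar = A^{-1} (A_bar - I)

x = torch.tensor(0.0)
for _ in range(200):               # drive with constant input u = 1
    x = A_bar * x + gamma_bar * B * 1.0

# x approaches the continuous-time steady state -B/A = 1.0
```

Because ZOH is exact for piecewise-constant inputs, the unrolled recurrence converges to the same steady state as the underlying ODE.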
DISCRETIZE_FNS
DISCRETIZE_FNS: dict[str, Callable]
Dictionary mapping discretization method names to their corresponding functions. Used internally by the LRNN base class to select the appropriate discretization method.
Available methods:
"zoh" → zoh - Zero-Order Hold
"bilinear" → bilinear - Bilinear transform
"dirac" → dirac - Dirac discretization
"async" → async_ - Asynchronous discretization
"no_discretization" → no_discretization - Identity operation
Discretization Functions
zoh
zoh(A, delta, integration_timesteps=None) -> tuple[Tensor, Tensor]
Zero-Order Hold (ZOH) discretization, the most commonly used method across state space models.
Mathematical formulation:
\bar{A} = \exp(\Delta A)
\bar{\gamma} = A^{-1}(\bar{A} - I)
Reference: S4 Blog Post
Parameters:
A (Tensor): The continuous-time state matrix.
delta (Tensor): The discretization step size.
integration_timesteps: Not used in ZOH discretization. Defaults to None.
Returns:
A_bar (Tensor): The discretized system matrix.
gamma_bar (Tensor): The input normalizer, A^{-1}(\bar{A} - I).
bilinear
bilinear(A, delta, integration_timesteps=None) -> tuple[Tensor, Tensor]
Bilinear (Tustin) transform, popularized for deep state space models by S4.
Mathematical formulation:
\bar{A} = (I - \tfrac{\Delta}{2} A)^{-1} (I + \tfrac{\Delta}{2} A)
\bar{\gamma} = (I - \tfrac{\Delta}{2} A)^{-1} \Delta
Reference: S4 Blog Post
Parameters:
A (Tensor): Continuous-time system matrix, shape (N,). Only diagonal elements are used.
delta (Tensor): Time step for discretization.
integration_timesteps: Not used in bilinear discretization. Defaults to None.
Returns:
A_bar (Tensor): The discretized system matrix.
gamma_bar (Tensor): The input normalizer, (I - \tfrac{\Delta}{2} A)^{-1} \Delta.
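For diagonal A the bilinear transform is likewise elementwise. A sketch of the standard Tustin formulas as used by S4 (an illustration, not the lrnnx source):

```python
import torch

def bilinear_diag(A: torch.Tensor, delta: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Bilinear (Tustin) transform for a diagonal state matrix (sketch).

    A_bar = (I - delta/2 * A)^{-1} (I + delta/2 * A)
    gamma_bar = (I - delta/2 * A)^{-1} * delta
    """
    denom = 1.0 - 0.5 * delta * A
    A_bar = (1.0 + 0.5 * delta * A) / denom
    gamma_bar = delta / denom
    return A_bar, gamma_bar

A = torch.tensor([-1.0, -0.5])          # stable diagonal state matrix
A_bar, gamma_bar = bilinear_diag(A, torch.tensor(0.1))
```

Note the sign placement: for stable A (negative real parts) the transform maps eigenvalues inside the unit circle, so the discrete system stays stable.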
dirac
dirac(A, delta, integration_timesteps=None) -> tuple[Tensor, float]
Dirac discretization method.
Mathematical formulation:
\bar{A} = \exp(\Delta A)
\bar{\gamma} = 1.0
Reference: Event-SSM GitHub
Parameters:
A (Tensor): Continuous-time system matrix.
delta (Tensor): Time step for discretization.
integration_timesteps: Not used in dirac discretization. Defaults to None.
Returns:
A_bar (Tensor): The discretized system matrix.
gamma_bar (float): The input normalizer, fixed at 1.0.
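Dirac discretization decays the state by exp(ΔA) but injects inputs unscaled, treating each input as an impulse. A minimal sketch for diagonal A (illustration only, not the lrnnx source):

```python
import torch

def dirac_diag(A: torch.Tensor, delta: torch.Tensor) -> tuple[torch.Tensor, float]:
    # State decays by exp(delta * A); inputs enter unscaled (gamma_bar = 1.0),
    # i.e. each input is treated as a Dirac impulse. Sketch, not lrnnx source.
    return torch.exp(delta * A), 1.0

A_bar, gamma_bar = dirac_diag(torch.tensor([-1.0]), torch.tensor(0.5))
```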
async_
async_(A, delta, integration_timesteps) -> tuple[Tensor, Tensor]
Asynchronous discretization method for event-driven models. Introduced in arxiv:2404.18508 to provide a strong inductive bias for asynchronous event streams.
Mathematical formulation:
\bar{A} = \exp(\Delta \cdot \mathrm{integration\_timesteps} \cdot A)
\bar{\gamma} = A^{-1}(\exp(\Delta A) - I)
This method exists primarily for legacy reasons. It is not possible to use this method (or any discretization with async timesteps) with LTI models.
Parameters:
A (Tensor): Continuous-time system matrix.
delta (Tensor): Time step for discretization.
integration_timesteps (Tensor): Timesteps for async discretization, ideally of shape (B, L), representing the difference in timesteps between events.
Returns:
A_bar (Tensor): The discretized system matrix, one transition per event.
gamma_bar (Tensor): The input normalizer, A^{-1}(\exp(\Delta A) - I).
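With per-event gaps of shape (B, L), the state transition depends on the elapsed time between events, while γ̄ matches the ZOH input term. A hedged sketch for diagonal A (the broadcasting layout is an assumption for illustration, not the lrnnx source):

```python
import torch

def async_diag(A: torch.Tensor, delta: torch.Tensor,
               integration_timesteps: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    # A_bar = exp(delta * integration_timesteps * A), per event:
    # broadcasting (B, L, 1) gaps against the diagonal A of shape (N,)
    # yields a per-event transition of shape (B, L, N). Sketch only.
    A_bar = torch.exp(delta * integration_timesteps[..., None] * A)
    # gamma_bar = A^{-1} (exp(delta * A) - I), shared across events.
    gamma_bar = (torch.exp(delta * A) - 1.0) / A
    return A_bar, gamma_bar

A = torch.tensor([-1.0, -2.0])           # diagonal state matrix (N = 2)
gaps = torch.tensor([[1.0, 2.0, 0.5]])   # inter-event gaps, shape (B = 1, L = 3)
A_bar, gamma_bar = async_diag(A, torch.tensor(0.1), gaps)
```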
no_discretization
no_discretization(A, delta, integration_timesteps=None) -> tuple[Tensor, float]
Identity operation that performs no discretization.
Mathematical formulation:
\bar{A} = A
\bar{\gamma} = 1.0
Parameters:
A (Tensor): Continuous-time system matrix (returned unchanged).
delta: Time step for discretization (unused).
integration_timesteps: Not used in no_discretization. Defaults to None.
Returns:
A_bar (Tensor): The system matrix, returned unchanged.
gamma_bar (float): Fixed at 1.0, as B_bar = B.
Example Usage
import torch
from lrnnx.core.discretization import zoh, bilinear, DISCRETIZE_FNS
# Direct function call
A = torch.randn(64) # State matrix
delta = torch.tensor(0.01) # Time step
A_bar, gamma_bar = zoh(A, delta)
# Using the dictionary
discretize_fn = DISCRETIZE_FNS["bilinear"]
A_bar, gamma_bar = discretize_fn(A, delta)