Linear RNNs for PyTorch
A unified library providing state-of-the-art Linear RNN architectures including S4, S5, LRU, Mamba, and more — optimized for sequence modeling with custom CUDA kernels.
Quick start
Get up and running with lrnnx in minutes
Install the library
Import and instantiate a model
Explore by model type
Choose the architecture that fits your use case
Linear Time-Invariant (LTI)
Linear Time-Varying (LTV)
Language models
U-Net and classifiers
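The LTI/LTV split above can be illustrated with a minimal scalar recurrence in plain Python. This is a sketch of the math these model families share, not lrnnx's API: LTI models (S4, S4D, S5, LRU) use a fixed state transition, while LTV ("selective") models such as Mamba make the transition a function of the current input. The input-dependent gate below is a toy assumption for illustration.

```python
def lti_scan(xs, a=0.9, b=0.1):
    """Linear Time-Invariant recurrence: h_t = a*h_{t-1} + b*x_t,
    with a fixed transition coefficient a (as in S4/S4D/S5/LRU)."""
    h, hs = 0.0, []
    for x in xs:
        h = a * h + b * x
        hs.append(h)
    return hs

def ltv_scan(xs):
    """Linear Time-Varying (selective) recurrence: the transition a_t and
    input weight b_t depend on x_t (as in Mamba-style selection)."""
    h, hs = 0.0, []
    for x in xs:
        a_t = 1.0 / (1.0 + abs(x))  # toy input-dependent gate, an assumption
        b_t = 1.0 - a_t
        h = a_t * h + b_t * x
        hs.append(h)
    return hs
```

Because both updates are linear in the hidden state `h`, they avoid the nonlinear state feedback of classical RNNs, which is what makes the parallel training algorithms below possible.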
Key features
Everything you need for modern sequence modeling
10+ architectures
Unified implementations of S4, S4D, S5, LRU, Centaurus, Mamba, RG-LRU, S7, and more.
Custom CUDA kernels
Optimized forward and backward kernels for selective scan, simplified scan, and S4 operations.
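The scan kernels above exploit the fact that the linear recurrence h_t = a_t*h_{t-1} + b_t has an associative composition rule, so prefixes can be combined in any grouping and the whole sequence computed in O(log T) parallel steps. A pure-Python sketch of the idea follows (the actual CUDA kernels are far more involved; this only demonstrates the associative operator):

```python
def combine(p, q):
    """Compose two recurrence segments (a, b): applying p then q maps
    h -> a_q*(a_p*h + b_p) + b_q = (a_q*a_p)*h + (a_q*b_p + b_q)."""
    a_p, b_p = p
    a_q, b_q = q
    return (a_q * a_p, a_q * b_p + b_q)

def associative_scan(pairs):
    """Inclusive prefix scan under `combine`. Written as a sequential fold
    here, but because `combine` is associative the same prefixes can be
    built with a parallel tree reduction on the GPU."""
    out, acc = [], None
    for p in pairs:
        acc = p if acc is None else combine(acc, p)
        out.append(acc)
    return out

# Check against the naive sequential recurrence with h_0 = 0.
import random
random.seed(0)
pairs = [(random.uniform(0.5, 1.0), random.uniform(-1.0, 1.0)) for _ in range(8)]
h, naive = 0.0, []
for a, b in pairs:
    h = a * h + b
    naive.append(h)
# With h_0 = 0, each composed pair (A_t, B_t) gives h_t = A_t*0 + B_t = B_t.
scanned = [b for (_, b) in associative_scan(pairs)]
assert all(abs(u - v) < 1e-12 for u, v in zip(naive, scanned))
```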
Multiple API levels
Access scan operations, recurrent steps, or full layer implementations matching the original papers.
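To illustrate the relationship between these levels (plain Python, not lrnnx's actual signatures): the lowest level is a single recurrent step, and a sequence-level scan is the same step folded over the whole input, so step-by-step decoding must reproduce the full-sequence result exactly.

```python
def step(h, x, a=0.8, b=0.2):
    """Lowest level: one recurrent update h_t = a*h_{t-1} + b*x_t,
    as used during autoregressive inference."""
    return a * h + b * x

def scan(xs, h0=0.0, a=0.8, b=0.2):
    """Sequence level: fold the step over the whole input,
    the training-time path."""
    h, hs = h0, []
    for x in xs:
        h = step(h, x, a, b)
        hs.append(h)
    return hs

# Decoding one token at a time matches the full-sequence scan.
xs = [0.5, -1.0, 2.0]
h, decoded = 0.0, []
for x in xs:
    h = step(h, x)
    decoded.append(h)
assert decoded == scan(xs)
```

A full layer implementation would wrap such a scan with input/output projections and nonlinear mixing, matching the original papers.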
Fast inference
CUDA Graphs-based autoregressive generation with a 10x speedup over naive implementations.
Pre-built architectures
Language models, U-Nets, and hierarchical classifiers ready to use out of the box.
Research-backed
Accepted to EACL 2026 Student Research Workshop with comprehensive benchmarks and evaluation.
Resources
Learn more about lrnnx and Linear RNNs
Technical report
Read the EACL 2026 paper describing lrnnx’s architecture, benchmarks, and design decisions.
Read on arXiv
GitHub repository
View the source code, report issues, and contribute to the project.
View on GitHub
Tutorials
Step-by-step guides for audio denoising with U-Net and hierarchical classification.
Browse tutorials
Custom kernels
Learn about the optimized CUDA kernels that power lrnnx’s performance.
Explore kernels
Ready to get started?
Install lrnnx and start building with state-of-the-art Linear RNN architectures. Check out the quickstart guide to run your first model in minutes.
View quickstart guide