We present LrcSSM, a $\textit{non-linear}$ recurrent model that processes long sequences as fast as today's linear state-space layers. By forcing the state-transition Jacobian to be diagonal, LrcSSM can solve the full sequence in parallel, giving $\mathcal{O}(TD)$ computational work and memory and only $\mathcal{O}(\log T)$ sequential depth, for input-sequence length $T$ and state dimension $D$. Moreover, LrcSSM offers a formal gradient-stability guarantee that other input-varying systems such as Liquid-S4 and Mamba do not provide. Importantly, the diagonal Jacobian structure incurs no performance loss relative to the original model with a dense Jacobian, and the approach generalizes to other non-linear recurrent models, demonstrating broader applicability. On a suite of long-range forecasting tasks, we demonstrate that LrcSSM outperforms Transformers, LRU, S5, and Mamba.
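To make the complexity claim concrete, the sketch below shows the parallel-scan primitive that a diagonal, input-varying recurrence reduces to. It is not the paper's implementation; it assumes an element-wise affine update $x_t = a_t \odot x_{t-1} + b_t$ (zero initial state), with names, shapes, and the use of \texttt{jax.lax.associative\_scan} chosen purely for illustration.

\begin{verbatim}
# Minimal sketch (not the paper's code): a diagonal, input-varying recurrence
#   x_t = a_t * x_{t-1} + b_t   (element-wise, zero initial state)
# solved with an associative scan, i.e. O(T D) work and O(log T) sequential depth.
import jax
import jax.numpy as jnp

def diagonal_scan(a, b):
    """a, b: (T, D) per-step diagonal coefficients and driving inputs."""
    def combine(left, right):
        a_l, b_l = left
        a_r, b_r = right
        # Compose two element-wise affine maps x -> a*x + b.
        return a_r * a_l, a_r * b_l + b_r
    _, x = jax.lax.associative_scan(combine, (a, b), axis=0)
    return x  # x[t] solves the recurrence up to step t

# Illustrative usage
T, D = 1024, 16
a = jax.nn.sigmoid(jax.random.normal(jax.random.PRNGKey(0), (T, D)))  # stable decay factors
b = jax.random.normal(jax.random.PRNGKey(1), (T, D))
x = diagonal_scan(a, b)
print(x.shape)  # (1024, 16)
\end{verbatim}

The scan is a standard prefix-composition of affine maps; the diagonal restriction is what keeps each composition $\mathcal{O}(D)$ rather than $\mathcal{O}(D^2)$ or worse.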