We formulate long-context language modeling as a problem in continual learning rather than architecture design. Under this formulation, we use only a standard architecture -- a Transformer with sliding-window attention. However, our model continues learning at test time via next-token prediction on the given context, compressing the context it reads into its weights. In addition, we improve the model's initialization for learning at test time via meta-learning at training time. Overall, our method, a form of Test-Time Training (TTT), is End-to-End (E2E) both at test time (via next-token prediction) and at training time (via meta-learning), in contrast to previous forms of TTT. We conduct extensive experiments with a focus on scaling properties. In particular, for 3B-parameter models trained on 164B tokens, our method (TTT-E2E) scales with context length in the same way as a Transformer with full attention, while others, such as Mamba 2 and Gated DeltaNet, do not. At the same time, like RNNs, TTT-E2E has constant inference latency regardless of context length, making it 2.7 times faster than full attention at 128K context. Our code is publicly available.
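To make the test-time learning loop concrete, below is a minimal sketch, assuming a PyTorch causal language model whose forward pass returns next-token logits. The function name `ttt_ingest`, the chunking scheme, the optimizer, and all hyperparameters are illustrative assumptions rather than the paper's exact recipe; the meta-learned initialization would correspond to the checkpoint loaded into `model` before this loop runs.

```python
# Minimal sketch of test-time training (TTT) via next-token prediction.
# Assumptions (not from the paper): `model` is any PyTorch causal LM
# (e.g., a sliding-window-attention Transformer) returning logits of
# shape (B, T, vocab); the actual method may adapt only a subset of
# weights and use a different update rule.
import torch
import torch.nn.functional as F

def ttt_ingest(model, context_ids, chunk_len=2048, lr=1e-4, inner_steps=1):
    """Compress a long context into the model's weights by taking
    gradient steps on next-token prediction over successive chunks."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for start in range(0, context_ids.size(1) - 1, chunk_len):
        # Take chunk_len + 1 tokens so inputs and targets align.
        chunk = context_ids[:, start:start + chunk_len + 1]
        inputs, targets = chunk[:, :-1], chunk[:, 1:]
        for _ in range(inner_steps):
            logits = model(inputs)                      # (B, T, vocab)
            loss = F.cross_entropy(
                logits.reshape(-1, logits.size(-1)),
                targets.reshape(-1),
            )
            opt.zero_grad()
            loss.backward()
            opt.step()                                  # context -> weights
    model.eval()
    return model  # adapted weights now encode the context
```

After such an ingestion pass, generation would proceed with only sliding-window attention: the long context lives in the adapted weights rather than in a growing key-value cache, which is why per-token inference latency stays constant with context length.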