Distilling pretrained softmax attention Transformers into more efficient hybrid architectures that interleave softmax and linear attention layers is a promising approach for improving the inference efficiency of LLMs without requiring expensive pretraining from scratch. A critical factor in the conversion process is layer selection, i.e., deciding which layers to convert to linear attention variants. This paper describes a simple and efficient recipe for layer selection that uses layer importance scores derived from a small amount of training on generic text data. Once the layers have been selected, we use a recent pipeline for the distillation process itself \citep[RADLADS;][]{goldstein2025radlads}, which consists of attention weight transfer, hidden state alignment, and KL-based distribution matching, followed by a small amount of finetuning. We find that this approach is more effective than existing approaches to layer selection, including heuristics that uniformly interleave linear attention layers at a fixed ratio, as well as more involved approaches that rely on specialized diagnostic datasets.
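To make the layer-selection recipe concrete, the sketch below illustrates one way importance scores could be derived from a small amount of training on generic text: each softmax attention branch is scaled by a learnable gate, only the gates are briefly trained under a mild sparsity penalty, and the layers whose gates shrink the most are marked for conversion to linear attention. This is a minimal PyTorch sketch under assumed names (\texttt{GatedBlock}, \texttt{select\_layers\_to\_convert}) and an assumed gating parameterization; it is not the exact scoring procedure used in the paper.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedBlock(nn.Module):
    """Pre-norm Transformer block whose softmax-attention branch is scaled
    by a learnable gate (assumed parameterization, for illustration only)."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))
        # Gate logit; sigmoid(gate) scales the attention branch (starts near 1).
        self.gate = nn.Parameter(torch.tensor(2.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + torch.sigmoid(self.gate) * attn_out
        return x + self.mlp(self.norm2(x))


class ToyLM(nn.Module):
    """Tiny stand-in for a pretrained softmax attention Transformer LM."""

    def __init__(self, vocab=1000, d_model=64, n_heads=4, n_layers=8):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.blocks = nn.ModuleList(
            [GatedBlock(d_model, n_heads) for _ in range(n_layers)])
        self.head = nn.Linear(d_model, vocab)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        x = self.embed(tokens)
        for block in self.blocks:
            x = block(x)
        return self.head(x)


def select_layers_to_convert(model, batches, k, steps=100, sparsity=1e-2):
    """Briefly train only the gates on generic text; layers whose gates shrink
    the most are treated as least important and returned for conversion."""
    for p in model.parameters():
        p.requires_grad_(False)
    gates = [block.gate for block in model.blocks]
    for g in gates:
        g.requires_grad_(True)
    opt = torch.optim.Adam(gates, lr=1e-2)

    for _, (tokens, targets) in zip(range(steps), batches):
        logits = model(tokens)
        lm_loss = F.cross_entropy(logits.flatten(0, 1), targets.flatten())
        # Sparsity pressure: gates stay open only where softmax attention matters.
        gate_penalty = torch.stack([torch.sigmoid(g) for g in gates]).sum()
        loss = lm_loss + sparsity * gate_penalty
        opt.zero_grad()
        loss.backward()
        opt.step()

    scores = [torch.sigmoid(g).item() for g in gates]  # importance score per layer
    return sorted(range(len(scores)), key=lambda i: scores[i])[:k]


if __name__ == "__main__":
    model = ToyLM()

    def fake_batches():
        # Stand-in for small batches of generic text: random token ids.
        while True:
            toks = torch.randint(0, 1000, (4, 128))
            yield toks[:, :-1], toks[:, 1:]

    chosen = select_layers_to_convert(model, fake_batches(), k=4)
    print("layers selected for linear-attention conversion:", sorted(chosen))
\end{verbatim}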
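For the distillation stage itself we defer to RADLADS \citep{goldstein2025radlads}; purely to illustrate two of its components, the sketch below shows generic forms of a hidden state alignment loss and a KL-based distribution matching loss. The function names, the MSE/KL formulations, and the temperature parameter are assumptions for exposition; the exact objectives, weighting, and schedule follow \citet{goldstein2025radlads}.
\begin{verbatim}
import torch
import torch.nn.functional as F


def hidden_state_alignment_loss(student_hidden: torch.Tensor,
                                teacher_hidden: torch.Tensor) -> torch.Tensor:
    """Mean-squared error between student and teacher hidden states.
    Tensors: (batch, seq_len, d_model); typically one term per converted layer."""
    return F.mse_loss(student_hidden, teacher_hidden)


def distribution_matching_loss(student_logits: torch.Tensor,
                               teacher_logits: torch.Tensor,
                               temperature: float = 1.0) -> torch.Tensor:
    """Per-token KL divergence between teacher and student next-token
    distributions. Tensors: (batch, seq_len, vocab)."""
    s_logprobs = F.log_softmax(student_logits / temperature, dim=-1)
    t_logprobs = F.log_softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(s_logprobs.flatten(0, 1), t_logprobs.flatten(0, 1),
                    log_target=True, reduction="batchmean")
\end{verbatim}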