Recently, it has been argued that encoder-decoder models can be made more interpretable by replacing the softmax function in the attention with its sparse variants. In this work, we introduce a novel, simple method for achieving sparsity in attention: we replace the softmax activation with a ReLU, and show that sparsity naturally emerges from such a formulation. Training stability is achieved via layer normalization with either a specialized initialization or an additional gating function. Our model, which we call Rectified Linear Attention (ReLA), is easy to implement and more efficient than previously proposed sparse attention mechanisms. We apply ReLA to the Transformer and conduct experiments on five machine translation tasks. ReLA achieves translation performance comparable to several strong baselines, with training and decoding speed similar to that of the vanilla attention. Our analysis shows that ReLA delivers a high sparsity rate and head diversity, and the induced cross attention achieves better accuracy with respect to source-target word alignment than recent sparsified softmax-based models. Intriguingly, ReLA heads also learn to attend to nothing (i.e., 'switch off') for some queries, which is not possible with sparsified softmax alternatives.
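To make the core idea concrete, the sketch below illustrates the mechanism described above under stated assumptions: a standard scaled dot-product attention in which the softmax is replaced by a ReLU, with a normalization of the attended output for training stability. The specific module structure, tensor shapes, and the use of LayerNorm (rather than the gated variant also mentioned above) are illustrative assumptions, not the paper's exact implementation.

```python
# A minimal sketch of Rectified Linear Attention (ReLA), assuming the standard
# scaled dot-product form. The only changes to vanilla attention shown here are
# (1) softmax -> ReLU on the attention scores and (2) normalization of the
# attended output for stability; details such as multi-head projections and the
# gated variant are omitted.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReLAttention(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.scale = 1.0 / math.sqrt(d_model)
        # Normalizing the attention output stabilizes training, since ReLU
        # scores are unbounded (unlike softmax probabilities, which sum to 1).
        self.norm = nn.LayerNorm(d_model)

    def forward(self, q, k, v, mask=None):
        # q: (batch, tgt_len, d_model); k, v: (batch, src_len, d_model)
        scores = torch.bmm(q, k.transpose(1, 2)) * self.scale
        if mask is not None:
            # Masked positions are set to a non-positive score, so ReLU zeroes them out.
            scores = scores.masked_fill(mask, 0.0)
        # ReLU instead of softmax: negative scores become exactly zero, so
        # sparsity emerges naturally, and a query may attend to nothing at all.
        weights = F.relu(scores)
        return self.norm(torch.bmm(weights, v))
```

Because ReLU does not force the weights to form a distribution, a row of attention weights can be entirely zero, which is what allows a head to 'switch off' for a given query.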