Attention mechanisms have seen wide adoption in neural NLP models. In addition to improving predictive performance, these are often touted as affording transparency: models equipped with attention provide a distribution over attended-to input units, and this is often presented (at least implicitly) as communicating the relative importance of inputs. However, it is unclear what relationship exists between attention weights and model outputs. In this work, we perform extensive experiments across a variety of NLP tasks that aim to assess the degree to which attention weights provide meaningful `explanations' for predictions. We find that they largely do not. For example, learned attention weights are frequently uncorrelated with gradient-based measures of feature importance, and one can identify very different attention distributions that nonetheless yield equivalent predictions. Our findings show that standard attention modules do not provide meaningful explanations and should not be treated as though they do. Code for all experiments is available at https://github.com/successar/AttentionExplanation.
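To make the comparison described above concrete, the following is a minimal sketch (not the authors' released implementation; see the repository linked above for that) of how one might compare a model's attention distribution against a gradient-based measure of feature importance for a single input. The tiny model, tensor names, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
from scipy.stats import kendalltau

class TinyAttentionClassifier(nn.Module):
    """Toy classifier with a simple additive-style attention over token embeddings."""
    def __init__(self, vocab_size=100, emb_dim=32, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.attn_scorer = nn.Linear(emb_dim, 1)
        self.out = nn.Linear(emb_dim, num_classes)

    def forward(self, token_ids):
        emb = self.embed(token_ids)                  # (seq_len, emb_dim)
        emb.retain_grad()                            # keep gradients w.r.t. embeddings
        scores = self.attn_scorer(emb).squeeze(-1)   # (seq_len,)
        attn = torch.softmax(scores, dim=-1)         # attention distribution over tokens
        context = attn @ emb                         # attention-weighted sum of embeddings
        logits = self.out(context)
        return logits, attn, emb

model = TinyAttentionClassifier()
token_ids = torch.randint(0, 100, (10,))             # one toy "sentence" of 10 tokens

logits, attn, emb = model(token_ids)
pred_class = logits.argmax()

# Gradient-based importance: norm of d(predicted logit)/d(embedding) per token.
logits[pred_class].backward()
grad_importance = emb.grad.norm(dim=-1)

# Rank correlation between the two candidate "explanations"; the paper reports
# that this correlation is frequently weak for trained models.
tau, _ = kendalltau(attn.detach().numpy(), grad_importance.numpy())
print(f"Kendall tau between attention and gradient importance: {tau:.3f}")
```

On a trained model, a low rank correlation of this kind is one piece of evidence that attention weights and gradient-based importance scores tell different stories about which inputs matter.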