In dialogue systems, dialogue act recognition and sentiment classification are two correlated tasks for capturing a speaker's intentions: dialogue acts capture explicit intentions, while sentiments express implicit ones. Contextual information and mutual interaction information are the two key factors for these tasks, yet existing methods fail to consider both simultaneously. To address this problem, we propose a Co-Interactive Graph Attention Network (Co-GAT) to jointly model the two tasks. The core module is our proposed co-interactive graph interaction layer, which builds cross-utterance connections and cross-task connections within a unified graph network. Our model achieves state-of-the-art performance on two public datasets. In addition, we find that the contributions of contextual and mutual interaction information do not fully overlap with those of pre-trained models, and our approach yields consistent gains on top of multiple pre-trained models (BERT, RoBERTa, XLNet).
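To make the idea of a unified graph with cross-utterance and cross-task connections concrete, here is a minimal sketch of a generic graph attention update over such a graph, not the paper's actual Co-GAT implementation. It assumes each utterance contributes one dialogue-act node and one sentiment node; cross-utterance edges link nodes of the same task, and cross-task edges link the two nodes of the same utterance. All shapes and weights are illustrative.

```python
import numpy as np

def masked_softmax(x, mask):
    """Softmax over the last axis, restricted to positions where mask is True."""
    x = np.where(mask, x, -1e9)
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def graph_attention(H, A, W, a):
    """One GAT-style layer: H (n, d) node features, A (n, n) adjacency mask."""
    Z = H @ W                                  # project node features
    n = Z.shape[0]
    scores = np.zeros((n, n))
    for i in range(n):                         # e_ij = LeakyReLU(a^T [z_i ; z_j])
        for j in range(n):
            s = np.concatenate([Z[i], Z[j]]) @ a
            scores[i, j] = s if s > 0 else 0.2 * s
    alpha = masked_softmax(scores, A.astype(bool))
    return alpha @ Z                           # attention-weighted aggregation

# Toy dialogue with 3 utterances -> 3 dialogue-act nodes + 3 sentiment nodes.
n_utt, d = 3, 4
rng = np.random.default_rng(0)
H = rng.standard_normal((2 * n_utt, d))
A = np.zeros((2 * n_utt, 2 * n_utt), dtype=int)
for i in range(n_utt):
    for j in range(n_utt):
        A[i, j] = 1                            # cross-utterance edges (act nodes)
        A[n_utt + i, n_utt + j] = 1            # cross-utterance edges (sentiment nodes)
    A[i, n_utt + i] = A[n_utt + i, i] = 1      # cross-task edges (same utterance)

W = rng.standard_normal((d, d))
a = rng.standard_normal(2 * d)
out = graph_attention(H, A, W, a)
print(out.shape)                               # (6, 4): updated node features
```

Stacking such layers lets dialogue-act and sentiment representations refine each other while also propagating context across utterances, which is the intuition behind the co-interactive layer described above.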
Automated predictions require explanations that humans can interpret. One type of explanation is a rationale, i.e., a selection of input features, such as relevant text snippets, from which the model computes the outcome. However, a single overall selection does not provide a complete explanation, e.g., when a decision weighs several aspects. To this end, we present a novel self-interpretable model called ConRAT. Inspired by how human explanations for high-level decisions are often based on key concepts, ConRAT extracts a set of text snippets as concepts and infers which ones are described in the document. It then explains the outcome with a linear aggregation of concepts. Two regularizers drive ConRAT to build interpretable concepts. In addition, we propose two techniques to further boost rationale and predictive performance. Experiments on both single- and multi-aspect sentiment classification tasks show that ConRAT is the first to generate concepts that align with human rationalization while using only the overall label. Further, it outperforms state-of-the-art methods trained on each aspect label independently.
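The "linear aggregation of concepts" can be illustrated with a small numeric sketch. This is a generic illustration under assumed numbers, not ConRAT's trained model: each concept has an inferred presence probability and a learned weight, and per-concept contributions to the final score serve as the rationale.

```python
import numpy as np

# Hypothetical concept presence probabilities inferred from a review,
# e.g. for concepts like "service", "price", "food" (illustrative only).
p = np.array([0.9, 0.1, 0.7])
# Hypothetical learned weights tying each concept to the overall label.
w = np.array([1.5, -2.0, 0.8])
b = -0.4                                  # bias term

logit = p @ w + b                         # linear aggregation of concepts
prob = 1.0 / (1.0 + np.exp(-logit))      # overall sentiment probability
contributions = p * w                     # per-concept contribution -> rationale

print(prob)           # overall prediction
print(contributions)  # signed contribution of each concept
```

Because the prediction is a linear combination of concept scores, each concept's contribution can be read off directly, which is what makes the aggregation step self-interpretable.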