Emotion Recognition in Conversation (ERC) plays a significant role in Human-Computer Interaction (HCI) systems because it enables empathetic services. Multimodal ERC can mitigate the drawbacks of uni-modal approaches. Recently, Graph Neural Networks (GNNs) have been widely used in a variety of fields due to their superior performance in relation modeling. In multimodal ERC, GNNs are capable of extracting both long-distance contextual information and inter-modal interactive information. Unfortunately, existing methods such as MMGCN fuse multiple modalities directly, which can introduce redundant information and discard complementary, modality-specific information. In this work, we present a directed-graph-based Cross-modal Feature Complementation (GraphCFC) module that can efficiently model contextual and interactive information. GraphCFC alleviates the heterogeneity-gap problem in multimodal fusion by utilizing multiple subspace extractors and a Pair-wise Cross-modal Complementary (PairCC) strategy. We extract various types of edges from the constructed graph for encoding, so that GNNs can extract crucial contextual and interactive information more accurately during message passing. Furthermore, we design a GNN structure called GAT-MLP, which provides a new unified network framework for multimodal learning. Experimental results on two benchmark datasets show that our GraphCFC outperforms state-of-the-art (SOTA) approaches.
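To make the GAT-MLP idea concrete, the sketch below shows one plausible reading of such a block: a graph-attention sub-layer for message passing over the conversation graph, followed by a position-wise MLP, each wrapped with a residual connection and layer normalization. This is a minimal illustration, not the authors' implementation; the class name `GATMLPBlock`, the use of PyTorch Geometric's `GATConv`, and all dimensions are assumptions made for the example.

```python
# A minimal sketch (assumed, not the authors' code) of a GAT-MLP style block:
# graph attention for neighborhood aggregation, then a feed-forward MLP,
# each with a residual connection and layer normalization.
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv


class GATMLPBlock(nn.Module):
    def __init__(self, dim: int, heads: int = 4, mlp_ratio: int = 4, dropout: float = 0.1):
        super().__init__()
        # Multi-head graph attention; concat=False averages the heads so the
        # output dimension matches the input and the residual can be added.
        self.gat = GATConv(dim, dim, heads=heads, concat=False, dropout=dropout)
        self.norm1 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, mlp_ratio * dim),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(mlp_ratio * dim, dim),
        )
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # x: [num_nodes, dim] node features (e.g., one node per modality per utterance)
        # edge_index: [2, num_edges] directed edges of the conversation graph
        x = self.norm1(x + self.gat(x, edge_index))  # attention sub-layer + residual
        x = self.norm2(x + self.mlp(x))              # feed-forward sub-layer + residual
        return x


if __name__ == "__main__":
    # Toy usage: 10 utterances x 3 modalities = 30 nodes with random placeholder edges.
    x = torch.randn(30, 128)
    edge_index = torch.randint(0, 30, (2, 200))
    block = GATMLPBlock(dim=128)
    print(block(x, edge_index).shape)  # torch.Size([30, 128])
```

In practice such blocks could be stacked over the cross-modal conversation graph, with the different edge types mentioned in the abstract handled by separate edge sets or edge-type-aware attention; those details are left out of this sketch.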