Existing KG-augmented models for commonsense question answering primarily focus on designing elaborate Graph Neural Networks (GNNs) to model knowledge graphs (KGs). However, they ignore (i) effectively fusing and reasoning over the question context representations and the KG representations, and (ii) automatically selecting relevant nodes from the noisy KGs during reasoning. In this paper, we propose a novel model, JointLK, which addresses the above limitations through joint reasoning of the language model (LM) and the GNN, together with a dynamic KG pruning mechanism. Specifically, JointLK performs joint reasoning between the LM and the GNN through a novel dense bidirectional attention module, in which each question token attends to KG nodes and each KG node attends to question tokens, and the representations of the two modalities are fused and updated mutually via multi-step interactions. Then, the dynamic pruning module uses the attention weights generated by joint reasoning to recursively prune irrelevant KG nodes. We evaluate JointLK on the CommonsenseQA and OpenBookQA datasets, and demonstrate its improvements over existing LM and LM+KG models, as well as its capability to perform interpretable reasoning.
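To make the two mechanisms concrete, the following is a minimal PyTorch-style sketch of one joint reasoning step followed by attention-based node pruning. This is our illustration of the idea described above, not the authors' released implementation; the function name `joint_attention_step`, the bilinear map `w_affinity`, and the `keep_ratio` parameter are all hypothetical names introduced here for exposition.

```python
import torch
import torch.nn.functional as F

def joint_attention_step(tokens, nodes, w_affinity, keep_ratio=0.7):
    """One joint reasoning + pruning step (illustrative sketch only).

    tokens:     (n_tok, d)  LM token representations of the question context.
    nodes:      (n_node, d) GNN node representations of the retrieved KG.
    w_affinity: (d, d)      learned bilinear map scoring token-node pairs.
    keep_ratio: fraction of KG nodes retained after pruning (assumed here).
    """
    # Dense bidirectional affinity: every token is scored against every node.
    scores = tokens @ w_affinity @ nodes.T            # (n_tok, n_node)

    # Each question token attends to the KG nodes ...
    tok2node = F.softmax(scores, dim=1)               # rows sum to 1
    # ... and each KG node attends to the question tokens.
    node2tok = F.softmax(scores, dim=0)               # columns sum to 1

    # Fuse: each modality is updated with context gathered from the other.
    tokens_new = tokens + tok2node @ nodes            # (n_tok, d)
    nodes_new = nodes + node2tok.T @ tokens           # (n_node, d)

    # Dynamic pruning: score each node by the total attention it received
    # from the question tokens and keep only the top keep_ratio fraction.
    node_scores = tok2node.sum(dim=0)                 # (n_node,)
    k = max(1, int(keep_ratio * nodes_new.size(0)))
    keep = node_scores.topk(k).indices
    return tokens_new, nodes_new[keep], keep
```

Applied over multiple steps, each call shrinks the node set, so the graph is pruned recursively while the token and node representations continue to interact, matching the multi-step fusion and recursive pruning described in the abstract.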