Commonsense question-answering (QA) methods combine the power of pre-trained Language Models (LMs) with the reasoning afforded by Knowledge Graphs (KGs). A typical approach collects nodes relevant to the QA pair from a KG to form a Working Graph (WG), then reasons over it using Graph Neural Networks (GNNs). This approach faces two major challenges: (i) it is difficult to capture all the information from the QA pair in the WG, and (ii) the WG contains nodes from the KG that are irrelevant to the QA pair. To address these, we propose GrapeQA, which makes two simple improvements to the WG: (i) Prominent Entities for Graph Augmentation identifies relevant text chunks from the QA pair and augments the WG with their latent representations from the LM, and (ii) Context-Aware Node Pruning removes nodes that are less relevant to the QA pair. We evaluate GrapeQA on OpenBookQA, CommonsenseQA and MedQA-USMLE, and find that it consistently improves over its LM + KG predecessor (QA-GNN in particular), with large gains on OpenBookQA.