Graph neural networks (GNNs) have demonstrated state-of-the-art performance on knowledge graph tasks such as link prediction. However, interpreting GNN predictions remains a challenging open problem. While many GNN explainability methods have been proposed for node-level or graph-level tasks, approaches for explaining link predictions in heterogeneous settings remain limited. In this paper, we propose RAW-Explainer, a novel framework that generates connected, concise, and therefore interpretable subgraph explanations for link prediction. Our method leverages the heterogeneous information in knowledge graphs to identify, via a random walk objective, connected subgraphs that serve as patterns of factual explanation. Unlike existing methods tailored to knowledge graphs, our approach parameterizes the explanation generation process with a neural network, which significantly speeds up the production of collective explanations. Furthermore, RAW-Explainer addresses the distribution shift that arises when evaluating the quality of an explanatory subgraph that is orders of magnitude smaller than the full graph, by introducing a robust evaluator that generalizes to the subgraph distribution. Extensive quantitative results on real-world knowledge graph datasets demonstrate that our approach strikes a balance between explanation quality and computational efficiency.