Formulating and answering logical queries is a standard communication interface for knowledge graphs (KGs). Alleviating the notorious incompleteness of real-world KGs, neural methods have achieved impressive results in link prediction and complex query answering tasks by learning representations of entities, relations, and queries. Still, most existing query answering methods rely on transductive entity embeddings and cannot generalize to KGs containing new entities without retraining those embeddings. In this work, we study the inductive query answering task, where inference is performed on a graph containing new entities and queries may involve both seen and unseen entities. To this end, we devise two mechanisms leveraging inductive node and relational structure representations powered by graph neural networks (GNNs). Experimentally, we show that inductive models are able to perform logical reasoning at inference time over unseen nodes, generalizing to graphs up to 500% larger than the training ones. Exploring the efficiency--effectiveness trade-off, we find that the inductive relational structure representation method generally achieves higher performance, while the inductive node representation method is able to answer complex queries in the inference-only regime without any training on queries and scales to graphs with millions of nodes. Code is available at https://github.com/DeepGraphLearning/InductiveQE.
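To make the "no entity embeddings" idea concrete, below is a minimal, illustrative sketch (not the authors' code) of an inductive relational message-passing layer in PyTorch: all learnable parameters live on relation types, and node states are derived from the query and the graph structure alone, so entities unseen at training time can still be encoded at inference time. The class and variable names here are hypothetical.

```python
import torch
import torch.nn as nn


class InductiveRelConv(nn.Module):
    """One relation-conditioned message-passing layer with no per-entity parameters."""

    def __init__(self, num_relations: int, dim: int):
        super().__init__()
        self.rel_emb = nn.Embedding(num_relations, dim)  # parameters per relation, not per entity
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, node_state, edge_index, edge_type):
        # node_state: [num_nodes, dim], initialized from the query (e.g. anchor indicator),
        #             not looked up from a trained entity table
        # edge_index: [2, num_edges] with (source, target) rows; edge_type: [num_edges]
        src, dst = edge_index
        msg = node_state[src] * self.rel_emb(edge_type)              # relation-conditioned messages
        agg = torch.zeros_like(node_state).index_add_(0, dst, msg)   # sum-aggregate per target node
        return torch.relu(self.update(torch.cat([node_state, agg], dim=-1)))


# Usage sketch: stack a few layers, mark the query's anchor entity in the initial
# node states, and read out answer scores from the final states.
num_nodes, dim = 5, 16
layer = InductiveRelConv(num_relations=3, dim=dim)
state = torch.zeros(num_nodes, dim)
state[0] = 1.0  # indicator for the anchor node of the query
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
edge_type = torch.tensor([0, 1, 2])
out = layer(state, edge_index, edge_type)
```

Because nothing in the layer is indexed by entity identity, the same trained weights can be applied to an inference graph that is much larger than the training graph, which is the property the abstract's 500% generalization claim relies on.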