Knowledge graph embedding models have gained significant attention in AI research. Recent work has shown that including background knowledge, such as logical rules, can improve the performance of embeddings in downstream machine learning tasks. However, most existing models do not allow the inclusion of such rules. We address this challenge and present a new neural-based embedding model (LogicENN). We prove that LogicENN can learn every ground truth of the encoded rules in a knowledge graph. To the best of our knowledge, this has not been proved so far for the neural-based family of embedding models. Moreover, we derive formulae for the inclusion of various rules, including (anti-)symmetric, inverse, irreflexive, transitive, implication, composition, equivalence and negation rules. Our formulation allows us to avoid grounding for the implication and equivalence rules. Our experiments show that LogicENN outperforms state-of-the-art models in link prediction.
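To make the general idea concrete, below is a minimal, hypothetical PyTorch sketch of a neural triple scorer with a grounding-free implication penalty. The network shape, the non-negative feature map, the relation-weight scoring, and the penalty formula are illustrative assumptions for exposition; the actual LogicENN architecture and its rule formulae are derived in the paper itself.

```python
import torch
import torch.nn as nn

class NeuralTripleScorer(nn.Module):
    """Illustrative neural scorer for (head, relation, tail) triples.

    An entity pair (h, t) is mapped by a small feed-forward network to a
    non-negative feature vector phi(h, t); the score of (h, r, t) is the
    dot product of phi(h, t) with a relation-specific weight vector w_r.
    This only mirrors the general neural-based family of scorers; the
    concrete LogicENN model may differ.
    """

    def __init__(self, num_entities, num_relations, dim=100, hidden=200):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, hidden)  # one weight vector per relation
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden),
            nn.ReLU(),  # keeps phi(h, t) element-wise non-negative
        )

    def forward(self, h, r, t):
        # Non-negative joint representation of the entity pair (h, t).
        phi = self.net(torch.cat([self.ent(h), self.ent(t)], dim=-1))
        # Relation-specific score: dot product with the relation weights.
        return (phi * self.rel(r)).sum(dim=-1)

    def implication_penalty(self, r_body, r_head):
        """Grounding-free soft penalty for r_body(x, y) -> r_head(x, y).

        Because phi(h, t) is non-negative, forcing w_body <= w_head
        element-wise guarantees score(h, r_body, t) <= score(h, r_head, t)
        for every entity pair, so the rule never has to be grounded over
        entities. Violations of that element-wise constraint are penalized.
        """
        w_body = self.rel(torch.tensor([r_body]))
        w_head = self.rel(torch.tensor([r_head]))
        return torch.relu(w_body - w_head).sum()


if __name__ == "__main__":
    # Hypothetical usage: relation 3 (e.g. bornIn) implies relation 4 (e.g. livedIn).
    model = NeuralTripleScorer(num_entities=1000, num_relations=20)
    h, r, t = torch.tensor([0]), torch.tensor([3]), torch.tensor([5])
    loss = -model(h, r, t).mean() + 0.1 * model.implication_penalty(3, 4)
    print(float(loss))
```

The design point this sketch illustrates is that a rule over relations can be pushed onto the relation parameters alone, so the regularizer's cost does not grow with the number of entity pairs.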