Knowledge graph embedding (KGE) models learn to project symbolic entities and relations into a continuous vector space based on the observed triplets. However, existing KGE models fail to make a proper trade-off between exploiting graph context and controlling model complexity, and thus remain far from satisfactory. In this paper, we propose a lightweight framework named LightCAKE for context-aware KGE. LightCAKE explicitly models the graph context without introducing redundant trainable parameters, and uses an iterative aggregation strategy to integrate the context information into the entity/relation embeddings. As a generic framework, it can be combined with many simple KGE models to achieve excellent results. Finally, extensive experiments on public benchmarks demonstrate the efficiency and effectiveness of our framework.
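To make the idea of parameter-free iterative context aggregation concrete, the sketch below shows one plausible reading of the abstract: each entity embedding is repeatedly mixed with the mean of its neighbors' entity-plus-relation context vectors, without adding any trainable weights beyond the base embeddings. The function name, the averaging scheme, and the treatment of relations here are our own assumptions for illustration, not the paper's actual LightCAKE formulation.

```python
import numpy as np

def aggregate_context(ent_emb, rel_emb, triples, num_iters=2):
    """Hypothetical sketch of parameter-free iterative context aggregation.

    ent_emb: (num_entities, dim) base entity embeddings
    rel_emb: (num_relations, dim) base relation embeddings
    triples: list of (head, relation, tail) index tuples
    Each iteration replaces every entity embedding with the average of
    itself and its neighbors' (entity + relation) context vectors.
    No new trainable parameters are introduced.
    """
    emb = ent_emb.copy()
    for _ in range(num_iters):
        ctx = np.zeros_like(emb)
        counts = np.ones(len(emb))  # count the entity itself once
        for h, r, t in triples:
            # tail + relation forms the context seen from the head, and vice versa
            ctx[h] += emb[t] + rel_emb[r]
            counts[h] += 1
            ctx[t] += emb[h] + rel_emb[r]
            counts[t] += 1
        emb = (emb + ctx) / counts[:, None]
    return emb
```

In this reading, the aggregated embeddings would then be fed to an off-the-shelf scoring function (e.g. a TransE- or DistMult-style scorer), which is consistent with the abstract's claim that the framework plugs into many simple KGE models.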