Knowledge graph embedding aims to predict the missing relations between entities in knowledge graphs. Tensor-decomposition-based models, such as ComplEx, provide a good trade-off between efficiency and expressiveness, which is crucial because of the large size of real-world knowledge graphs. The recent multi-partition embedding interaction (MEI) model subsumes these models by using the block term tensor format and provides a systematic solution for the trade-off. However, MEI has several drawbacks, some of which are carried over from its subsumed tensor-decomposition-based models. In this paper, we address these drawbacks and introduce the Multi-partition Embedding Interaction iMproved beyond block term format (MEIM) model, with an independent core tensor for ensemble effects and soft orthogonality for max-rank mapping, in addition to multi-partition embedding. MEIM improves expressiveness while remaining highly efficient, helping it to outperform strong baselines and achieve state-of-the-art results on difficult link prediction benchmarks using fairly small embedding sizes. The source code is released at https://github.com/tranhungnghiep/MEIM-KGE.
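To make the mechanisms named in the abstract concrete, the following is a minimal sketch, not the authors' implementation, of a multi-partition interaction score with an independent core tensor per partition and a soft orthogonality penalty on the relation-specific mapping matrices. The partition count K, partition size C, the einsum axis ordering, and the function names are illustrative assumptions; the actual model is defined in the paper and the released code.

```python
# Hypothetical sketch of a block-term-style multi-partition score (assumed shapes/names).
import numpy as np

K, C = 3, 4                          # number of partitions, size of each partition (assumed)
rng = np.random.default_rng(0)

W = rng.normal(size=(K, C, C, C))    # one independent core tensor per partition
h = rng.normal(size=(K, C))          # head entity embedding, split into K partitions
r = rng.normal(size=(K, C))          # relation embedding, split into K partitions
t = rng.normal(size=(K, C))          # tail entity embedding, split into K partitions

def score(W, h, r, t):
    """Sum of per-partition Tucker-style trilinear products (block term format)."""
    s = 0.0
    for k in range(K):
        # relation-specific mapping matrix for partition k: contract core with r_k
        M_rk = np.einsum('acb,a->cb', W[k], r[k])   # assumed axis ordering
        s += h[k] @ M_rk @ t[k]
    return s

def soft_orthogonality_penalty(W, r):
    """Penalize deviation of each mapping matrix from orthogonality (M^T M close to I),
    a soft constraint encouraging a max-rank mapping."""
    penalty = 0.0
    for k in range(K):
        M_rk = np.einsum('acb,a->cb', W[k], r[k])
        penalty += np.sum((M_rk.T @ M_rk - np.eye(C)) ** 2)
    return penalty

print(score(W, h, r, t), soft_orthogonality_penalty(W, r))
```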