This paper studies the alignment of knowledge graphs from different sources or languages. Most existing methods train supervised models for alignment, which usually require a large number of aligned knowledge triplets. However, such aligned triplets may be unavailable or expensive to obtain in many domains. Therefore, in this paper we propose to align knowledge graphs in a fully unsupervised or weakly supervised fashion, i.e., with no or only a few aligned triplets. We propose an unsupervised framework that aligns the entity and relation embeddings of different knowledge graphs through adversarial learning. Moreover, a regularization term that maximizes the mutual information between the embeddings of the different knowledge graphs is used to mitigate mode collapse when learning the alignment functions. The framework can be seamlessly integrated with existing supervised methods by using a limited number of aligned triplets as guidance. Experimental results on multiple datasets demonstrate the effectiveness of the proposed approach in both the unsupervised and the weakly supervised settings.
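To make the adversarial alignment idea concrete, the sketch below shows a minimal, hypothetical NumPy implementation: a linear map `G` transforms source-graph embeddings into the target embedding space, while a logistic discriminator tries to distinguish mapped source embeddings from real target embeddings; the two are updated in alternation. This is only an illustrative toy (the function and variable names are assumptions, and it omits the paper's mutual-information regularizer and relation embeddings), not the actual proposed method.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adversarial_align(src, tgt, steps=200, lr=0.05, seed=0):
    """Learn a linear map G from source to target embedding space.

    A logistic discriminator (w, b) is trained to output 1 for target
    embeddings and 0 for mapped source embeddings; G is then updated
    to fool the discriminator. Returns the learned mapping matrix G.
    """
    rng = np.random.default_rng(seed)
    d = src.shape[1]
    G = np.eye(d)                       # generator: linear alignment map
    w = rng.normal(scale=0.1, size=d)   # discriminator weights
    b = 0.0                             # discriminator bias
    for _ in range(steps):
        mapped = src @ G
        # Discriminator step: minimize logistic loss with labels
        # 0 for mapped source rows and 1 for target rows.
        p_m = sigmoid(mapped @ w + b)
        p_t = sigmoid(tgt @ w + b)
        grad_w = mapped.T @ p_m / len(src) + tgt.T @ (p_t - 1) / len(tgt)
        grad_b = p_m.mean() + (p_t - 1).mean()
        w -= lr * grad_w
        b -= lr * grad_b
        # Generator step: update G so mapped embeddings are scored
        # like target embeddings (label 1).
        p_m = sigmoid(src @ G @ w + b)
        grad_G = src.T @ np.outer(p_m - 1, w) / len(src)
        G -= lr * grad_G
    return G

# Toy usage: the target embeddings are a hidden rotation of the source.
rng = np.random.default_rng(1)
src = rng.normal(size=(100, 8))
R, _ = np.linalg.qr(rng.normal(size=(8, 8)))   # unknown ground-truth rotation
tgt = src @ R
G = adversarial_align(src, tgt)
```

In this sketch the generator is an unconstrained linear map; adversarial alignment alone can converge to degenerate solutions (mode collapse), which is exactly the failure case the paper's mutual-information regularization is designed to mitigate.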