Link prediction is an important and frequently studied task that contributes to an understanding of the structure of knowledge graphs (KGs) in statistical relational learning. Inspired by the success of graph convolutional networks (GCNs) in modeling graph data, we propose a unified GCN framework, named TransGCN, to address this task, in which relation and entity embeddings are learned simultaneously. To handle the heterogeneous relations in KGs, we introduce a novel way of representing a heterogeneous neighborhood by imposing transformation assumptions on the relationship between the subject, the relation, and the object of a triple. Specifically, a relation is treated as a transformation operator that transforms a head entity into a tail entity. Both the translation assumption of TransE and the rotation assumption of RotatE are explored in our framework. Additionally, instead of learning only entity embeddings in the convolution-based encoder and relation embeddings in the decoder, as done by state-of-the-art models such as R-GCN, the TransGCN framework trains relation embeddings and entity embeddings simultaneously during the graph convolution operation, and thus has fewer parameters than R-GCN. Experiments show that our models outperform state-of-the-art methods on both FB15K-237 and WN18RR.
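For reference, the two transformation assumptions mentioned above take their standard forms from the original TransE and RotatE models (stated here for context, not as the contribution of this work): for a triple $(h, r, t)$,
\[
\text{TransE: } \mathbf{h} + \mathbf{r} \approx \mathbf{t}, \qquad
\text{RotatE: } \mathbf{h} \circ \mathbf{r} \approx \mathbf{t},
\]
where, in RotatE, $\mathbf{h}, \mathbf{r}, \mathbf{t} \in \mathbb{C}^{d}$ with $|r_i| = 1$ in every dimension and $\circ$ denotes the element-wise (Hadamard) product, so that each relation acts as a rotation in the complex plane.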