Knowledge graphs (KGs), which consist of large numbers of triples, have become widespread recently, and many knowledge graph embedding (KGE) methods have been proposed to embed the entities and relations of a KG into continuous vector spaces. Such embedding methods aim to simplify various in-KG tasks (e.g., link prediction) and out-of-KG tasks (e.g., question answering), and can be viewed as general solutions for representing KGs. However, existing KGE methods are not applicable to inductive settings, where a model trained on source KGs is tested on target KGs containing entities unseen during training. Existing works on KGs in inductive settings can only solve the inductive relation prediction task and cannot handle other out-of-KG tasks as generally as KGE methods, since they do not produce embeddings for entities. In this paper, to achieve inductive knowledge graph embedding, we propose a model, MorsE, which does not learn embeddings for individual entities but instead learns transferable meta-knowledge that can be used to produce entity embeddings. Such meta-knowledge is modeled by entity-independent modules and learned via meta-learning. Experimental results show that our model significantly outperforms corresponding baselines on both in-KG and out-of-KG tasks in inductive settings.
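To make the KGE idea concrete, the following is a minimal sketch of triple scoring in the style of TransE, a classic KGE method; it is used here purely to illustrate "embedding entities and relations into continuous vector spaces" and is not the MorsE model. The entity and relation names, the embedding dimension, and the random initialization are all illustrative assumptions.

```python
import numpy as np

# Illustrative TransE-style scoring sketch (NOT the MorsE method from the
# paper). Entities and relations are embedded as vectors in R^dim; a triple
# (h, r, t) is scored by how close h + r lands to t.
rng = np.random.default_rng(0)
dim = 8  # embedding dimension (arbitrary choice for the sketch)

# Hypothetical tiny KG: embeddings would normally be trained, not random.
entities = {"Paris": rng.normal(size=dim), "France": rng.normal(size=dim)}
relations = {"capital_of": rng.normal(size=dim)}

def score(head: str, relation: str, tail: str) -> float:
    # Lower distance ||h + r - t|| means a more plausible triple,
    # so we negate it to get a "higher is better" score.
    h, r, t = entities[head], relations[relation], entities[tail]
    return -float(np.linalg.norm(h + r - t))

s = score("Paris", "capital_of", "France")
print(s)
```

Note that classical methods like this assign one trained vector per entity, which is exactly why they cannot handle entities unseen at training time; MorsE instead learns entity-independent meta-knowledge that produces such embeddings on the fly.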