Knowledge graph embeddings rank among the most successful methods for link prediction in knowledge graphs, i.e., the task of completing an incomplete collection of relational facts. A downside of these models is their strong sensitivity to model hyperparameters, in particular regularizers, which have to be extensively tuned to reach good performance [Kadlec et al., 2017]. We propose an efficient method for large-scale hyperparameter tuning by interpreting these models in a probabilistic framework. After a model augmentation that introduces per-entity hyperparameters, we use a variational expectation-maximization approach to tune thousands of such hyperparameters with minimal additional cost. Our approach is agnostic to details of the model and results in a new state of the art in link prediction on standard benchmark data.
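The core idea described above can be illustrated with a minimal sketch, which is not the paper's implementation: under a probabilistic reading, an L2 regularizer on an entity embedding corresponds to a Gaussian prior, and its strength becomes a per-entity variance that admits a cheap closed-form M-step update. The toy data gradient, learning rates, and variable names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, dim = 5, 4
E = rng.normal(size=(n_entities, dim))   # entity embeddings
sigma2 = np.ones(n_entities)             # per-entity prior variances (hyperparameters)

for _ in range(10):
    # E-step stand-in: one gradient step on a placeholder data loss plus the
    # Gaussian prior term (1 / (2 * sigma_i^2)) * ||e_i||^2 for each entity i.
    grad_loss = 0.1 * E                  # placeholder for the model's data gradient
    grad_prior = E / sigma2[:, None]     # gradient of the per-entity prior term
    E -= 0.01 * (grad_loss + grad_prior)

    # M-step: closed-form maximum-likelihood update of each prior variance,
    # sigma_i^2 = ||e_i||^2 / dim. Thousands of hyperparameters are retuned
    # at the cost of a single reduction over the embedding matrix.
    sigma2 = (E ** 2).mean(axis=1) + 1e-8

print(sigma2.shape)  # one tuned hyperparameter per entity
```

Because the M-step is closed-form, alternating it with ordinary training steps adds essentially no overhead, which is what makes tuning a hyperparameter per entity feasible at scale.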