The goal of knowledge representation learning is to embed entities and relations into a low-dimensional, continuous vector space. Pushing a model to its limit to obtain better results is of great significance for knowledge graph applications. We propose a simple and elegant method, Trans-DLR, whose main idea is dynamic learning rate control during training. Our method achieves a remarkable improvement compared with a recent GAN-based method. Moreover, we introduce a new negative sampling trick that corrupts not only entities but also relations, with different probabilities. We also develop an efficient approach, which fully exploits multiprocessing and parallel computing, to speed up evaluation of the model on link prediction tasks. Experiments show that our method is effective.
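The negative sampling trick described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the probability `p_rel` of corrupting the relation (versus an entity) is a hypothetical hyperparameter, and the uniform replacement choice is an assumption.

```python
import random

def corrupt_triple(head, relation, tail, entities, relations, p_rel=0.1):
    """Build one negative sample from a (head, relation, tail) triple.

    With probability p_rel the relation is replaced by a different one;
    otherwise the head or the tail entity is replaced (each with equal
    chance). p_rel is a hypothetical hyperparameter for illustration.
    """
    if random.random() < p_rel:
        # Corrupt the relation.
        relation = random.choice([r for r in relations if r != relation])
    elif random.random() < 0.5:
        # Corrupt the head entity.
        head = random.choice([e for e in entities if e != head])
    else:
        # Corrupt the tail entity.
        tail = random.choice([e for e in entities if e != tail])
    return head, relation, tail
```

Exactly one of the three positions is altered per call, so the corrupted triple is guaranteed to differ from the original.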
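The parallel link-prediction evaluation can be sketched as below. This is a toy sketch under stated assumptions: a TransE-style score ||h + r - t|| is assumed (the abstract does not specify the scoring function), and the rank computation for each test triple is distributed over worker processes with `multiprocessing.Pool`.

```python
import numpy as np
from multiprocessing import Pool

def transe_rank(args):
    """Rank of the true tail entity under the assumed TransE score
    ||h + r - t|| (lower score is better). Arguments are packed into
    one tuple so the function can be used with Pool.map."""
    ent_emb, h_vec, r_vec, true_tail = args
    scores = np.linalg.norm(h_vec + r_vec - ent_emb, axis=1)
    # Rank = 1 + number of entities scoring strictly better than the truth.
    return 1 + int(np.sum(scores < scores[true_tail]))

def evaluate(ent_emb, rel_emb, test_triples, workers=4):
    """Mean rank over the test set, computed in parallel worker processes."""
    jobs = [(ent_emb, ent_emb[h], rel_emb[r], t) for h, r, t in test_triples]
    with Pool(workers) as pool:
        ranks = pool.map(transe_rank, jobs)
    return float(np.mean(ranks))
```

Because each test triple's rank is independent of the others, the evaluation is embarrassingly parallel and scales close to linearly with the number of worker processes.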