Entity Alignment, also known as Entity Matching, is the task of identifying, among the entities of heterogeneous source knowledge bases, those that refer to the same real-world entity. A common approach to entity alignment is to use the entities' attribute information to decide whether entities from different sources can be aligned.
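As a minimal, hypothetical sketch of this attribute-based approach (the entity data, the Jaccard similarity choice, and the 0.5 threshold below are illustrative assumptions, not taken from any particular system), alignment can be approximated by comparing normalized attribute key-value pairs:

```python
# Hypothetical sketch: align entities from two sources by attribute overlap.
# Entities, attribute names, and the threshold are illustrative examples.

def attribute_similarity(attrs_a: dict, attrs_b: dict) -> float:
    """Jaccard similarity over normalized (key, value) pairs."""
    pairs_a = {(k.lower(), str(v).strip().lower()) for k, v in attrs_a.items()}
    pairs_b = {(k.lower(), str(v).strip().lower()) for k, v in attrs_b.items()}
    if not pairs_a and not pairs_b:
        return 0.0
    return len(pairs_a & pairs_b) / len(pairs_a | pairs_b)

def align(source_a: dict, source_b: dict, threshold: float = 0.5):
    """Greedily pair each entity in source_a with its best match in source_b."""
    matches = []
    for id_a, attrs_a in source_a.items():
        best = max(source_b,
                   key=lambda id_b: attribute_similarity(attrs_a, source_b[id_b]))
        score = attribute_similarity(attrs_a, source_b[best])
        if score >= threshold:
            matches.append((id_a, best, score))
    return matches

# Two toy knowledge bases describing the same cities under different IDs.
kb1 = {"e1": {"name": "Paris", "country": "France"},
       "e2": {"name": "Berlin", "country": "Germany"}}
kb2 = {"x1": {"name": "Berlin", "country": "Germany"},
       "x2": {"name": "Paris", "country": "France", "population": "2.1M"}}

print(align(kb1, kb2))  # e1 aligns with x2, e2 aligns with x1
```

Real systems replace the exact-match Jaccard score with string similarity, embedding distance, or learned matchers, but the basic decision of "align if attribute similarity exceeds a threshold" is the same.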


Title: AlignNet: Unsupervised Entity Alignment

Abstract:

Recently developed deep learning models can learn to segment scenes into component objects without supervision. This opens up many novel research avenues, allowing agents to take objects (or entities), rather than pixels, as inputs. Unfortunately, although these models provide excellent segmentation of individual frames, they do not keep track of how objects segmented at one time step correspond (or align) to those at a later time step. The alignment (or correspondence) problem has impeded progress toward using object representations in downstream tasks. In this paper, we take steps toward solving the alignment problem, presenting AlignNet, an unsupervised alignment module.


Latest Content

In this work, we take a closer look at the evaluation of two families of methods for enriching information from knowledge graphs: Link Prediction and Entity Alignment. In the current experimental setting, multiple different scores are employed to assess different aspects of model performance. We analyze the informativeness of these evaluation measures and identify several shortcomings. In particular, we demonstrate that all existing scores can hardly be used to compare results across different datasets. Moreover, we demonstrate that varying the test set size automatically has an impact on the performance of the same model as measured by commonly used metrics for the Entity Alignment task. We show that this leads to various problems in the interpretation of results, which may support misleading conclusions. Therefore, we propose adjustments to the evaluation and demonstrate empirically how this supports a fair, comparable, and interpretable assessment of model performance. Our code is available at https://github.com/mberr/rank-based-evaluation.
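The test-set-size dependence the abstract describes can be seen with a small simulation (a sketch of ours for illustration, not code from the cited repository): for a completely uninformative ranker whose true rank is uniform over the candidate set of size n, the expected Hits@k is k/n, so the same model scores higher the smaller the test set is.

```python
import random

def hits_at_k(ranks, k):
    """Fraction of queries whose correct answer is ranked within the top k."""
    return sum(r <= k for r in ranks) / len(ranks)

def random_ranks(num_candidates, num_queries, seed=0):
    """Simulated ranks of a random (uninformative) model: uniform on 1..n."""
    rng = random.Random(seed)
    return [rng.randint(1, num_candidates) for _ in range(num_queries)]

# The same random baseline, evaluated against candidate sets of varying size:
for n in (100, 1000, 10000):
    ranks = random_ranks(n, num_queries=5000)
    print(f"n={n:5d}  Hits@10 ~ {hits_at_k(ranks, 10):.3f}")
```

Hits@10 drifts from roughly 0.1 toward 0.001 as the candidate set grows, even though the model is equally uninformative throughout, which is exactly why raw rank-based scores are hard to compare across datasets of different size.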

