Leveraging biased click data to optimize learning-to-rank systems has been a popular approach in information retrieval. Because click data is often noisy and biased, a variety of methods have been proposed to construct unbiased learning to rank (ULTR) algorithms that learn unbiased ranking models. Among them, automatic unbiased learning to rank (AutoULTR) algorithms that jointly learn user bias models (i.e., propensity models) with unbiased rankers have received a lot of attention due to their superior performance and low deployment cost in practice. Despite their differences in theory and algorithm design, existing studies on ULTR usually use uni-variate ranking functions to score each document or result independently. On the other hand, recent advances in context-aware learning-to-rank models have shown that multivariate scoring functions, which read multiple documents together and predict their ranking scores jointly, are more powerful than uni-variate ranking functions in ranking tasks with human-annotated relevance labels. Whether such superior performance would hold in ULTR with noisy data, however, is mostly unknown. In this paper, we investigate existing multivariate scoring functions and AutoULTR algorithms in theory and prove that permutation invariance is a crucial factor that determines whether a context-aware learning-to-rank model can be applied to existing AutoULTR frameworks. Our experiments with synthetic clicks on two large-scale benchmark datasets show that AutoULTR models with permutation-invariant multivariate scoring functions significantly outperform those with uni-variate scoring functions and permutation-variant multivariate scoring functions.
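The permutation-invariance property the abstract highlights can be illustrated with a minimal sketch. The scorers below are hypothetical toy functions (not the paper's actual models): a uni-variate scorer reads each document alone, while a permutation-invariant multivariate scorer conditions each score on all candidate documents through a symmetric pooling term, so permuting the input list simply permutes the output scores.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=4)  # hypothetical per-document weight vector


def univariate_scores(X):
    # Scores each document independently: score_i = f(x_i).
    return X @ W


def multivariate_scores(X):
    # Context-aware scorer: each score also depends on the whole candidate
    # set through mean pooling, a symmetric (order-insensitive) operation.
    # The score vector is therefore permutation-equivariant: reordering the
    # input documents reorders the scores identically.
    context = X.mean(axis=0)
    return X @ W + X @ context


X = rng.normal(size=(5, 4))   # 5 candidate documents, 4 features each
perm = rng.permutation(5)

# Scoring a permuted list yields the correspondingly permuted scores,
# which is what lets such a scorer plug into a ULTR framework that
# assumes scores attach to documents, not to positions.
assert np.allclose(multivariate_scores(X[perm]), multivariate_scores(X)[perm])
```

A permutation-variant scorer (e.g., one that concatenates documents in input order and feeds them to an MLP) would fail this check, since its scores would change with the input ordering rather than follow the documents.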


