Pre-trained language models have achieved great success in various large-scale information retrieval tasks. However, most pre-training tasks rely on synthetic retrieval data, where a query produced by a tailored rule is assumed to be the query a user would issue for the given document or passage. Therefore, we explore using large-scale click logs to pre-train a language model instead of relying on simulated queries. Specifically, we propose to use user behavior features to pre-train a debiased language model for document ranking. Extensive experiments on desensitized Baidu click logs validate the effectiveness of our method. Our team won 1st place in the WSDM Cup 2023 Pre-training for Web Search task with a Discounted Cumulative Gain @ 10 (DCG@10) score of 12.16525 on the final leaderboard.
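For reference, the leaderboard score can be read against a standard formulation of the evaluation metric. The exact gain function used by the Cup is not stated here, so the following definition, with graded relevance label $rel_i$ at rank $i$ and a linear gain, is only one common variant:

$$\mathrm{DCG@10} = \sum_{i=1}^{10} \frac{rel_i}{\log_2(i+1)}$$

An alternative formulation replaces the linear gain with an exponential gain, $2^{rel_i}-1$, which weights highly relevant documents more strongly.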