In this work we describe our submission to the product ranking task of the Amazon KDD Cup 2022. We rely on a recipe that has proven effective in previous competitions: we focus our efforts on efficiently training and deploying large language models, such as mT5, while reducing task-specific adaptations to a minimum. Despite the simplicity of our approach, our best model scored less than 0.004 nDCG@20 below the top submission. As the top 20 teams all achieved an nDCG@20 close to 0.90, we argue that more challenging e-Commerce evaluation datasets are needed to discriminate between retrieval methods.
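For reference, nDCG@20 (the evaluation metric above) is the discounted cumulative gain of the top 20 ranked results, normalized by the ideal ranking's DCG. A minimal sketch, assuming graded relevance labels (the label values and query here are illustrative, not taken from the competition data):

```python
import math

def dcg_at_k(relevances, k=20):
    # Discounted cumulative gain: each relevance is discounted by
    # log2(rank + 1), so items ranked lower contribute less.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(ranked_relevances, k=20):
    # Normalize by the DCG of the ideal (descending-relevance) ordering,
    # so a perfect ranking scores 1.0.
    ideal_dcg = dcg_at_k(sorted(ranked_relevances, reverse=True), k)
    return dcg_at_k(ranked_relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Hypothetical graded labels for one query's ranked results,
# e.g. exact=3, substitute=2, complement=1, irrelevant=0.
score = ndcg_at_k([3, 2, 0, 1], k=20)
print(round(score, 4))
```

Note that with graded labels, even a ranking with a few misplaced low-relevance items scores close to 1.0, which is consistent with the tightly clustered leaderboard the abstract describes.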