Recent research on conversational search highlights the importance of mixed initiative in conversations. To enable mixed initiative, the system should be able to ask clarifying questions to the user. However, the ability of the underlying ranking models (which support conversational search) to account for these clarifying questions and answers when ranking documents has not been analysed at large. To this end, we analyse the performance of a lexical ranking model on a conversational search dataset with clarifying questions. We investigate, both quantitatively and qualitatively, how different aspects of clarifying questions and user answers affect ranking quality. We argue that the entire conversational round of clarification requires fine-grained treatment, based on the explicit feedback that is present in such mixed-initiative settings. Informed by our findings, we introduce a simple heuristic-based lexical baseline that significantly outperforms the existing naive baselines. Our work aims to enhance our understanding of the challenges present in this particular task and to inform the design of more appropriate conversational ranking models.