
Title: A Survey on Dialog Management: Recent Advances and Challenges

Abstract:

Dialog management (DM) is an important component of task-oriented dialog systems. Given the dialog history, DM predicts the dialog state and decides the next action the dialog agent should take. Recently, dialog policy learning has been widely formulated as a reinforcement learning (RL) problem, and a growing body of research has focused on the applicability of DM. In this paper, we survey recent advances and challenges within three critical topics for DM:

  • improving model scalability to facilitate dialog system modeling in new scenarios;
  • dealing with the data scarcity problem in dialog policy learning;
  • enhancing training efficiency to achieve better task-completion performance.

We believe that this survey can shed light on future research in dialog management.

Latest Content

Communication is a cooperative effort that requires reaching mutual understanding among the participants. Humans implicitly use commonsense reasoning to produce natural and logically coherent responses. As a step towards fluid human-AI communication, we study whether response generation (RG) models can emulate the human reasoning process and use common sense to help produce better-quality responses. We aim to tackle two research questions: how can conversational common sense be formalized, and how can an RG model's capability to use common sense be examined? We first propose a task, CEDAR: Causal common sEnse in DiAlogue Response generation, which concretizes common sense as textual explanations for what might lead to a response and evaluates RG models' behavior by comparing the modeling loss given a valid explanation with that given an invalid one. We then introduce a process that automatically generates such explanations and ask humans to verify them. Finally, we design two probing settings for RG models that target two reasoning capabilities using the verified explanations. We find that RG models have a hard time determining the logical validity of explanations but can easily identify the grammatical naturalness of an explanation.
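The loss-comparison probe described above can be sketched in a few lines. This is a minimal illustration only: the toy bigram language model, the corpus, and all function names (`train_bigram`, `modeling_loss`) are assumptions for demonstration, not the paper's actual setup, which would use a pretrained neural RG model.

```python
import math
from collections import defaultdict

def train_bigram(corpus):
    # Count word-level bigrams over a tiny corpus (stand-in for a real LM).
    counts = defaultdict(lambda: defaultdict(int))
    vocab = set()
    for sent in corpus:
        toks = ["<s>"] + sent.split()
        vocab.update(toks)
        for a, b in zip(toks, toks[1:]):
            counts[a][b] += 1
    v = len(vocab)

    def prob(a, b):
        # Add-one smoothed conditional probability P(b | a).
        return (counts[a][b] + 1) / (sum(counts[a].values()) + v)
    return prob

def modeling_loss(prob, explanation, response):
    # Average negative log-likelihood of the response tokens when the
    # explanation is prepended as context -- the probe's "modeling loss".
    toks = ["<s>"] + explanation.split() + response.split()
    pairs = list(zip(toks, toks[1:]))
    n_resp = len(response.split())
    return sum(-math.log(prob(a, b)) for a, b in pairs[-n_resp:]) / n_resp

corpus = [
    "i am tired because i ran a marathon",
    "you should rest because you are tired",
    "running a marathon makes you tired",
]
prob = train_bigram(corpus)

valid = "i ran a marathon"    # plausible cause of the response
invalid = "i ate a sandwich"  # implausible cause
response = "you should rest"

loss_valid = modeling_loss(prob, valid, response)
loss_invalid = modeling_loss(prob, invalid, response)
```

With a capable pretrained model, one would expect the valid explanation to yield the lower loss; the toy bigram model here only illustrates the mechanics of comparing the two losses, not the expected ordering.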
