In automated planning, the need for explanations arises when there is a mismatch between a proposed plan and the user's expectation. We frame Explainable AI Planning in the context of the plan negotiation problem, in which a succession of hypothetical planning problems is generated and solved. The object of the negotiation is for the user to understand the proposed plan and ultimately arrive at a satisfactory plan. We present the results of a user study demonstrating that when users ask questions about plans, those questions are contrastive, i.e., "why A rather than B?". We use the data from this study to construct a taxonomy of user questions that often arise during plan negotiation. We formally define our approach to plan negotiation through model restriction as an iterative process. This approach generates hypothetical problems and contrastive plans by restricting the model through constraints implied by user questions. We formally define model-based compilations in PDDL2.1 of each constraint derived from a user question in the taxonomy, and empirically evaluate the compilations in terms of computational complexity. The compilations are implemented as part of an explanation framework that employs iterative model restriction, and we demonstrate its benefits in a second user study.
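To make the model-restriction idea concrete, the sketch below shows one standard way a constraint implied by a contrastive question could be compiled into the model. It is a minimal illustration under assumed names (a `drive` action and an `allowed` guard predicate), not the paper's exact PDDL2.1 compilation:

```pddl
;; Hypothetical sketch: the user asks "why (drive truck1 depot market)
;; rather than a route via the warehouse?", which implies the constraint
;; "do not apply (drive truck1 depot market)". One standard compilation
;; adds a static 'allowed' guard predicate to the action's precondition
;; and omits the forbidden ground instance from the initial state.

(define (domain delivery-restricted)
  (:requirements :strips :typing)
  (:types truck location)
  (:predicates
    (at ?t - truck ?l - location)
    (road ?from ?to - location)
    (allowed ?t - truck ?from ?to - location))  ; static guard predicate

  (:action drive
    :parameters (?t - truck ?from ?to - location)
    :precondition (and (at ?t ?from) (road ?from ?to)
                       (allowed ?t ?from ?to))  ; restriction hook
    :effect (and (not (at ?t ?from)) (at ?t ?to))))

;; In the companion problem file, every (allowed ...) fact is listed in
;; :init EXCEPT (allowed truck1 depot market). Any plan for this
;; restricted, hypothetical problem therefore avoids the questioned
;; action and can serve as the contrastive plan shown to the user.
```

Solving the restricted problem and comparing the resulting plan with the original is one iteration of the negotiation loop; further questions tighten the model further.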