A central feature of many deliberative processes, such as citizens' assemblies and deliberative polls, is the opportunity for participants to engage directly with experts. While participants are typically invited to propose questions for expert panels, only a limited number can be selected due to time constraints. This raises the challenge of how to choose a small set of questions that best represents the interests of all participants. We introduce an auditing framework for measuring the level of representation provided by a slate of questions, based on the social choice concept known as justified representation (JR). We present the first algorithms for auditing JR in the general utility setting, with our most efficient algorithm achieving a runtime of $O(mn\log n)$, where $n$ is the number of participants and $m$ is the number of proposed questions. We apply our auditing methods to historical deliberations, comparing the representativeness of (a) the actual questions posed to the expert panel (chosen by a moderator), (b) participants' questions chosen via integer linear programming, and (c) summary questions generated by large language models (LLMs). Our results highlight both the promise and the current limitations of LLMs in supporting deliberative processes. By integrating our methods into an online deliberation platform that has been used for hundreds of deliberations across more than 50 countries, we make it easy for practitioners to audit and improve representation in future deliberations.
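To make the auditing task concrete: in the standard approval-ballot formulation of JR (the abstract's algorithms handle the more general utility setting), a slate of $k$ questions fails JR if some group of at least $n/k$ participants all approve a common unselected question yet none of them approves anything on the slate. The following is a minimal sketch of that check under approval ballots; the function name and data layout are our own illustration, not the paper's implementation.

```python
def satisfies_jr(approvals, slate):
    """
    Audit justified representation (JR) for an approval profile.

    approvals: list of sets; approvals[i] holds the questions approved
               by participant i.
    slate:     set of selected questions, of size k.

    The slate fails JR if >= n/k participants share a common approved
    question while none of them approves any question on the slate.
    """
    n = len(approvals)
    k = len(slate)
    # Participants whose approved questions miss the slate entirely.
    unrepresented = [a for a in approvals if not (a & slate)]
    # Count, per question, how many unrepresented participants approve it.
    support = {}
    for ballot in unrepresented:
        for q in ballot:
            support[q] = support.get(q, 0) + 1
    # JR holds iff no question unites n/k or more unrepresented participants.
    return all(count < n / k for count in support.values())
```

For example, with four participants approving `{0}, {0}, {1}, {2}` and a slate of size two, the slate `{1, 2}` leaves two participants (half the group, meeting the $n/k = 2$ threshold) who jointly approve question 0, so it fails JR, whereas `{0, 1}` satisfies it. This brute-force check runs in $O(mn)$ time per audit; the paper's $O(mn\log n)$ algorithm addresses the harder general-utility setting.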