With the growing adoption of agent-based models in policy evaluation, a pressing question arises: can such systems effectively simulate and analyze complex social scenarios to inform policy decisions? Addressing this challenge could significantly enhance the policy-making process, offering researchers and practitioners a systematic way to validate, explore, and refine policy outcomes. To advance this goal, we introduce PolicySimEval, the first benchmark designed to evaluate the capability of agent-based simulations on policy assessment tasks. PolicySimEval aims to reflect the real-world complexities faced by social scientists and policymakers. The benchmark comprises three categories of evaluation tasks: (1) 20 comprehensive scenarios that replicate end-to-end policy modeling challenges, complete with annotated expert solutions; (2) 65 targeted sub-tasks that address specific aspects of agent-based simulation (e.g., agent behavior calibration); and (3) 200 auto-generated tasks to enable large-scale evaluation and method development. Experiments show that current state-of-the-art frameworks struggle with these tasks: the highest-performing system achieves a coverage rate of only 24.5\% on comprehensive scenarios, 15.04\% on sub-tasks, and 14.5\% on auto-generated tasks. These results highlight the difficulty of the benchmark and the gap between current capabilities and the requirements of real-world policy evaluation.
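The abstract does not define how the coverage rate is computed. As a hedged illustration only, the sketch below shows one plausible reading: the fraction of expert-annotated solution requirements that a system's output satisfies, averaged per task within a category. The `TaskResult` structure, the `coverage_rate` function, and all numbers in the toy example are assumptions for illustration, not the paper's actual scoring code.

```python
# Hypothetical sketch of a "coverage rate" metric; the benchmark's real
# scoring procedure is not specified in the abstract.
from dataclasses import dataclass


@dataclass
class TaskResult:
    satisfied: int  # requirements from the expert solution that the system covered
    total: int      # total requirements in the expert-annotated solution


def coverage_rate(results: list[TaskResult]) -> float:
    """Mean per-task coverage across a task category, as a percentage."""
    per_task = [r.satisfied / r.total for r in results if r.total > 0]
    return 100.0 * sum(per_task) / len(per_task)


# Toy example with invented numbers (the real benchmark has 20 comprehensive
# scenarios, 65 sub-tasks, and 200 auto-generated tasks):
comprehensive = [TaskResult(2, 8), TaskResult(3, 10), TaskResult(1, 6)]
print(f"coverage: {coverage_rate(comprehensive):.1f}%")
```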