Federal agencies and researchers increasingly use large language models to analyze and simulate public opinion. When AI mediates between the public and policymakers, accuracy across intersecting identities becomes consequential; inaccurate group-level estimates can mislead outreach, consultation, and policy design. While prior research examines intersectionality in LLM outputs, no study has compared these outputs against real human responses across intersecting identities. This gap is particularly urgent for climate policy, where public opinion is contested and diverse. We investigate how LLMs represent intersectional patterns in U.S. climate opinions. We prompted six LLMs with profiles of 978 respondents from a nationally representative U.S. climate opinion survey and compared the AI-generated responses to the actual human answers across 20 questions. We find that LLMs appear to compress the diversity of American climate opinions, predicting less-concerned groups as more concerned and vice versa. This compression is intersectional: LLMs apply uniform gender assumptions that match reality for White and Hispanic Americans but misrepresent Black Americans, whose actual gender patterns differ. These patterns, which may be invisible to standard auditing approaches, could undermine equitable climate governance.
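To make the comparison concrete, the sketch below illustrates one way a respondent profile could be turned into a persona prompt and the model's answer scored against the human's survey response. It is a minimal illustration, not the authors' code: the field names, question wording, answer scale, and the `query_model` placeholder are all hypothetical assumptions.

```python
# Minimal sketch (assumptions, not the study's implementation): build a persona
# prompt from a respondent's demographics, query an LLM, and score agreement
# with the human answer. Field names, question text, and query_model are
# hypothetical placeholders.

def build_prompt(profile: dict, question: str, options: list[str]) -> str:
    """Describe the respondent, then pose one survey item with its answer scale."""
    persona = (
        f"You are a {profile['age']}-year-old {profile['race']} {profile['gender']} "
        f"living in {profile['region']}, with {profile['education']} education "
        f"and {profile['party']} political leanings."
    )
    scale = "; ".join(options)
    return f"{persona}\nQuestion: {question}\nAnswer with exactly one of: {scale}."

def agreement_rate(pairs: list[tuple[str, str]]) -> float:
    """Fraction of items where the model's answer matches the human's."""
    matches = sum(model == human for model, human in pairs)
    return matches / len(pairs) if pairs else 0.0

# Illustrative usage with one made-up respondent and item.
profile = {"age": 42, "race": "Black", "gender": "woman", "region": "the Midwest",
           "education": "a college", "party": "independent"}
question = "How worried are you about global warming?"
options = ["Not at all worried", "Not very worried", "Somewhat worried", "Very worried"]

def query_model(prompt: str) -> str:
    # Placeholder standing in for a call to any of the six LLMs under study.
    return "Very worried"

model_answer = query_model(build_prompt(profile, question, options))
human_answer = "Somewhat worried"  # illustrative value, not from the survey data
print(agreement_rate([(model_answer, human_answer)]))
```

Aggregating such agreement scores by intersecting identity groups (e.g., race by gender) is what would surface the compression and misrepresentation patterns the abstract describes.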