Effective governance of artificial intelligence (AI) requires public engagement, yet communication strategies centered on existential risk have not produced sustained mobilization. In this paper, we examine the psychological and attitudinal barriers that limit engagement with extinction narratives, including mortality avoidance, exponential growth bias, and the absence of self-referential anchors. We contrast these barriers with evidence that public concern over AI rises when risks are framed in terms of proximate harms such as employment disruption, relational instability, and mental health problems. We validate these findings through message testing with 1,063 respondents: the risks AI poses to Jobs and Children show the highest potential to mobilize people, while Existential Risk is the lowest-performing theme across all demographics. Using survey data from five countries, we identify two segments (Tech-Positive Urbanites and World Guardians) that are particularly receptive to such framing and more likely to participate in civic action. Finally, we argue that mobilization around these everyday concerns can raise the political salience of AI, creating "policy demand" for structural measures to mitigate AI risks. We conclude that this strategy creates the conditions for successful regulatory change.