AI policymakers are responsible for delivering effective governance mechanisms that can ensure safe, aligned and trustworthy AI development. However, the information environment offered to policymakers is characterised by an unnecessarily low signal-to-noise ratio, favouring regulatory capture and creating deep uncertainty and divides over which risks should be prioritised from a governance perspective. We posit that the current pace of publication in AI, combined with the lack of strong scientific standards in the form of weak reproducibility protocols, effectively erodes the ability of policymakers to enact meaningful policy and governance measures. Our paper outlines how AI research could adopt stricter reproducibility guidelines to assist governance endeavours and improve consensus on the AI risk landscape. We evaluate the forthcoming reproducibility crisis within AI research through the lens of crises in other scientific domains, providing a commentary on how adopting reproducibility protocols centred on preregistration, increased statistical power and negative-result publication can enable effective AI governance. While we maintain that AI governance must be reactive due to AI's significant societal implications, we argue that policymakers and governments must treat reproducibility protocols as a core tool in the governance arsenal and demand higher standards for AI research. Code to replicate data and figures: https://github.com/IFMW01/reproducibility-the-new-frontier-in-ai-governance
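As an illustration of the statistical-power point above, the sketch below (not taken from the paper's repository; it assumes Python with statsmodels installed) shows how the number of experimental runs needed to reliably detect an effect grows sharply as the effect size shrinks, which is one reason under-powered AI evaluations are difficult to reproduce.

```python
# Hypothetical illustration (not from the paper's codebase): a standard power
# analysis for a two-sample t-test, e.g. comparing two models across repeated
# runs with different random seeds. It reports how many runs per condition are
# needed to reach 80% power at alpha = 0.05 for small, medium and large effects.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()

for effect_size in (0.2, 0.5, 0.8):  # Cohen's d: small, medium, large
    runs_per_condition = power_analysis.solve_power(
        effect_size=effect_size, alpha=0.05, power=0.8
    )
    print(f"d = {effect_size}: ~{runs_per_condition:.0f} runs per condition")
```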