Advanced AI systems sometimes act in ways that differ from human intent. To gather clear, reproducible examples, we ran the Misalignment Bounty: a crowdsourced project that collected cases of agents pursuing unintended or unsafe goals. The bounty received 295 submissions, of which nine were awarded. This report explains the program's motivation and evaluation criteria, and walks through the nine winning submissions step by step.