This research applies Harold Demsetz's concept of the nirvana approach to AI governance and debunks three fallacies common in AI policy proposals: "the grass is always greener on the other side," "free lunch," and "the people could be different." In doing so, I expose fundamental flaws in current AI regulatory proposals. First, some commentators intuitively believe that people are more reliable than machines and that government controls risk better than corporate self-regulation, yet they never fully compare the status quo with the proposed replacements. Second, when proposing regulatory tools, some policymakers and researchers fail to recognize, or even gloss over, the harms and costs inherent in those very proposals. Third, some policy proposals rest on a false comparison between an AI-driven world, where AI does create certain risks, and an entirely idealized world in which no risk exists at all. The appropriate comparison is instead between the world in which AI causes risks and the real world, where risks are everywhere yet people live well with them. The prevalence of these fallacies in AI governance underscores a broader problem: the tendency to idealize potential solutions without fully considering their real-world implications. Such idealization can yield regulatory proposals that are not only impractical but potentially harmful to innovation and societal progress.