Organizations that develop and deploy artificial intelligence (AI) systems need to take measures to reduce the associated risks. In this paper, we examine how AI companies could design an AI ethics board in a way that reduces risks from AI. We identify five high-level design choices: (1) What responsibilities should the board have? (2) What should its legal structure be? (3) Who should sit on the board? (4) How should it make decisions, and should its decisions be binding? (5) What resources does it need? We break down each of these questions into more specific sub-questions, list options, and discuss how different design choices affect the board's ability to reduce risks from AI. Several failures have shown that designing an AI ethics board can be challenging. This paper provides a toolbox that can help AI companies overcome these challenges.