As consensus emerges across the various published AI ethics principles, a gap remains between high-level principles and practical techniques that can be readily adopted to design and develop responsible AI systems. We examine the practices and experiences of researchers and engineers from Australia's national scientific research agency (CSIRO), who are involved in designing and developing AI systems for many purposes. Semi-structured interviews were used to examine how the participants' practices relate to and align with a set of high-level AI ethics principles proposed by the Australian Government. The principles comprise: (1) privacy protection and security, (2) reliability and safety, (3) transparency and explainability, (4) fairness, (5) contestability, (6) accountability, (7) human-centred values, and (8) human, social and environmental wellbeing. Discussion of the insights gained from the interviews covers various tensions and trade-offs between the principles and provides suggestions for implementing each high-level principle. We also propose a set of recommendations aimed at building organisational capacity and capability for responsible AI.