The AI landscape demands that a broad set of legal, ethical, and societal considerations be accounted for in order to develop ethical AI (eAI) solutions that sustain human values and rights. Currently, a variety of guidelines and a handful of niche tools exist to address individual challenges. However, it is also well established that many organizations face practical difficulties in navigating these considerations from a risk management perspective. New methodologies are therefore needed to provide a well-vetted, real-world-applicable structure and path through the checks and balances required for ethically assessing and guiding the development of AI. In this paper we show that a multidisciplinary research approach, spanning cross-sectional viewpoints, is the foundation of a pragmatic definition of the ethical and societal risks faced by organizations using AI. Equally important are our findings on the cross-structural governance required for implementing eAI successfully. Based on evidence acquired from our multidisciplinary research investigation, we propose a novel data-driven risk assessment methodology, entitled DRESS-eAI. In addition, through the evaluation of our methodological implementation, we demonstrate its state-of-the-art relevance as a tool for sustaining human values in the data-driven AI era.