Developing human-controllable artificial intelligence (AI) and achieving meaningful human control (MHC) has become a vital principle for addressing the challenges posed by increasingly autonomous AI, ensuring ethical alignment and effective governance. MHC is also a critical focus of human-centered AI (HCAI) research and application. This chapter systematically examines MHC in AI, articulating its foundational principles and future trajectory. MHC is not simply the right to operate a system; it is the unity of human understanding, human intervention, and the traceability of responsibility in AI decision-making, which requires technological design, AI governance, and human actors to work in concert. MHC ensures that AI autonomy serves humans without constraining technological progress. The mode of human control needs to match the level of the technology, and human supervision should balance trust in and skepticism toward AI. For future AI systems, MHC mandates human controllability as a prerequisite, requiring: (1) technical architectures with embedded mechanisms for human control; (2) human-AI interaction optimized to make AI behavior accessible to human understanding; and (3) an evolution of AI systems that harmonizes intelligence with human controllability. Governance must prioritize HCAI strategies: policies that balance innovation with risk mitigation, human-centered participatory frameworks that transcend the dominance of technical elites, and global promotion of MHC as a universal governance paradigm to safeguard HCAI development. Looking ahead, there is a need to strengthen interdisciplinary research on the controllability of AI systems, enhance ethical and legal awareness among stakeholders, move beyond simplistic technology-design perspectives, and focus on the knowledge construction, interpretation of complexity, and influencing factors surrounding human control. By fostering MHC, the development of human-controllable AI can be further advanced, delivering HCAI systems.