The integrity of many contemporary AI systems is compromised by the misuse of Human-in-the-Loop (HITL) models to obscure a continuing heavy dependence on human labor. We define this structural dependency as Human-Instead-of-AI (HISOAI): an ethically problematic and economically unsustainable design in which human workers function as concealed operational substitutes rather than intentional, high-value collaborators. To address this issue, we introduce the AI-First, Human-Empowered (AFHE) paradigm, which requires AI systems to demonstrate a quantifiable level of functional independence prior to deployment. This requirement is formalized through the AI Autonomy Coefficient, which measures the proportion of tasks completed without mandatory human intervention. We further propose the AFHE Deployment Algorithm, an algorithmic gate that enforces a minimum autonomy threshold during offline evaluation and shadow deployment. Our results show that the AI Autonomy Coefficient reliably flags HISOAI systems, which exhibit an autonomy level of 0.38, whereas systems governed by the AFHE framework achieve an autonomy level of 0.85. We conclude that AFHE provides a metric-driven approach to verifiable autonomy, transparency, and sustainable operational integrity in modern AI systems.
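Read directly from the definition above, the AI Autonomy Coefficient can be written as a simple ratio; the notation below is ours, introduced for clarity rather than taken from the paper:

\[
A_{\mathrm{auto}} \;=\; \frac{N_{\mathrm{independent}}}{N_{\mathrm{total}}}
\]

where \(N_{\mathrm{independent}}\) counts tasks completed without mandatory human intervention and \(N_{\mathrm{total}}\) counts all evaluated tasks. Under this reading, an HISOAI system scoring 0.38 resolves roughly 38% of tasks without a human stepping in, while an AFHE-governed system at 0.85 resolves about 85% autonomously.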
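As a concrete illustration of the gating logic, the sketch below implements a minimal two-stage check (offline evaluation, then shadow deployment) against a minimum autonomy threshold. All names and the 0.8 threshold in the example are illustrative assumptions, not the paper's reference implementation of the AFHE Deployment Algorithm.

```python
# Minimal sketch of an AFHE-style deployment gate.
# Function and parameter names (autonomy_coefficient, afhe_deployment_gate,
# min_autonomy) are illustrative, not taken from the paper.

from typing import Iterable


def autonomy_coefficient(completed_without_human: Iterable[bool]) -> float:
    """Proportion of tasks completed without mandatory human intervention."""
    outcomes = list(completed_without_human)
    if not outcomes:
        return 0.0
    return sum(outcomes) / len(outcomes)


def afhe_deployment_gate(
    offline_outcomes: Iterable[bool],
    shadow_outcomes: Iterable[bool],
    min_autonomy: float,
) -> bool:
    """Approve deployment only if both stages clear the autonomy threshold.

    Stage 1: offline evaluation on held-out tasks.
    Stage 2: shadow deployment alongside the existing workflow.
    """
    if autonomy_coefficient(offline_outcomes) < min_autonomy:
        return False  # blocked before reaching shadow deployment
    return autonomy_coefficient(shadow_outcomes) >= min_autonomy


# Example: a system at HISOAI-like autonomy (~0.38) is rejected,
# while one at AFHE-like autonomy (~0.85) clears a 0.8 threshold.
if __name__ == "__main__":
    hisoai_like = [True] * 38 + [False] * 62
    afhe_like = [True] * 85 + [False] * 15
    print(afhe_deployment_gate(hisoai_like, hisoai_like, min_autonomy=0.8))  # False
    print(afhe_deployment_gate(afhe_like, afhe_like, min_autonomy=0.8))      # True
```

The two-stage structure mirrors the abstract's description of the gate (offline evaluation followed by shadow deployment); how the task outcomes themselves are labeled as "completed without mandatory human intervention" is left to the evaluation protocol.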