Interactive reinforcement learning (IRL) has shown promise in enabling autonomous agents and robots to learn complex behaviours from human teachers, yet the dynamics of teacher selection remain poorly understood. This paper reveals an unexpected phenomenon in IRL: when given a choice between teachers with different reward structures, learning agents overwhelmingly prefer conservative, low-reward teachers (93.16% selection rate) over teachers offering 20× higher rewards. Through 1,250 experimental runs in navigation tasks with multiple expert teachers, we found that: (1) conservative bias dominates teacher selection, with agents systematically choosing the lowest-reward teacher and prioritising consistency over optimality; (2) critical performance thresholds exist at teacher availability ρ ≥ 0.6 and teacher accuracy ω ≥ 0.6, below which the framework fails catastrophically; and (3) the framework achieves a 159% improvement over baseline Q-learning under concept drift. These findings challenge fundamental assumptions about optimal teaching in RL and suggest implications for human-robot collaboration: human preferences for safety and consistency may align with the observed agent selection behaviour, which could inform training paradigms for safety-critical robotic applications.
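To make the availability and accuracy parameters concrete, the following minimal sketch models a single advice step from an interactive-RL teacher, assuming a discrete navigation action set. All identifiers here (ACTIONS, teacher_advice, select_teacher) are illustrative and not taken from the paper's implementation; the consistency-based selection rule is only one plausible mechanism that would reproduce the conservative bias described above.

```python
import random

ACTIONS = ["up", "down", "left", "right"]  # hypothetical navigation-task action set


def teacher_advice(teacher_policy, state, rho=0.6, omega=0.6):
    """Model one advice step from an interactive-RL teacher.

    rho   -- probability the teacher is available to advise on this step
    omega -- probability the advice matches the teacher's own policy;
             otherwise a uniformly random action is suggested
    Returns None when the teacher is unavailable.
    """
    if random.random() > rho:
        return None                       # teacher unavailable this step
    if random.random() <= omega:
        return teacher_policy(state)      # accurate advice
    return random.choice(ACTIONS)         # noisy / inaccurate advice


def select_teacher(consistency_scores):
    """Choose the teacher whose past advice has been most self-consistent.

    consistency_scores maps a teacher id to the running fraction of visits
    on which the teacher repeated its earlier advice for the same state.
    Scoring consistency rather than advertised reward is one way an agent
    could end up favouring a conservative, low-reward teacher.
    """
    return max(consistency_scores, key=consistency_scores.get)
```

Under this reading, low ρ means advice arrives too rarely and low ω means it is too noisy to trust, which offers one intuition (not a claim from the paper) for why performance might collapse below the reported thresholds.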