When robots interact with humans in homes, on roads, or in factories, the human's behavior often changes in response to the robot. Non-stationary humans are challenging for robot learners: actions the robot has learned to coordinate with the original human may fail after the human adapts to the robot. In this paper we introduce an algorithmic formalism that enables robots (i.e., ego agents) to co-adapt alongside dynamic humans (i.e., other agents) using only the robot's low-level states, actions, and rewards. A core challenge is that humans not only react to the robot's behavior, but the way in which they react inevitably changes both over time and across users. To address this challenge, our insight is that -- instead of building an exact model of the human -- robots can learn and reason over high-level representations of the human's policy and policy dynamics. Applying this insight, we develop RILI: Robustly Influencing Latent Intent. RILI first embeds low-level robot observations into predictions of the human's latent strategy and strategy dynamics. Next, RILI harnesses these predictions to select actions that influence the adaptive human towards advantageous, high-reward behaviors over repeated interactions. We demonstrate that -- given RILI's measured performance with users sampled from an underlying distribution -- we can probabilistically bound RILI's expected performance across new humans sampled from the same distribution. Our simulated experiments compare RILI to state-of-the-art representation and reinforcement learning baselines, and show that RILI better learns to coordinate with imperfect, noisy, and time-varying agents. Finally, we conduct two user studies in which RILI co-adapts alongside actual humans in a game of tag and a tower-building task. See videos of our user studies here: https://youtu.be/WYGO5amDXbQ
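To make the two components the abstract names concrete, the sketch below is a minimal illustration (not the authors' released implementation) of (1) an encoder that maps the previous interaction's low-level states, actions, and rewards to a latent strategy prediction and its dynamics, and (2) a robot policy conditioned on that prediction. The use of PyTorch, the network sizes, and the input dimensions are illustrative assumptions.

```python
# Hedged sketch of the high-level structure described in the abstract:
# embed past interactions into a latent strategy, predict how that strategy
# will change, and condition the robot's actions on the prediction.
import torch
import torch.nn as nn


class StrategyEncoder(nn.Module):
    """Embed the previous interaction (flattened states, actions, rewards)
    into a latent strategy z and a prediction of the next strategy z_next."""

    def __init__(self, tau_dim: int, latent_dim: int = 8, hidden: int = 64):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Linear(tau_dim, hidden), nn.ReLU(), nn.Linear(hidden, latent_dim)
        )
        # Models the human's strategy *dynamics*: how z evolves between interactions.
        self.dynamics = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, latent_dim)
        )

    def forward(self, tau: torch.Tensor):
        z = self.embed(tau)        # latent strategy inferred from the last interaction
        z_next = self.dynamics(z)  # predicted strategy for the upcoming interaction
        return z, z_next


class ConditionedPolicy(nn.Module):
    """Robot policy pi(a | s, z_next): actions depend on the current state and
    the predicted human strategy, so the robot can steer how that strategy evolves."""

    def __init__(self, state_dim: int, latent_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),
        )

    def forward(self, state: torch.Tensor, z_next: torch.Tensor):
        return self.net(torch.cat([state, z_next], dim=-1))


# Usage: after each interaction, encode the logged interaction vector, then
# condition the next interaction's actions on the predicted strategy.
encoder = StrategyEncoder(tau_dim=30)                                  # assumed flattened length
policy = ConditionedPolicy(state_dim=6, latent_dim=8, action_dim=2)    # assumed dimensions
tau_prev = torch.randn(1, 30)
_, z_next = encoder(tau_prev)
action = policy(torch.randn(1, 6), z_next)
```

In a full training loop the encoder would be learned from logged interactions and the conditioned policy optimized with an off-the-shelf reinforcement learning algorithm to maximize reward over repeated interactions; those training details are omitted here.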