How can we train an assistive human-machine interface (e.g., an electromyography-based limb prosthesis) to translate a user's raw command signals into the actions of a robot or computer when there is no prior mapping, we cannot ask the user for supervision in the form of action labels or reward feedback, and we do not have prior knowledge of the tasks the user is trying to accomplish? The key idea in this paper is that, regardless of the task, when an interface is more intuitive, the user's commands are less noisy. We formalize this idea as a completely unsupervised objective for optimizing interfaces: the mutual information between the user's command signals and the induced state transitions in the environment. To evaluate whether this mutual information score can distinguish between effective and ineffective interfaces, we conduct an observational study on 540K examples of users operating various keyboard and eye gaze interfaces for typing, controlling simulated robots, and playing video games. The results show that our mutual information scores are predictive of the ground-truth task completion metrics in a variety of domains, with an average Spearman's rank correlation of 0.43. In addition to offline evaluation of existing interfaces, we use our unsupervised objective to learn an interface from scratch: we randomly initialize the interface, have the user attempt to perform their desired tasks using the interface, measure the mutual information score, and update the interface to maximize mutual information through reinforcement learning. We evaluate our method through a user study with 12 participants who perform a 2D cursor control task using a perturbed mouse, and an experiment with one user playing the Lunar Lander game using hand gestures. The results show that we can learn an interface from scratch, without any user supervision or prior knowledge of tasks, in under 30 minutes.
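To make the objective concrete, here is a minimal sketch of how a mutual information score between command signals and induced state transitions could be estimated offline from logged interaction data. It assumes the commands and transitions have already been discretized into categorical bins and uses a simple plug-in estimator; the variable names, the discretization, and the synthetic data are illustrative assumptions, not the paper's actual estimator (which would need to handle raw, continuous signals).

```python
# Minimal sketch, assuming commands and state transitions are discretized
# into categorical bins. The paper's estimator for raw continuous signals
# would differ (e.g., a learned variational approximation).
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)

# Hypothetical logged data: one discrete command label and one discrete
# state-transition label per timestep of interface use.
commands = rng.integers(0, 8, size=10_000)  # e.g., 8 command clusters
# A noisy command-to-transition mapping, standing in for a real interface.
transitions = (commands + rng.integers(0, 2, size=10_000)) % 8

# Plug-in estimate of I(command; transition), in nats. Higher scores
# would indicate a less noisy, more intuitive interface under this idea.
mi_nats = mutual_info_score(commands, transitions)
print(f"MI score: {mi_nats:.3f} nats")
```

In this toy setup, an interface whose transitions track the user's commands more deterministically yields a higher score, which is the property the offline evaluation above relies on.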