As assistive and collaborative robots become more ubiquitous in the real world, we need to develop interfaces and controllers that users perceive as safe, in order to build trust and encourage adoption. In this Blue Sky paper, we discuss the need for co-evolving, task- and user-specific safety controllers that can accommodate people's individual safety preferences. We argue that while most adaptive controllers focus on behavioral adaptation, safety adaptation is also a major consideration for building trust in collaborative systems. Furthermore, we highlight the need for adaptation over time, to account for changes in users' preferences as their experience and trust grow. We provide a general formulation for what these interfaces should look like and what features are necessary to make them feasible and successful. In this formulation, users provide demonstrations together with labeled safety ratings, from which a safety value function is learned. These value functions can then be refined by updating the safety labels on the demonstrations and relearning the function. We discuss how this can be implemented at a high level, as well as some promising approaches and techniques for enabling it.
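The learn-and-relabel loop described above can be sketched in a few lines. This is a minimal illustrative sketch only: the linear model, the feature vectors, and all function names are assumptions for exposition, not the paper's actual formulation, which does not commit to a specific model class.

```python
# Hypothetical sketch: fit a user-specific safety value function from
# labeled demonstrations, then refit after the user revises a label.
# The linear model V(s) = w . s and all names here are illustrative
# assumptions, not the method proposed in the paper.
import numpy as np

def fit_safety_value(states, safety_labels):
    """Least-squares fit of a linear safety value function V(s) = w . s."""
    X = np.asarray(states, dtype=float)
    y = np.asarray(safety_labels, dtype=float)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def safety_value(w, state):
    """Predicted safety rating of a state under the learned function."""
    return float(np.dot(w, np.asarray(state, dtype=float)))

# Demonstration states (feature vectors) with user-provided safety ratings.
demos = [[1.0, 0.2], [0.5, 0.9], [0.1, 1.0]]
labels = [0.9, 0.5, 0.1]  # initial safety ratings in [0, 1]

w_initial = fit_safety_value(demos, labels)

# As experience and trust build, the user relabels a demonstration as
# safer; the value function is simply refit on the updated labels.
labels[1] = 0.8
w_updated = fit_safety_value(demos, labels)
```

After the relabel, the refit function's prediction for the relabeled demonstration moves toward the new rating, which is the essential behavior the formulation relies on: safety adaptation reduces to editing labels and relearning.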