Shared autonomy is a flexible framework that enables robots to operate across a spectrum of autonomy levels, allowing efficient task execution with minimal human oversight. However, humans may be intimidated by robots' autonomous decision-making capabilities due to perceived risks and a lack of trust. This paper proposes a trust-preserved shared autonomy strategy that allows robots to seamlessly adjust their autonomy level, striving to optimize team performance and enhance their acceptance among human collaborators. By augmenting the relational event modeling framework with Bayesian learning techniques, the proposed approach dynamically infers human trust based solely on time-stamped relational events communicated within human-robot teams. Adopting a longitudinal perspective on trust development and calibration in human-robot teams, the proposed strategy enables robots to actively establish, maintain, and repair human trust, rather than merely passively adapting to it. We validate the effectiveness of the proposed approach through a user study in a human-robot collaborative search-and-rescue scenario. Objective and subjective evaluations demonstrate its merits in both task execution and user acceptance over a baseline approach that does not consider trust preservation.
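For intuition only, below is a minimal sketch of the kind of computation the abstract alludes to: Bayesian updating of a scalar trust estimate from time-stamped relational events. This is not the paper's model; the event types ("accept"/"override"), the Beta-Bernoulli trust representation, and the exponential age discounting are all illustrative assumptions of ours.

```python
"""Toy sketch (our assumptions, not the paper's model): Bayesian
updating of a scalar trust estimate from time-stamped relational
events, with older events exponentially discounted by their age."""

from dataclasses import dataclass


@dataclass
class RelationalEvent:
    timestamp: float  # seconds since the session started
    kind: str         # "accept" (human accepts robot action) or "override"


def trust_posterior(events, now, half_life=60.0, alpha0=1.0, beta0=1.0):
    """Return (alpha, beta) of a Beta posterior over trust.

    Each event contributes a pseudo-count discounted by its age, so
    recent interactions dominate the estimate; this crudely mimics the
    longitudinal view of trust development described in the abstract.
    """
    alpha, beta = alpha0, beta0
    for ev in events:
        weight = 0.5 ** ((now - ev.timestamp) / half_life)  # half-life decay
        if ev.kind == "accept":
            alpha += weight   # evidence the human trusts the robot
        elif ev.kind == "override":
            beta += weight    # evidence of distrust (human intervened)
    return alpha, beta


if __name__ == "__main__":
    log = [
        RelationalEvent(10.0, "accept"),
        RelationalEvent(35.0, "override"),
        RelationalEvent(80.0, "accept"),
    ]
    a, b = trust_posterior(log, now=90.0)
    print(f"posterior mean trust = {a / (a + b):.3f}")  # point estimate in [0, 1]
```

In a shared autonomy loop, a point estimate like the posterior mean could gate the robot's autonomy level (e.g., requesting confirmation when estimated trust is low); how the paper actually couples trust inference to autonomy adjustment is specified in its method section, not here.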