We propose a human-centered safety filter (HCSF) for shared autonomy that significantly enhances system safety without compromising human agency. Our HCSF is built on a neural safety value function, which we first learn scalably through black-box interactions and then use at deployment to enforce a novel state-action control barrier function (Q-CBF) safety constraint. Since this Q-CBF safety filter requires no knowledge of the system dynamics for either synthesis or runtime safety monitoring and intervention, our method applies readily to complex, black-box shared autonomy systems. Notably, our HCSF's CBF-based interventions modify the human's actions minimally and smoothly, avoiding the abrupt, last-moment corrections delivered by many conventional safety filters. We validate our approach in a comprehensive in-person user study using Assetto Corsa, a high-fidelity car racing simulator with black-box dynamics, to assess robustness in "driving on the edge" scenarios. We compare both trajectory data and drivers' perceptions of our HCSF assistance against unassisted driving and a conventional safety filter. Experimental results show that 1) compared to having no assistance, our HCSF improves both safety and user satisfaction without compromising human agency or comfort, and 2) relative to a conventional safety filter, our proposed HCSF boosts human agency, comfort, and satisfaction while maintaining robustness.
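To make the filtering mechanism concrete, the sketch below gives one plausible form of a state-action CBF constraint built on a learned safety value function and the accompanying minimal-intervention projection. The specific symbols ($Q_\theta$, $\alpha$, $u^{\mathrm{H}}_t$) and the discrete-time CBF decay form are illustrative assumptions, not the paper's exact formulation.

% Hedged sketch: a Q-CBF filter on a learned safety value function Q_theta(s, a).
% All symbols are illustrative assumptions.
\begin{align}
  % Discrete-time CBF-style condition: the learned safety value of the applied
  % state-action pair may not decay faster than a rate set by alpha.
  Q_\theta\bigl(s_t, a_t\bigr) &\;\ge\; (1 - \alpha)\, Q_\theta\bigl(s_{t-1}, a_{t-1}\bigr),
  \qquad \alpha \in (0, 1], \\[4pt]
  % Minimal-intervention filter: project the human action u^H_t onto the set of
  % actions satisfying the Q-CBF constraint, so safe human inputs pass unchanged.
  a_t &\;=\; \arg\min_{a \in \mathcal{A}} \;\bigl\| a - u^{\mathrm{H}}_t \bigr\|^2
  \quad \text{s.t.} \quad
  Q_\theta\bigl(s_t, a\bigr) \;\ge\; (1 - \alpha)\, Q_\theta\bigl(s_{t-1}, a_{t-1}\bigr).
\end{align}

Under this reading, whenever the human's action already satisfies the constraint the projection returns it unmodified, which is what yields the minimal, smooth interventions described above.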