To navigate complex environments, robots must increasingly use high-dimensional visual feedback (e.g., images) for control. However, making control decisions from high-dimensional image data raises an important question: how can we prove the safety of a visual-feedback controller? Control barrier functions (CBFs) are powerful tools for certifying the safety of feedback controllers in the state-feedback setting, but they have traditionally been poorly suited to visual-feedback control because evaluating the barrier function requires predicting future observations. In this work, we address this issue by leveraging recent advances in neural radiance fields (NeRFs), which learn implicit representations of 3D scenes and can render images from previously unseen camera perspectives, to provide single-step visual foresight for a CBF-based controller. This novel combination is able to filter out unsafe actions and intervene to preserve safety. We demonstrate the effectiveness of our controller in real-time simulation experiments, where it successfully prevents the robot from taking dangerous actions.
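The safety-filtering idea can be illustrated with a minimal discrete-time CBF sketch. This is not the paper's implementation: the names `h`, `predict_next`, and `safety_filter` are hypothetical, the barrier and single-integrator dynamics are toy stand-ins for the NeRF-rendered one-step visual foresight, and the candidate-action fallback is one simple way to "intervene" when the nominal action violates the barrier condition.

```python
import numpy as np

def h(x):
    # Hypothetical barrier function: the safe set is {x : h(x) >= 0},
    # here "stay inside a circle of radius 1 around the origin".
    return 1.0 - float(np.dot(x, x))

def predict_next(x, u, dt=0.1):
    # Toy single-integrator dynamics, standing in for the single-step
    # foresight that the NeRF provides in the paper's image-based setting.
    return x + dt * u

def safety_filter(x, u_nominal, alpha=0.5, candidates=None):
    # Discrete-time CBF condition: h(x_next) >= (1 - alpha) * h(x),
    # with 0 < alpha <= 1, so the barrier value may decay but cannot
    # cross zero in one step. Keep the nominal action if it satisfies
    # the condition; otherwise intervene with the safest candidate.
    if candidates is None:
        candidates = [u_nominal, np.zeros_like(u_nominal), -u_nominal]
    threshold = (1.0 - alpha) * h(x)
    if h(predict_next(x, u_nominal)) >= threshold:
        return u_nominal
    # Intervene: pick the candidate whose predicted successor state
    # has the largest barrier value.
    return max(candidates, key=lambda u: h(predict_next(x, u)))

x = np.array([0.9, 0.0])          # state near the safe-set boundary
u_unsafe = np.array([2.0, 0.0])   # nominal action pushes outward
u = safety_filter(x, u_unsafe)    # filter overrides the unsafe action
```

Here the nominal action would drive the state outside the unit circle, so the filter rejects it and returns the candidate that most increases the barrier value, mirroring the abstract's "filter out unsafe actions and intervene to preserve safety."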