Partial perception deficits can compromise autonomous vehicle safety by disrupting environmental understanding. Existing protocols typically default to entirely risk-avoidant actions, such as immediate stops, which are detrimental to navigation goals and inflexible in rare driving scenarios. Yet when the risk is minor, halting the vehicle may be unnecessary, and more adaptive responses are preferable. In this paper, we propose LLM-RCO, a framework that leverages large language models (LLMs) to integrate human-like driving commonsense into autonomous systems facing perception deficits. LLM-RCO features four key modules that interact with the dynamic driving environment: hazard inference, a short-term motion planner, an action condition verifier, and a safety constraint generator, enabling proactive and context-aware actions under such challenging conditions. To enhance the driving decision-making of LLMs, we construct DriveLM-Deficit, a dataset of 53,895 video clips in which safety-critical objects are missing from perception, annotated for fine-tuning LLMs on hazard detection and motion planning. Extensive experiments under adverse driving conditions in the CARLA simulator demonstrate that LLM-RCO favors proactive maneuvers over purely risk-averse actions in perception deficit scenarios, underscoring its value in boosting the resilience of autonomous driving against perception loss.