Simulators can generate virtually unlimited driving data, yet imitation learning policies trained in simulation still struggle to achieve robust closed-loop performance. Motivated by this gap, we empirically study how misalignment between privileged expert demonstrations and sensor-based student observations limits the effectiveness of imitation learning. Specifically, experts have significantly higher visibility (e.g., they ignore occlusions) and far lower uncertainty (e.g., they know other vehicles' actions), making them difficult to imitate reliably. Furthermore, navigational intent (i.e., the route to follow) is under-specified at test time, since student models receive only a single target point. We demonstrate that these asymmetries measurably limit driving performance in CARLA and offer practical interventions to address them. After careful modifications that narrow the gaps between expert and student, our TransFuser v6 (TFv6) student policy achieves a new state of the art on all major publicly available CARLA closed-loop benchmarks, reaching 95 DS on Bench2Drive and more than doubling the prior best performance on Longest6 v2 and Town13. Additionally, by integrating perception supervision from our dataset into a shared sim-to-real pipeline, we show consistent gains on the NAVSIM and Waymo Vision-Based End-to-End Driving benchmarks. Our code, data, and models are publicly available at https://github.com/autonomousvision/lead.