Recent advances in deep learning and the availability of high-quality real-world driving datasets have propelled end-to-end autonomous driving (E2E AD). Despite this progress, relying solely on real-world data limits the variety of driving scenarios available for training. Synthetic scenario generation has emerged as a promising way to enrich training-data diversity; however, its application to E2E AD models remains largely unexplored, primarily because synthetic scenarios lack a designated ego vehicle and the associated sensor inputs, such as camera or LiDAR, that real-world scenarios typically provide. To address this gap, we introduce SynAD, the first framework designed to enhance real-world E2E AD models using synthetic data. Our method designates the agent with the most comprehensive driving information as the ego vehicle in a multi-agent synthetic scenario. We further project path-level scenarios onto maps and employ a newly developed Map-to-BEV Network to derive bird's-eye-view features without relying on sensor inputs. Finally, we devise a training strategy that effectively integrates these map-based synthetic data with real driving data. Experimental results demonstrate that SynAD effectively integrates all components and notably enhances safety performance. By bridging synthetic scenario generation and E2E AD, SynAD paves the way for more comprehensive and robust autonomous driving models.
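The ego-designation step described above can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: the abstract does not specify how "most comprehensive driving information" is measured, so the score below (trajectory length plus count of annotated attributes) and all names (`Agent`, `information_score`, `select_ego`) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    # Hypothetical container for one agent in a multi-agent synthetic scenario.
    agent_id: str
    trajectory: list                                   # sequence of (x, y) waypoints
    attributes: dict = field(default_factory=dict)     # e.g. per-step heading, speed

def information_score(agent: Agent) -> float:
    # Proxy for "comprehensive driving information": trajectory coverage
    # plus number of annotated attribute channels. The paper's actual
    # criterion is not stated in the abstract; this is an assumption.
    return len(agent.trajectory) + len(agent.attributes)

def select_ego(agents: list) -> Agent:
    # Designate the most informative agent as the ego vehicle.
    return max(agents, key=information_score)
```

Once an ego is chosen, the remaining agents' paths and the map context can be rendered around it, which is what allows BEV features to be derived without sensor data.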