Vision-language models have recently emerged as promising planners for autonomous driving, where success hinges on topology-aware reasoning over spatial structure and dynamic interactions from multimodal input. However, existing models are typically trained without supervision that explicitly encodes these relational dependencies, limiting their ability to infer from raw sensor data how agents and other traffic entities influence one another. In this work, we bridge this gap with a novel model-agnostic method that conditions language-based driving models on structured relational context in the form of traffic scene graphs. We serialize scene graphs at various abstraction levels and formats, and incorporate them into the models via structured prompt templates, enabling a systematic analysis of when and how relational supervision is most beneficial. Extensive evaluations on the public LangAuto benchmark show that scene graph conditioning of state-of-the-art approaches yields large and persistent improvements in driving performance. Notably, we observe up to a 15.6\% increase in driving score for LMDrive and 17.5\% for BEVDriver, indicating that models can better internalize and ground relational priors through scene graph-conditioned training, even without requiring scene graph input at test time. Code, fine-tuned models, and our scene graph dataset are publicly available at https://github.com/iis-esslingen/GraphPilot.
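To make the serialization idea concrete, the following is a minimal Python sketch of how a traffic scene graph might be flattened into text and placed into a structured prompt template. The node/edge schema, relation names, serialization formats, and prompt wording here are illustrative assumptions, not the paper's exact formats or API.

```python
# Hypothetical sketch: serializing a traffic scene graph into a textual
# prompt for a language-based driving model. Schema and wording are
# illustrative assumptions, not the paper's actual formats.

# A scene graph as typed nodes and directed, labeled edges.
nodes = {
    "ego": {"type": "vehicle", "lane": 2},
    "car_1": {"type": "vehicle", "lane": 2},
    "ped_1": {"type": "pedestrian", "lane": None},
    "light_1": {"type": "traffic_light", "state": "red"},
}
edges = [
    ("car_1", "in_front_of", "ego"),
    ("ped_1", "crossing_ahead_of", "ego"),
    ("light_1", "controls", "ego"),
]

def serialize_edge_list(edges):
    """Low-abstraction format: one relation triple per line."""
    return "\n".join(f"{s} --{r}--> {o}" for s, r, o in edges)

def serialize_natural_language(nodes, edges):
    """Higher-abstraction format: relations phrased as sentences."""
    templates = {
        "in_front_of": "{s} is directly in front of {o}.",
        "crossing_ahead_of": "{s} is crossing the road ahead of {o}.",
        "controls": "{s} (state: {state}) controls {o}'s lane.",
    }
    return " ".join(
        templates[r].format(s=s, o=o, state=nodes[s].get("state", ""))
        for s, r, o in edges
    )

# A structured prompt template that conditions the driving model on the
# serialized graph alongside the navigation instruction.
PROMPT_TEMPLATE = (
    "Scene graph:\n{graph}\n\n"
    "Instruction: {instruction}\n"
    "Plan the next driving action."
)

prompt = PROMPT_TEMPLATE.format(
    graph=serialize_edge_list(edges),
    instruction="Continue straight and yield where required.",
)
print(prompt)
```

Swapping `serialize_edge_list` for `serialize_natural_language` changes the abstraction level of the relational context while keeping the prompt structure fixed, which is the kind of controlled variation a systematic analysis of serialization formats would require.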