Vision-Language-Action (VLA) models have emerged as a promising framework for building generalist robots that can perceive, reason, and act in the real world. These models typically build upon pretrained Vision-Language Models (VLMs), which excel at semantic understanding thanks to large-scale image and text pretraining. However, existing VLMs generally lack precise spatial understanding, as they are primarily tuned on 2D image-text pairs without 3D supervision. To address this limitation, recent approaches incorporate explicit 3D inputs such as point clouds or depth maps, but this requires additional depth sensors or pretrained depth estimation models, which can produce noisy or unreliable estimates. In contrast, our work introduces a plug-and-play module that implicitly incorporates 3D geometry features into VLA models by leveraging an off-the-shelf visual geometry foundation model. This integration provides the model with depth-aware visual representations, improving its ability to understand the geometric structure of the scene and the spatial relationships among objects from RGB images alone. We evaluate our method on a set of spatially challenging tasks in both simulation and the real world. Extensive evaluations show that our method significantly improves the performance of state-of-the-art VLA models across diverse scenarios.
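To make the idea of a plug-and-play geometry module concrete, the sketch below shows one plausible way to fuse features from a frozen visual geometry foundation model into a VLA backbone's visual token stream via gated cross-attention. This is a minimal illustration under assumed interfaces, not the paper's actual implementation; the class name `GeometryFusionAdapter`, the feature dimensions, and the zero-initialized gate are all illustrative assumptions.

```python
import torch
import torch.nn as nn


class GeometryFusionAdapter(nn.Module):
    """Hypothetical adapter that injects geometry features from a frozen
    visual geometry foundation model into the visual tokens of a VLA/VLM
    backbone. All names and dimensions here are illustrative assumptions."""

    def __init__(self, vlm_dim: int = 1024, geo_dim: int = 768, num_heads: int = 8):
        super().__init__()
        # Project geometry features to the VLM token width.
        self.geo_proj = nn.Linear(geo_dim, vlm_dim)
        self.cross_attn = nn.MultiheadAttention(vlm_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(vlm_dim)
        # Zero-initialized gate: the adapter starts as an identity mapping,
        # preserving the pretrained VLM behavior before fine-tuning.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, vlm_tokens: torch.Tensor, geo_tokens: torch.Tensor) -> torch.Tensor:
        # vlm_tokens: (B, N, vlm_dim) visual tokens from the pretrained VLM encoder.
        # geo_tokens: (B, M, geo_dim) patch features from the frozen geometry model.
        geo = self.geo_proj(geo_tokens)
        fused, _ = self.cross_attn(query=self.norm(vlm_tokens), key=geo, value=geo)
        # Gated residual fusion keeps RGB-only semantics intact at initialization.
        return vlm_tokens + torch.tanh(self.gate) * fused


if __name__ == "__main__":
    adapter = GeometryFusionAdapter()
    vlm_tokens = torch.randn(2, 256, 1024)  # stand-in for VLM visual tokens
    geo_tokens = torch.randn(2, 196, 768)   # stand-in for geometry-model features
    out = adapter(vlm_tokens, geo_tokens)
    print(out.shape)  # torch.Size([2, 256, 1024])
```

The zero-initialized gate is a common design choice for adapters of this kind: it lets the geometry branch be added to an already-trained VLA model without disturbing its behavior, with the fusion strength learned during fine-tuning.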