Vision-Language-Action (VLA) models aim to predict robotic actions based on visual observations and language instructions. Existing approaches require fine-tuning pre-trained vision-language models (VLMs) because visual and language features are fed independently into downstream policies, degrading the pre-trained semantic alignment. We propose OTTER, a novel VLA architecture that leverages these existing alignments through explicit, text-aware visual feature extraction. Instead of processing all visual features, OTTER selectively extracts and passes to the policy transformer only the task-relevant visual features that are semantically aligned with the language instruction. This allows OTTER to keep the pre-trained vision-language encoders frozen, thereby preserving and utilizing the rich semantic understanding learned from large-scale pre-training and enabling strong zero-shot generalization. In simulation and real-world experiments, OTTER significantly outperforms existing VLA models, demonstrating strong zero-shot generalization to novel objects and environments. Video, code, checkpoints, and dataset: https://ottervla.github.io/.
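To make the abstract's core idea concrete, the following is a minimal sketch of text-aware visual feature extraction, not OTTER's exact implementation: frozen language and vision encoders produce tokens, cross-attention with the language tokens as queries pools only the instruction-relevant visual content, and a small policy transformer maps the result to actions. All module names, dimensions, and the attention-based selection are illustrative assumptions.

```python
# Illustrative sketch (assumed, not the paper's code): text-aware pooling of
# frozen visual features, followed by a policy transformer that predicts actions.
import torch
import torch.nn as nn

class TextAwareVisualExtractor(nn.Module):
    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        # Cross-attention: language tokens attend over visual patch tokens,
        # so only instruction-relevant visual content is passed downstream.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, text_tokens: torch.Tensor, visual_tokens: torch.Tensor) -> torch.Tensor:
        # text_tokens:   (B, T, D) outputs of a frozen language encoder
        # visual_tokens: (B, P, D) patch outputs of a frozen vision encoder
        # Returns (B, T, D): visual features pooled per language token.
        pooled, _ = self.cross_attn(query=text_tokens, key=visual_tokens, value=visual_tokens)
        return pooled

class PolicyTransformer(nn.Module):
    def __init__(self, dim: int = 512, action_dim: int = 7, depth: int = 4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=depth)
        self.action_head = nn.Linear(dim, action_dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, D) text-aware visual features (plus any state tokens)
        h = self.backbone(tokens)
        return self.action_head(h.mean(dim=1))  # (B, action_dim)

# Usage: the vision-language encoders stay frozen; only the extractor and the
# policy transformer would be trained. Random tensors stand in for encoder outputs.
extractor, policy = TextAwareVisualExtractor(), PolicyTransformer()
text = torch.randn(2, 16, 512)     # stand-in for frozen text tokens
vision = torch.randn(2, 196, 512)  # stand-in for frozen visual patch tokens
actions = policy(extractor(text, vision))  # shape: (2, 7)
```

The design choice this sketch illustrates is that alignment happens before the policy: because selection is conditioned on the instruction, the encoders never need fine-tuning, which is what preserves the pre-trained semantics.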