In this study, we address the problem of open-vocabulary mobile manipulation, in which a robot must carry a wide range of objects to receptacles specified by free-form natural language instructions. The task is challenging because it requires understanding both visual semantics and the affordances of manipulation actions. To tackle these challenges, we propose Affordance RAG, a zero-shot hierarchical multimodal retrieval framework that constructs an Affordance-Aware Embodied Memory from pre-explored images. The model retrieves candidate targets based on regional and visual semantics and reranks them with affordance scores, allowing the robot to identify manipulation options that are likely to be executable in real-world environments. Our method outperformed existing approaches in retrieval performance for mobile manipulation instructions in large-scale indoor environments. Furthermore, in real-world experiments in which the robot performed mobile manipulation based on free-form instructions, the proposed method achieved a task success rate of 85%, outperforming existing methods in both retrieval performance and overall task success.
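To make the retrieve-then-rerank idea concrete, the following is a minimal sketch, not the paper's implementation: it assumes pre-computed, L2-normalized image embeddings (e.g., from a vision-language model) and per-image affordance scores in [0, 1], retrieves the top-k candidates by cosine similarity to a text-query embedding, and reranks them by a weighted combination with the affordance score. The names `MemoryEntry`, `retrieve_and_rerank`, and the weight `alpha` are illustrative assumptions.

```python
# Sketch of a retrieve-then-rerank pipeline over an embodied memory.
# Assumptions (not from the paper): embeddings are pre-computed and
# L2-normalized, affordance scores lie in [0, 1], and the final score
# is a weighted sum controlled by a hypothetical weight `alpha`.
from dataclasses import dataclass
import numpy as np

@dataclass
class MemoryEntry:
    image_id: str
    embedding: np.ndarray   # normalized visual/regional embedding
    affordance: float       # estimated executability of manipulation

def retrieve_and_rerank(query_emb: np.ndarray,
                        memory: list[MemoryEntry],
                        k: int = 10,
                        alpha: float = 0.5) -> list[MemoryEntry]:
    """Return the top-k memory entries, reranked by affordance."""
    q = query_emb / np.linalg.norm(query_emb)
    # Stage 1: semantic retrieval by cosine similarity to the query.
    sims = np.array([float(q @ e.embedding) for e in memory])
    top = np.argsort(-sims)[:k]
    # Stage 2: rerank candidates by mixing similarity with affordance.
    scored = sorted(
        ((alpha * sims[i] + (1 - alpha) * memory[i].affordance, memory[i])
         for i in top),
        key=lambda t: -t[0],
    )
    return [entry for _, entry in scored]
```

In this sketch, `alpha` trades off semantic match against estimated executability; candidates that match the instruction but afford no feasible grasp or placement are pushed down the ranking.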