Multimodal Large Language Models (MLLMs) show promising results as decision-making engines for embodied agents operating in complex physical environments. However, existing benchmarks often prioritize high-level planning or spatial reasoning, leaving the fine-grained action intelligence required for embodied physical interaction underexplored. To address this gap, we introduce CFG-Bench, a new benchmark designed to systematically evaluate this crucial capability. CFG-Bench consists of 1,368 curated videos paired with 19,562 three-modality question-answer pairs targeting four cognitive abilities: 1) Physical Interaction, 2) Temporal-Causal Relation, 3) Intentional Understanding, and 4) Evaluative Judgment. Together, these dimensions provide a systematic framework for assessing a model's ability to translate visual observations into actionable knowledge, moving beyond surface-level recognition. Our comprehensive evaluation on CFG-Bench reveals that leading MLLMs struggle to produce detailed instructions for physical interactions and exhibit profound limitations in higher-order reasoning about intention and evaluation. Moreover, supervised fine-tuning (SFT) on our data demonstrates that teaching an MLLM to articulate fine-grained actions translates directly into significant performance gains on established embodied benchmarks. Our analysis highlights these limitations and offers insights for developing more capable and grounded embodied agents. Project page: \href{https://cfg-bench.github.io/}{https://cfg-bench.github.io/}.
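To make the benchmark's structure concrete, the sketch below shows one plausible way to represent a CFG-Bench item in code. This is a minimal illustration under stated assumptions: the abstract specifies only the counts (1,368 videos, 19,562 three-modality question-answer pairs) and the four cognitive abilities, so the field names, types, and the exact encoding of the three modalities are our own hypothetical choices, not the released schema.

\begin{verbatim}
from dataclasses import dataclass
from typing import List

# Hypothetical record layout for one CFG-Bench QA item. Field names and
# the modality encoding are illustrative assumptions; only the four
# cognitive abilities and the overall counts come from the abstract.
@dataclass
class CFGBenchItem:
    video_id: str        # one of the 1,368 curated videos
    ability: str         # "Physical Interaction", "Temporal-Causal Relation",
                         # "Intentional Understanding", or "Evaluative Judgment"
    question: str        # natural-language question about the clip
    answer: str          # reference answer / fine-grained instruction
    modalities: List[str]  # the three modalities paired with this QA item

def by_ability(items: List[CFGBenchItem], ability: str) -> List[CFGBenchItem]:
    """Filter the 19,562 QA pairs down to one cognitive dimension."""
    return [item for item in items if item.ability == ability]
\end{verbatim}

A per-ability split like \texttt{by\_ability} would support the kind of dimension-wise evaluation the abstract reports (e.g., isolating Intentional Understanding to probe higher-order reasoning).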