As reasoning models scale rapidly, the essential role of multimodality in human cognition has come into sharp relief, driving a growing need to probe vision-centric cognitive behaviors. Yet existing multimodal benchmarks either overemphasize textual reasoning or fail to capture vision-centric cognitive behaviors systematically, leaving the cognitive capacity of multimodal large language models (MLLMs) insufficiently assessed. To address this limitation, we introduce MME-CC (Multi-Modal Evaluation benchmark of Cognitive Capacity), a vision-grounded benchmark that organizes 11 representative reasoning tasks into three fundamental categories of visual information (spatial, geometric, and knowledge-based reasoning) and provides fine-grained analyses of MLLMs' cognitive capacity along these dimensions. Based on MME-CC, we conduct extensive experiments on 16 representative MLLMs. Our study reveals that closed-source models currently lead overall (e.g., 42.66 for Gemini-2.5-Pro vs. 30.45 for GLM-4.5V), while spatial and geometric reasoning remain broadly weak (no more than 30%). We further identify common error patterns, including orientation mistakes, fragile cross-view identity persistence, and poor adherence to counterfactual instructions, and observe that Chain-of-Thought reasoning typically follows a three-stage process (extract -> reason -> verify) with heavy reliance on visual extraction. We hope this work catalyzes a shift toward treating the cognitive capacity of MLLMs as central to both evaluation and model design.