Symbol grounding (Harnad, 1990) describes how symbols such as words acquire their meanings by connecting to real-world sensorimotor experiences. Recent work has shown preliminary evidence that grounding may emerge in (vision-)language models trained at scale without explicit grounding objectives. Yet the specific loci of this emergence and the mechanisms that drive it remain largely unexplored. To address this gap, we introduce a controlled evaluation framework that systematically traces how symbol grounding arises within a model's internal computations, using mechanistic and causal analysis. Our findings show that grounding concentrates in middle-layer computations and is implemented through an aggregation mechanism, in which attention heads aggregate the environmental ground to support the prediction of linguistic forms. This phenomenon replicates in multimodal dialogue and across architectures (Transformers and state-space models), but not in unidirectional LSTMs. Our results provide behavioral and mechanistic evidence that symbol grounding can emerge in language models, with practical implications for predicting and potentially controlling the reliability of generation.
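To make the kind of causal analysis mentioned above concrete, the following is a minimal activation-patching sketch: run the model on a context whose environmental ground is intact, cache each layer's hidden states, corrupt the ground, and splice the clean states back in layer by layer to see where the prediction recovers. This is a generic illustration only; ToyLM, the vocabulary size, the perturbed token, and full-sequence (rather than per-position or per-head) patching are hypothetical simplifications, not the paper's actual models, data, or procedure.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB, D_MODEL, N_LAYERS, SEQ_LEN = 100, 64, 6, 8

class ToyLM(nn.Module):
    """Hypothetical stand-in model; not the paper's architecture."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
            for _ in range(N_LAYERS))
        self.unembed = nn.Linear(D_MODEL, VOCAB)

    def forward(self, ids, patch=None):
        # patch: optional (layer_idx, cached_states) spliced into this run
        h = self.embed(ids)
        for i, layer in enumerate(self.layers):
            h = layer(h)
            if patch is not None and patch[0] == i:
                h = patch[1]  # overwrite this layer's output with the clean cache
        return self.unembed(h)

    def hidden_states(self, ids):
        # Cache every layer's output from a "clean" (grounded) run.
        h, states = self.embed(ids), []
        for layer in self.layers:
            h = layer(h)
            states.append(h)
        return states

model = ToyLM().eval()
clean = torch.randint(0, VOCAB, (1, SEQ_LEN))  # context with intact ground
corrupt = clean.clone()
corrupt[0, 2] = 7                              # perturb the environmental ground
target = clean[0, -1]                          # form whose prediction we trace

with torch.no_grad():
    clean_states = model.hidden_states(clean)
    base = model(corrupt)[0, -1, target]       # corrupted-run baseline logit
    for i in range(N_LAYERS):
        patched = model(corrupt, patch=(i, clean_states[i]))[0, -1, target]
        # Large recovery at middle layers would mirror the finding that
        # grounding concentrates in middle-layer computations.
        print(f"layer {i}: recovered logit delta = {(patched - base).item():+.3f}")
```

In a real analysis one would patch individual positions or attention heads rather than whole layers, which is how aggregation by specific heads could be localized.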