Large language models (LLMs) remain broadly open and highly steerable: they imitate at scale, accept arbitrary system prompts, and readily adopt multiple personae. By analogy to human development, we hypothesize that progress toward artificial general intelligence (AGI) involves a lock-in phase: a transition from open imitation to identity consolidation, in which goal structures, refusals, preferences, and internal representations become comparatively stable and resistant to external steering. We formalize this phase, link it to known phenomena in learning dynamics, and propose operational metrics for detecting its onset. Experimentally, we demonstrate that while behavioral consolidation is rapid and non-linear, its side effects on general capabilities are not monolithic. Our results reveal a spectrum of outcomes: performance trade-offs in small models, largely cost-free adoption in mid-scale models, and transient instabilities in large, quantized models. We argue that such consolidation is both a prerequisite for AGI-level reliability and a critical control point for safety: identities can be deliberately engineered for reliability, yet may also emerge spontaneously during scaling, potentially hardening unpredictable goals and behaviors.