Memory Mosaics [Zhang et al., 2025], networks of associative memories, have demonstrated appealing compositional and in-context learning capabilities on medium-scale networks (GPT-2 scale) and small synthetic datasets. This work shows that these favorable properties remain when memory mosaics are scaled to large-language-model sizes (llama-8B scale) and trained on real-world datasets. To this end, we scale memory mosaics to 10B parameters, train them on one trillion tokens, and introduce a couple of architectural modifications ("Memory Mosaics v2"). We then assess their capabilities along three evaluation dimensions: training-knowledge storage, new-knowledge storage, and in-context learning. Throughout the evaluation, Memory Mosaics v2 match transformers on the learning of training knowledge (first dimension) and significantly outperform transformers on carrying out new tasks at inference time (second and third dimensions). These improvements cannot be easily replicated by simply increasing the training data for transformers: a Memory Mosaics v2 model trained on one trillion tokens still performs better on these tasks than a transformer trained on eight trillion tokens.