The induction head mechanism is part of the computational circuits for in-context learning (ICL) that enable large language models (LLMs) to adapt to new tasks without fine-tuning. Most existing work focuses on explaining the training dynamics behind the acquisition of this powerful mechanism. However, how the model coordinates in-context information over long contexts with the global knowledge acquired during pretraining remains poorly understood. This paper investigates, from the viewpoint of associative memory, how a two-layer transformer thoroughly captures in-context information and balances it with pretrained bigram knowledge in next-token prediction. We theoretically analyze the representation of the weight matrices in the attention layers and the resulting logits when the transformer is given prompts generated by a bigram model. In the experiments, we design specific prompts to evaluate whether the outputs of the trained transformer align with the theoretical results.
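To make the evaluation setting concrete, the following is a minimal sketch of how a prompt could be sampled from a bigram (first-order Markov) model, in the spirit of the bigram-generated prompts described above. The vocabulary size, prompt length, random seed, and helper names are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# Illustrative sketch (not the authors' code): sample a prompt from a random
# bigram model, mirroring the setting where the transformer is evaluated on
# bigram-generated sequences.
rng = np.random.default_rng(0)
vocab_size, prompt_len = 16, 64  # assumed values for illustration

# Row i of `bigram` is the conditional distribution p(next token | current token = i).
bigram = rng.dirichlet(alpha=np.ones(vocab_size), size=vocab_size)

def sample_prompt(length: int) -> list[int]:
    """Sample a token sequence from the bigram model."""
    tokens = [int(rng.integers(vocab_size))]
    for _ in range(length - 1):
        tokens.append(int(rng.choice(vocab_size, p=bigram[tokens[-1]])))
    return tokens

prompt = sample_prompt(prompt_len)
# Probe intuition: if a pair (a, b) appears earlier in the prompt and `a`
# reappears at the end, an induction-head mechanism should raise the logit of
# `b`, whereas pretrained (global) bigram knowledge favors the statistics of
# `a` seen during pretraining.
print(prompt)
```

Such prompts let one compare the in-context continuation (copying from an earlier occurrence) against the continuation predicted by pretrained bigram statistics, which is the balance the paper analyzes.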