This document is a follow-up to our previous paper dedicated to a vectorized derivation of backpropagation in CNNs. Following the same principles and notations already put in place there, we now focus on transformer-based next-token-prediction architectures. To this end, we apply our lightweight index-free methodology to new types of layers such as embedding, multi-head self-attention, and layer normalization. In addition, we provide gradient expressions for LoRA layers to illustrate parameter-efficient fine-tuning. Why bother with manual backpropagation when so many tools do it automatically? Any gap in understanding of how values propagate forward becomes evident when attempting to differentiate the loss function. By working through the backward pass manually, we gain a deeper intuition for how each operation influences the final output. A complete PyTorch implementation of a minimalistic GPT-like network is also provided, along with analytical expressions for all of its gradient updates.