Reinforcement learning (RL) yields substantial improvements in the downstream task performance of large language models (LLMs) and in their alignment with human values. Surprisingly, such large gains result from updating only a small subnetwork comprising just 5 to 30 percent of the parameters, with the rest left effectively unchanged. We refer to this phenomenon as parameter update sparsity induced by RL. It is observed across all 7 widely used RL algorithms (e.g., PPO, GRPO, DPO) and all 10 LLMs from different families in our experiments. This sparsity is intrinsic and occurs without any explicit sparsity-promoting regularization or architectural constraints. Finetuning the subnetwork alone recovers the test accuracy and, remarkably, produces a model nearly identical to the one obtained via full finetuning. The subnetworks obtained from different random seeds, training data, and even RL algorithms show substantially greater overlap than expected by chance. Our analysis suggests that this sparsity is not due to updating only a subset of layers; instead, nearly all parameter matrices receive similarly sparse updates. Moreover, the updates to almost all parameter matrices are nearly full-rank, suggesting that RL updates a small subset of parameters that nevertheless span almost the full subspaces the parameter matrices can represent. We conjecture that this update sparsity is primarily attributable to training on data near the policy distribution, whereas techniques that encourage the policy to remain close to the pretrained model, such as KL regularization and gradient clipping, have limited impact.
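To make the quantities above concrete, the following is a minimal sketch (not taken from the paper's code) of how update sparsity and per-matrix update rank could be measured by diffing a pretrained checkpoint against its RL-finetuned counterpart. The checkpoint paths are placeholders, and the exact-zero comparison assumes both checkpoints are stored at the same precision.

```python
# Sketch: measure RL-induced parameter update sparsity and per-matrix update rank
# by comparing a pretrained checkpoint with its RL-finetuned version.
# Model paths are hypothetical placeholders.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("base-model-path", torch_dtype=torch.float32)
rl   = AutoModelForCausalLM.from_pretrained("rl-finetuned-path", torch_dtype=torch.float32)

total, changed = 0, 0
for (name, p0), (_, p1) in zip(base.named_parameters(), rl.named_parameters()):
    delta = p1.data - p0.data
    mask = delta != 0                      # subnetwork: parameters actually updated by RL
    total += mask.numel()
    changed += mask.sum().item()
    if delta.dim() == 2:                   # weight matrices: rank of the update
        rank = torch.linalg.matrix_rank(delta).item()
        max_rank = min(delta.shape)
        print(f"{name}: {mask.float().mean():.1%} updated, "
              f"update rank {rank}/{max_rank}")

print(f"overall update sparsity: {1 - changed / total:.1%} of parameters unchanged")
```

In this sketch the "subnetwork" is simply the mask of entries whose values differ between the two checkpoints; overlap between subnetworks from different seeds or algorithms could be estimated by comparing such masks elementwise.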