Reasoning ability has become a defining capability of Large Language Models (LLMs), with Reinforcement Learning with Verifiable Rewards (RLVR) emerging as a key paradigm to enhance it. However, RLVR training often suffers from policy entropy collapse, where the policy becomes overly deterministic, hindering exploration and limiting reasoning performance. While entropy regularization is a common remedy, its effectiveness is highly sensitive to the choice of a fixed coefficient, making it unstable across tasks and models. In this work, we revisit entropy regularization in RLVR and argue that its potential has been largely underestimated. Our analysis shows that (i) tasks of varying difficulty demand distinct exploration intensities, and (ii) balanced exploration may require the policy entropy to be maintained within a moderate range below its initial level. Therefore, we propose Adaptive Entropy Regularization (AER), a framework that dynamically balances exploration and exploitation via three components: difficulty-aware coefficient allocation, initial-anchored target entropy, and dynamic global coefficient adjustment. Experiments on multiple mathematical reasoning benchmarks show that AER consistently outperforms baselines, improving both reasoning accuracy and exploration capability.
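To make the three components concrete, the following is a minimal sketch of how such an adaptive coefficient update could be wired together: a target entropy anchored to a fraction of the initial entropy, a global coefficient nudged toward that target, and a difficulty-aware split that gives harder prompt buckets a larger share of the entropy bonus. All names (update_entropy_coeffs, target_ratio, lr, the use of per-bucket pass rates as the difficulty signal) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def update_entropy_coeffs(coeffs, entropies, init_entropy,
                          difficulties, target_ratio=0.8, lr=0.01):
    """Sketch of an adaptive entropy-coefficient update (hypothetical).

    coeffs:        current per-difficulty-bucket entropy coefficients (array)
    entropies:     measured policy entropy per bucket (array)
    init_entropy:  policy entropy at the start of training (scalar)
    difficulties:  empirical pass rates per bucket in [0, 1]; lower = harder
    target_ratio:  keep entropy near this fraction of its initial value
                   (an assumed choice, not the paper's exact rule)
    """
    # Initial-anchored target entropy: hold entropy in a moderate band
    # below its starting level instead of letting it collapse toward zero.
    target_entropy = target_ratio * init_entropy

    # Dynamic global coefficient adjustment: strengthen the entropy bonus
    # when average entropy undershoots the target, relax it when it overshoots.
    global_err = target_entropy - entropies.mean()
    global_coeff = max(0.0, coeffs.mean() + lr * global_err)

    # Difficulty-aware coefficient allocation: harder buckets (low pass rate)
    # receive a larger share of the bonus to encourage exploration there.
    weights = 1.0 - difficulties
    weights = weights / (weights.sum() + 1e-8)
    return global_coeff * len(coeffs) * weights


# Example usage with three difficulty buckets (hard, medium, easy).
coeffs = np.array([1e-3, 1e-3, 1e-3])
entropies = np.array([0.35, 0.42, 0.50])
difficulties = np.array([0.2, 0.5, 0.8])  # pass rates per bucket
new_coeffs = update_entropy_coeffs(coeffs, entropies,
                                   init_entropy=0.6,
                                   difficulties=difficulties)
```

The sketch is only meant to show how the three components compose: the per-bucket coefficients average to the globally adjusted value, so the difficulty-aware allocation redistributes exploration pressure without changing the overall regularization strength.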