Modern large reasoning models demonstrate impressive problem-solving capabilities by employing sophisticated reasoning strategies. However, they often struggle to balance efficiency and effectiveness, frequently generating unnecessarily lengthy reasoning chains for simple problems. In this work, we propose AdaCtrl, a novel framework that supports both difficulty-aware adaptive allocation of the reasoning budget and explicit user control over reasoning depth. AdaCtrl dynamically adjusts its reasoning length based on self-assessed problem difficulty, while also allowing users to manually control the budget to prioritize either efficiency or effectiveness. This is achieved through a two-stage training pipeline: an initial cold-start fine-tuning phase that instills the ability to self-assess difficulty and adjust the reasoning budget accordingly, followed by a difficulty-aware reinforcement learning (RL) stage that refines the model's adaptive reasoning strategies and calibrates its difficulty assessments to its evolving capabilities during online training. To enable intuitive user interaction, we design explicit length-triggered tags that serve as a natural interface for budget control. Empirical results show that AdaCtrl adapts its reasoning length to the estimated difficulty. Compared to a standard training baseline that also incorporates fine-tuning and RL, it improves performance while reducing response length by 10.06% and 12.14% on the more challenging AIME2024 and AIME2025 datasets, which require elaborate reasoning, and by 62.05% and 91.04% on the MATH500 and GSM8K datasets, where more concise responses are sufficient. Furthermore, AdaCtrl enables precise user control over the reasoning budget, allowing responses to be tailored to specific needs.
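To make the tag-based interface concrete, the following is a minimal sketch of how length-triggered budget tags could be wired into a prompting pipeline. The tag strings (`[Easy]`/`[Hard]`), the `generate` callable, and the prompt layout are illustrative assumptions, not the paper's actual format: the key point is that a user-supplied tag forces a budget, while omitting the tag leaves the model to self-assess difficulty and choose its own.

```python
# A minimal sketch of the length-triggered tag interface described in the
# abstract. The concrete tag strings ("[Easy]"/"[Hard]") and the `generate`
# callable are hypothetical placeholders for illustration; the paper's actual
# tag format and decoding setup may differ.

from typing import Callable, Optional

EASY_TAG = "[Easy]"   # assumed tag: request a short, concise response
HARD_TAG = "[Hard]"   # assumed tag: allow an elaborate reasoning chain

def build_prompt(question: str, budget_tag: Optional[str] = None) -> str:
    """Prepend an explicit budget tag when the user wants manual control.

    If no tag is given, the model is expected to emit its own tag first,
    i.e., to self-assess difficulty and pick a budget (adaptive mode).
    """
    if budget_tag is not None:
        return f"{budget_tag} {question}"
    return question

def answer(question: str,
           generate: Callable[[str], str],
           budget_tag: Optional[str] = None) -> str:
    """Run generation in either user-controlled or adaptive-budget mode."""
    return generate(build_prompt(question, budget_tag))

if __name__ == "__main__":
    # Stand-in generator so the sketch runs end to end.
    echo = lambda prompt: f"<model output for: {prompt}>"
    print(answer("What is 2 + 2?", echo, budget_tag=EASY_TAG))  # user forces brevity
    print(answer("AIME-style problem ...", echo))               # model self-assesses
```

Because the tag is just text at the start of the sequence, the same interface serves both training (tags supervised during cold-start fine-tuning, then calibrated by difficulty-aware RL) and inference (tags optionally injected by the user).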