Large reasoning models exhibit long chain-of-thought reasoning with strategies such as backtracking and self-correction, though recent studies suggest that these abilities typically require additional training. We first investigate whether such behaviors can be elicited without any training. To this end, we propose a decoding-time approach, ThinkLogit, which uses logit arithmetic to adapt a large non-reasoning target model for long reasoning, with a substantially smaller reasoning model serving as the guider. We then show that performance can be further boosted by training the guider model with preference optimization over correct/incorrect reasoning pairs sampled from both the target and guider models, a setup we refer to as ThinkLogit-DPO. Our experiments demonstrate that ThinkLogit and ThinkLogit-DPO achieve relative improvements in average accuracy of 24.5% and 29.1%, respectively, across five reasoning benchmarks when Qwen2.5-32B is guided by R1-Distill-Qwen-1.5B, a model 21x smaller. Moreover, we find that ThinkLogit remains effective when the guider and target come from different model families. It is also orthogonal to post-training methods for small models: guiders improved through supervised distillation or reinforcement learning can be plugged in directly to yield stronger large models, offering a practical path to unlock long reasoning in large-scale models without costly post-training.
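As a concrete illustration (a minimal sketch, not the paper's exact formulation), logit arithmetic of this kind is commonly instantiated as a proxy-tuning-style combination at each decoding step, where the small reasoning guider and a small non-reasoning reference model supply an additive offset to the large target model's logits; the reference model, the symbols $z_t$, and the weight $\alpha$ below are illustrative assumptions rather than quantities defined in the abstract:

% Hypothetical decoding rule (assumed, proxy-tuning style):
%   z_t^target : logits of the large non-reasoning target model at step t
%   z_t^guider : logits of the small reasoning guider
%   z_t^base   : logits of an assumed small non-reasoning reference model
%   alpha      : assumed guidance weight
\[
  \tilde{z}_t = z_t^{\text{target}} + \alpha\left(z_t^{\text{guider}} - z_t^{\text{base}}\right),
  \qquad
  p(x_t \mid x_{<t}) = \mathrm{softmax}(\tilde{z}_t).
\]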