Although LLMs exhibit strong reasoning capabilities, existing training methods largely depend on outcome-based feedback, which can reward correct answers reached through flawed reasoning. Prior work introduces supervision on intermediate steps but still offers no guarantee of logical soundness, which is critical in high-stakes scenarios where logical consistency is paramount. To address this, we propose LogicReward, a novel reward system that guides model training by enforcing step-level logical correctness with a theorem prover. We further introduce Autoformalization with Soft Unification, which reduces natural language ambiguity and improves formalization quality, enabling more effective use of the theorem prover. An 8B model trained with a simple procedure on data constructed with LogicReward surpasses GPT-4o and o4-mini by 11.6\% and 2\%, respectively, on natural language inference and logical reasoning tasks. Further analysis shows that LogicReward enhances reasoning faithfulness, improves generalizability to unseen tasks such as math and commonsense reasoning, and provides a reliable reward signal even without ground-truth labels. We will release all data and code at https://llm-symbol.github.io/LogicReward.
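To make the core idea concrete, below is a minimal illustrative sketch of a step-level logical reward, not the paper's implementation: each reasoning step is assumed to have already been autoformalized into premises and a conclusion, and the Z3 theorem prover checks entailment by refutation. The step format, helper names, and the fraction-of-verified-steps scoring are assumptions for illustration only.

```python
# Hedged sketch of a step-level logical reward (hypothetical, not the paper's code).
# Assumes each reasoning step has been autoformalized into (premises, conclusion)
# pairs of Z3 Boolean formulas; entailment is checked by refutation with Z3.
from z3 import Solver, Not, Bools, Implies, BoolRef, unsat


def step_entailed(premises: list[BoolRef], conclusion: BoolRef) -> bool:
    """Return True iff premises |= conclusion (premises plus negated conclusion is unsatisfiable)."""
    solver = Solver()
    solver.add(*premises)
    solver.add(Not(conclusion))
    return solver.check() == unsat


def logic_reward(steps: list[tuple[list[BoolRef], BoolRef]]) -> float:
    """Score a reasoning chain as the fraction of steps the prover verifies."""
    if not steps:
        return 0.0
    verified = sum(step_entailed(premises, conclusion) for premises, conclusion in steps)
    return verified / len(steps)


if __name__ == "__main__":
    # Toy example: a single modus ponens step ("if it rains, the ground is wet; it rains").
    rain, wet = Bools("rain wet")
    steps = [([Implies(rain, wet), rain], wet)]
    print(logic_reward(steps))  # 1.0, since the only step is logically entailed
```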