Reasoning models are large language models that emit a long chain-of-thought before answering, providing both higher accuracy and explicit reasoning for their responses. A major open question has been whether language model reasoning generalizes beyond mathematics, programming, and logic, where most previous work has focused. We demonstrate that reasoning models can be post-trained for chemistry without additional domain pretraining, and that they require substantially less data than contemporary domain-specific models. We report ether0, a 24B-parameter LLM (based on Mistral-Small-24B) that can reason in natural language and respond with chemical structures. This reasoning model was trained with reinforcement learning on 640,730 experimentally grounded chemistry problems spanning 375 tasks, ranging from synthesizability and blood-brain barrier permeability to human receptor activity and scent. Our model outperforms general-purpose chemistry models, frontier models, and human experts on molecular design tasks, and it is more data-efficient than specialized models. We anticipate that this method can be applied to train data-efficient language models specialized for tasks across a wide variety of scientific domains.