As LLMs shift toward autonomous agents, deep research has emerged as a pivotal measure of agentic capability. However, existing academic benchmarks such as BrowseComp often fail to meet real-world demands for open-ended research, which requires robust intent recognition, long-horizon decision-making, and cross-source verification. To address this, we introduce Step-DeepResearch, a cost-effective, end-to-end agent. We propose a data synthesis strategy based on atomic capabilities to reinforce planning and report writing, combined with a progressive training path from agentic mid-training through SFT to RL. Enhanced by a checklist-style judger, this approach significantly improves robustness. Furthermore, to bridge the evaluation gap in the Chinese domain, we establish ADR-Bench for realistic deep research scenarios. Experimental results show that Step-DeepResearch (32B) scores 61.4% on the Scale AI Research Rubrics. On ADR-Bench, it significantly outperforms comparable models and rivals SOTA closed-source systems such as OpenAI and Gemini DeepResearch. These findings demonstrate that refined training enables medium-sized models to achieve expert-level capability at industry-leading cost-efficiency.