Self-improving systems require environmental interaction for continuous adaptation. We introduce SPICE (Self-Play In Corpus Environments), a reinforcement learning framework in which a single model acts in two roles: a Challenger that mines documents from a large corpus to generate diverse reasoning tasks, and a Reasoner that solves them. Through adversarial dynamics, the Challenger creates an automatic curriculum at the frontier of the Reasoner's capability, while corpus grounding provides the rich, near-inexhaustible external signal necessary for sustained improvement. Unlike existing ungrounded self-play methods, which offer more limited benefits, SPICE achieves consistent gains across mathematical (+8.9%) and general reasoning (+9.8%) benchmarks on multiple model families. Our analysis reveals that document grounding is a key ingredient enabling SPICE to continuously generate increasingly challenging goals of its own and then achieve them, sustaining self-improvement.
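To make the Challenger/Reasoner dynamic concrete, the following is a minimal, purely illustrative Python sketch of one self-play step. The class `ToyModel`, the function `spice_step`, and the frontier-shaped reward are expository assumptions, not the authors' implementation; the sketch only shows one policy playing both roles, with the Challenger's task grounded in a sampled corpus document and rewarded for landing near the Reasoner's capability frontier.

```python
"""Hypothetical sketch of one SPICE self-play step (not the authors' code)."""

import random


class ToyModel:
    """Stand-in for a single LLM policy that plays both roles."""

    def generate_task(self, document: str) -> tuple[str, str]:
        # Challenger role: turn a corpus document into a question plus a
        # reference answer derived from that document.
        word = random.choice(document.split())
        return f"How many letters are in '{word}'?", str(len(word))

    def solve(self, task: str) -> str:
        # Reasoner role: attempt the task (here a noisy guess, for illustration).
        return str(random.randint(1, 12))


def spice_step(model: ToyModel, corpus: list[str], n_attempts: int = 8) -> tuple[float, float]:
    """One self-play step: the Challenger poses a grounded task, the Reasoner answers."""
    document = random.choice(corpus)                    # corpus grounding
    task, reference = model.generate_task(document)     # Challenger role

    attempts = [model.solve(task) for _ in range(n_attempts)]  # Reasoner role
    pass_rate = sum(a == reference for a in attempts) / n_attempts

    reasoner_reward = pass_rate
    # Assumed frontier-shaped reward: the Challenger scores highest for tasks
    # that are neither trivial (pass_rate near 1) nor impossible (near 0).
    challenger_reward = 1.0 - abs(pass_rate - 0.5) * 2.0
    return reasoner_reward, challenger_reward


if __name__ == "__main__":
    corpus = [
        "reinforcement learning from corpus documents",
        "self play generates an automatic curriculum",
    ]
    print(spice_step(ToyModel(), corpus))
```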