Scaling vision-language-action (VLA) model pre-training requires large volumes of diverse, high-quality manipulation trajectories. Most current data is obtained via human teleoperation, which is expensive and difficult to scale. Reinforcement learning (RL) learns useful skills through autonomous exploration, making it a viable approach to data generation. However, standard RL training collapses to a narrow execution pattern, limiting its utility for large-scale pre-training. We propose Discover, Learn and Reinforce (DLR), an information-theoretic pattern-discovery framework that generates multiple distinct, high-success behavioral patterns for VLA pre-training. Empirically, DLR produces a markedly more diverse trajectory corpus on LIBERO: it learns multiple distinct, high-success strategies for the same task where standard RL discovers only one, and therefore covers substantially broader regions of the state-action space. When adapted to unseen downstream task suites, VLA models pre-trained on our diverse RL data surpass counterparts trained on equal-sized standard RL datasets. Moreover, DLR exhibits a positive data-scaling trend that single-pattern RL lacks. These results position multi-pattern RL as a practical, scalable data engine for embodied foundation models.
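The abstract does not spell out DLR's objective, but a standard way to realize information-theoretic pattern discovery is a DIAYN-style mutual-information bonus: a discriminator tries to infer the latent behavioral pattern from visited states, and the policy is rewarded when its pattern is identifiable, pushing different patterns toward different regions of the state space. The sketch below is a minimal illustration under that assumption, not DLR's actual method; all names (`PatternDiscriminator`, `diversity_bonus`, `STATE_DIM`, `NUM_PATTERNS`) are hypothetical.

```python
# Illustrative sketch of a mutual-information diversity bonus (DIAYN-style),
# assumed here as one possible instantiation of "information-theoretic
# pattern discovery"; it is not DLR's published objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM = 32     # dimensionality of the (flattened) observation
NUM_PATTERNS = 4   # number of distinct behavioral patterns to discover


class PatternDiscriminator(nn.Module):
    """Predicts which latent pattern z generated a visited state s."""

    def __init__(self, state_dim: int, num_patterns: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, num_patterns),
        )

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        return self.net(states)  # logits over patterns


def diversity_bonus(disc: PatternDiscriminator,
                    states: torch.Tensor,
                    z: torch.Tensor) -> torch.Tensor:
    """Per-step bonus log q(z|s) - log p(z): high when the active pattern
    is identifiable from the state, encouraging distinct behaviors."""
    log_q = F.log_softmax(disc(states), dim=-1)
    log_prior = -torch.log(torch.tensor(float(NUM_PATTERNS)))  # uniform p(z)
    return log_q.gather(-1, z.unsqueeze(-1)).squeeze(-1) - log_prior


# Usage: during RL, add the bonus to the task reward, and train the
# discriminator to classify z from states sampled under the current policy.
states = torch.randn(8, STATE_DIM)                # batch of states
z = torch.randint(0, NUM_PATTERNS, (8,))          # sampled pattern ids
disc = PatternDiscriminator(STATE_DIM, NUM_PATTERNS)
bonus = diversity_bonus(disc, states, z)          # added to task reward
disc_loss = F.cross_entropy(disc(states), z)      # discriminator update
print(bonus.shape, disc_loss.item())
```

Combining such a bonus with the task reward is what would let each latent z converge to a distinct yet still high-success strategy, rather than all patterns collapsing to one execution mode.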