Offline methods for reinforcement learning have the potential to help bridge the gap between reinforcement learning research and real-world applications. They make it possible to learn policies from offline datasets, thus overcoming concerns associated with online data collection in the real world, such as cost, safety, and ethics. In this paper, we propose a benchmark called RL Unplugged to evaluate and compare offline RL methods. RL Unplugged includes data from a diverse range of domains, including games ({\em e.g.,} the Atari benchmark) and simulated motor control problems ({\em e.g.,} the DM Control Suite). The datasets cover domains that are partially or fully observable, use continuous or discrete actions, and have stochastic or deterministic dynamics. We propose detailed evaluation protocols for each domain in RL Unplugged and provide an extensive analysis of supervised learning and offline RL methods using these protocols. We will release data for all our tasks and open-source all algorithms presented in this paper. We hope that our suite of benchmarks will increase the reproducibility of experiments and make it possible to study challenging tasks with a limited computational budget, thus making RL research both more systematic and more accessible across the community. Moving forward, we view RL Unplugged as a living benchmark suite that will evolve and grow with datasets contributed by the research community and ourselves. Our project page is available at https://git.io/JJUhd.