The Arcade Learning Environment (ALE) was proposed as an evaluation platform for empirically assessing the generality of agents across dozens of Atari 2600 games. ALE offers a range of challenging problems and has drawn significant attention from the deep reinforcement learning (RL) community. From Deep Q-Networks (DQN) to Agent57, RL agents appear to achieve superhuman performance in ALE. However, is this really the case? In this paper, we first review the evaluation metrics currently used in the Atari benchmarks and show that the prevailing criteria for claiming superhuman performance are inappropriate, because they underestimate human performance relative to what is actually possible. To address these problems and promote the development of RL research, we propose a novel Atari benchmark based on human world records (HWR), which imposes higher requirements on RL agents in terms of both final performance and learning efficiency. Furthermore, we summarize the state-of-the-art (SOTA) methods on the Atari benchmarks and report benchmark results under the new HWR-based evaluation metrics. From these results, we conclude that at least four open challenges still prevent RL agents from achieving truly superhuman performance. Finally, we discuss some promising directions for addressing these challenges.
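As an illustration of such a record-normalized metric (a sketch given by analogy with the standard human-normalized score; the exact definition used in the paper body may differ), the human-world-record normalized score of an agent on a given game can be written as

HWRNS = (score_agent - score_random) / (score_HWR - score_random),

where score_random denotes the score of a random policy and score_HWR the human world record on that game. Under this normalization, a value of 1 corresponds to matching the human world record rather than the average human tester used in the conventional human-normalized score, which is what raises the bar for claims of superhuman performance.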