Consider the following instance of the Offline Meta Reinforcement Learning (OMRL) problem: given the complete training logs of $N$ conventional RL agents, trained on $N$ different tasks, design a meta-agent that can quickly maximize reward in a new, unseen task from the same task distribution. In particular, while each conventional RL agent explored and exploited its own different task, the meta-agent must identify regularities in the data that lead to effective exploration/exploitation in the unseen task. Here, we take a Bayesian RL (BRL) view, and seek to learn a Bayes-optimal policy from the offline data. Building on the recent VariBAD BRL approach, we develop an off-policy BRL method that learns to plan an exploration strategy based on an adaptive neural belief estimate. However, learning to infer such a belief from offline data brings a new identifiability issue we term MDP ambiguity. We characterize the problem, and suggest resolutions via data collection and modification procedures. Finally, we evaluate our framework on a diverse set of domains, including difficult sparse reward tasks, and demonstrate learning of effective exploration behavior that is qualitatively different from the exploration used by any RL agent in the data.
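For context, the Bayes-optimal objective referred to above can be stated in standard Bayesian RL (BAMDP) form; the notation below is a generic sketch and not an equation quoted from the paper. The meta-agent conditions on the belief $b_t = p(M \mid \tau_{:t})$, the posterior over tasks given the history $\tau_{:t} = (s_0, a_0, r_0, \dots, s_t)$, and maximizes expected return under the task prior:
$$
J(\pi) \;=\; \mathbb{E}_{M \sim p(M)}\,\mathbb{E}_{\tau \sim \pi, M}\!\left[\sum_{t=0}^{H-1} r_t\right],
\qquad \pi = \pi(a_t \mid s_t, b_t).
$$
A policy maximizing $J(\pi)$ over such history-dependent policies is Bayes-optimal: it implicitly trades off exploration (actions that reduce uncertainty in $b_t$) against exploitation (actions that earn reward under the current belief), which is the behavior the meta-agent is meant to recover from the offline logs.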