We address the problem of model selection for finite-horizon episodic Reinforcement Learning (RL), where the transition kernel $P^*$ belongs to a family of models $\mathcal{P}^*$ with finite metric entropy. In the model selection framework, instead of $\mathcal{P}^*$, we are given $M$ nested families of transition kernels $\mathcal{P}_1 \subset \mathcal{P}_2 \subset \ldots \subset \mathcal{P}_M$. We propose and analyze a novel algorithm, namely \emph{Adaptive Reinforcement Learning (General)} (\texttt{ARL-GEN}), that adapts to the smallest such family containing the true transition kernel $P^*$. \texttt{ARL-GEN} uses the Upper Confidence Reinforcement Learning (\texttt{UCRL}) algorithm with value-targeted regression as a blackbox and adds a model selection module at the beginning of each epoch. Under a mild separability assumption on the model classes, we show that \texttt{ARL-GEN} obtains a regret of $\tilde{\mathcal{O}}(d_{\mathcal{E}}^*H^2+\sqrt{d_{\mathcal{E}}^* \mathbb{M}^* H^2 T})$, with high probability, where $H$ is the horizon length, $T$ is the total number of steps, $d_{\mathcal{E}}^*$ is the Eluder dimension, and $\mathbb{M}^*$ is the metric entropy corresponding to $\mathcal{P}^*$. Note that this regret scaling matches that of an oracle that knows $\mathcal{P}^*$ in advance. We show that the cost of model selection for \texttt{ARL-GEN} is an additive term in the regret with only a weak dependence on $T$. Subsequently, we remove the separability assumption and consider the setup of linear mixture MDPs, where the transition kernel $P^*$ admits a linear function approximation. With this low-rank structure, we propose novel adaptive algorithms for model selection and obtain (order-wise) regret identical to that of an oracle with knowledge of the true model class.
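To make the epoch-wise structure described above concrete, here is a minimal Python sketch of a model-selection wrapper around a blackbox optimistic learner. It is not the authors' implementation: the names \texttt{UCRLVTR}, \texttt{ModelClass}, and \texttt{fits\_within\_threshold}, the statistical test, and the doubling epoch schedule are all illustrative placeholders assumed for the sketch.

```python
# Hypothetical sketch of an epoch-wise model-selection wrapper (not the paper's code).
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class ModelClass:
    """One of the nested families P_1 subset P_2 subset ... subset P_M."""
    index: int
    eluder_dim: float      # d_E for this family (assumed known or upper-bounded)
    metric_entropy: float  # log-covering number for this family


class UCRLVTR:
    """Stand-in for the blackbox UCRL learner with value-targeted regression."""

    def __init__(self, model_class: ModelClass, horizon: int):
        self.model_class = model_class
        self.horizon = horizon

    def run_epoch(self, num_episodes: int) -> List[Tuple]:
        # A real learner would roll out optimistic policies within the chosen
        # family and collect (state, action, value-target) regression samples.
        return [("s", "a", 0.0) for _ in range(num_episodes * self.horizon)]


def fits_within_threshold(model_class: ModelClass, data: List[Tuple]) -> bool:
    # Placeholder goodness-of-fit test: under a separability assumption, a
    # family that excludes P* eventually incurs a detectable excess
    # regression loss, so a threshold test can reject it.
    return True  # toy sketch: always accept the current family


def model_selection_loop(classes: List[ModelClass], horizon: int, total_epochs: int) -> ModelClass:
    """Start with the smallest family; move up only when the current family
    fails the test at the start of an epoch."""
    current = 0
    data: List[Tuple] = []
    for epoch in range(total_epochs):
        # Model-selection module at the beginning of the epoch.
        while current < len(classes) - 1 and not fits_within_threshold(classes[current], data):
            current += 1
        learner = UCRLVTR(classes[current], horizon)
        # Growing (e.g. doubling) epoch lengths keep the selection cost an
        # additive term with weak dependence on the total number of steps T.
        data.extend(learner.run_epoch(num_episodes=2 ** epoch))
    return classes[current]


if __name__ == "__main__":
    nested = [ModelClass(i, eluder_dim=10.0 * (i + 1), metric_entropy=5.0 * (i + 1)) for i in range(3)]
    chosen = model_selection_loop(nested, horizon=10, total_epochs=4)
    print(f"selected family index: {chosen.index}")
```

The design point the sketch illustrates is that the wrapper only interacts with the base learner through epochs and collected data, so any optimistic learner for a fixed family can be plugged in as the blackbox.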


