We consider model-free reinforcement learning for infinite-horizon discounted Markov Decision Processes (MDPs) with a continuous state space and unknown transition kernel, when only a single sample path of the system, generated under an arbitrary policy, is available. We study the Nearest Neighbor Q-Learning (NNQL) algorithm, which learns the optimal Q-function using nearest neighbor regression. As the main contribution, we provide a tight finite-sample analysis of the convergence rate. In particular, for MDPs with a $d$-dimensional state space and discount factor $\gamma \in (0,1)$, given an arbitrary sample path with "covering time" $L$, we establish that the algorithm is guaranteed to output an $\varepsilon$-accurate estimate of the optimal Q-function using $\tilde{O}\big(L/(\varepsilon^3(1-\gamma)^7)\big)$ samples. For instance, for a well-behaved MDP, the covering time of the sample path under the purely random policy scales as $\tilde{O}\big(1/\varepsilon^d\big)$, so the sample complexity scales as $\tilde{O}\big(1/\varepsilon^{d+3}\big)$. We also establish a lower bound showing that a dependence of $\tilde{\Omega}\big(1/\varepsilon^{d+2}\big)$ is necessary.
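To make the algorithmic idea concrete, the following is a minimal sketch of nearest-neighbor Q-learning over a fixed set of state-space centers, processed along a single sample path. The function names (`nnql`, `nearest_center`), the choice of a single nearest neighbor as the regression rule, and the $1/n$ step-size schedule are illustrative assumptions, not the exact averaging scheme or step sizes analyzed in the paper.

```python
import numpy as np

def nearest_center(state, centers):
    """Index of the center closest to `state` (1-nearest-neighbor regression weight)."""
    return int(np.argmin(np.linalg.norm(centers - state, axis=1)))

def nnql(sample_path, centers, num_actions, gamma=0.9):
    """Sketch of NNQL: update Q-estimates at discretization centers
    from a single path of (state, action, reward, next_state) tuples."""
    Q = np.zeros((len(centers), num_actions))        # Q-estimates at the centers
    visits = np.zeros((len(centers), num_actions))   # visit counts for step sizes

    for state, action, reward, next_state in sample_path:
        i = nearest_center(state, centers)           # map current state to its center
        j = nearest_center(next_state, centers)      # NN estimate of Q at the next state
        visits[i, action] += 1
        alpha = 1.0 / visits[i, action]              # simple decaying step size (assumed)
        target = reward + gamma * Q[j].max()         # one-step Bellman target
        Q[i, action] += alpha * (target - Q[i, action])
    return Q
```

In this simplified form, the granularity of `centers` plays the role of the discretization scale: a finer net reduces the bias of the nearest-neighbor approximation but increases the covering time of the sample path, which is the trade-off behind the $\tilde{O}\big(1/\varepsilon^{d+3}\big)$ sample complexity above.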