This article provides a rigorous analysis of the convergence and stability of Episodic Upside-Down Reinforcement Learning, Goal-Conditioned Supervised Learning, and Online Decision Transformers. These algorithms have performed competitively across a variety of benchmarks, from games to robotic tasks, but their theoretical understanding is limited to specific environmental conditions. This work initiates a theoretical foundation for algorithms built on the broad paradigm of approaching reinforcement learning through supervised learning or sequence modeling. At the core of this investigation lies an analysis of the conditions on the underlying environment under which the algorithms can identify optimal solutions. We also assess whether the resulting solutions remain stable when the environment is subject to small amounts of noise. Specifically, we study the continuity and asymptotic convergence of command-conditioned policies, values, and the goal-reaching objective as functions of the transition kernel of the underlying Markov decision process. We demonstrate that near-optimal behavior is achieved whenever the transition kernel lies in a sufficiently small neighborhood of a deterministic kernel, and that these quantities are continuous (with respect to a specific topology) at deterministic kernels, both asymptotically and after a finite number of learning cycles. The developed methods allow us to present the first explicit estimates of the convergence and stability of policies and values in terms of the underlying transition kernels. On the theoretical side, we introduce a number of concepts new to reinforcement learning, such as working in segment spaces, studying continuity in quotient topologies, and applying the fixed-point theory of dynamical systems. The theoretical study is accompanied by a detailed investigation of example environments and numerical experiments.
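As a rough schematic of the stability statement (the notation below is illustrative and not taken from the paper), write $\bar{\lambda}$ for a deterministic transition kernel, $\lambda$ for a nearby kernel, $\pi_{\lambda}$ for the command-conditioned policy learned under $\lambda$, $J_{\lambda}(\pi)$ for the goal-reaching objective of a policy $\pi$ in the environment with kernel $\lambda$, and $J_{\lambda}^{*}$ for its optimal value, with $d$ standing in for the paper's notion of closeness of kernels. The continuity result then takes the form
\[
  \forall\, \varepsilon > 0\ \ \exists\, \delta > 0:\qquad
  d(\lambda, \bar{\lambda}) < \delta
  \ \Longrightarrow\
  J_{\lambda}^{*} - J_{\lambda}\!\left(\pi_{\lambda}\right) < \varepsilon ,
\]
i.e., policies learned in an environment perturbed only slightly away from a deterministic one remain near-optimal; the explicit estimates mentioned above make the admissible perturbation $\delta$ quantitative in terms of the underlying kernels.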