In contrast to their conventional role as estimators of probability density functions in reinforcement learning (RL), this paper assigns Gaussian mixture models (GMMs) a novel function-approximation role as direct surrogates for Q-function losses. These parametric models, termed GMM-QFs, possess substantial representational capacity: they are shown to be universal approximators over a broad class of functions. They are further embedded within Bellman residuals, where their learnable parameters (a fixed number of mixing weights, together with Gaussian mean vectors and covariance matrices) are inferred from data via optimization on a Riemannian manifold. This geometric perspective on the parameter space naturally incorporates Riemannian optimization into the policy-evaluation step of standard policy-iteration frameworks. Rigorous theoretical results are established, and supporting numerical tests show that, even without access to experience data, GMM-QFs deliver competitive performance, in some cases outperforming state-of-the-art approaches on a range of benchmark RL tasks, while maintaining a significantly smaller computational footprint than deep-learning methods that rely on experience data.
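For concreteness, the construction described above admits the following minimal sketch; the symbols $K$, $w_k$, $\boldsymbol{\mu}_k$, $\boldsymbol{\Sigma}_k$, $\mathcal{M}$, $T^{\pi}$, and the evaluation set $\mathcal{D}$ are illustrative notation introduced here, not necessarily the paper's exact formulation.
% Illustrative sketch: a GMM-QF with a fixed number K of components,
% evaluated at a state-action pair z := (s,a), and a Bellman-residual
% objective minimized over the Riemannian product manifold of its parameters.
\begin{align}
  Q_{\theta}(s,a) &= \sum_{k=1}^{K} w_k\,
    \mathcal{N}\bigl(z \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k\bigr),
    \qquad z := (s,a) \in \mathbb{R}^{d},\\
  \theta &= \bigl(w_{1:K}, \boldsymbol{\mu}_{1:K}, \boldsymbol{\Sigma}_{1:K}\bigr)
    \in \mathcal{M} := \mathbb{R}^{K} \times (\mathbb{R}^{d})^{K}
    \times (\mathbb{S}_{++}^{d})^{K},\\
  \min_{\theta \in \mathcal{M}}\;
  \mathcal{L}(\theta) &= \sum_{(s,a) \in \mathcal{D}}
    \Bigl[\, Q_{\theta}(s,a) - \bigl(T^{\pi} Q_{\theta}\bigr)(s,a) \Bigr]^{2},
\end{align}
where $\mathbb{S}_{++}^{d}$ denotes the set of $d \times d$ symmetric positive-definite matrices, $T^{\pi}$ the Bellman operator of the evaluated policy $\pi$, and $\mathcal{D}$ a set of state-action evaluation points; the policy-evaluation step then applies a Riemannian (gradient-based) solver on $\mathcal{M}$ inside standard policy iteration.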