Optimal decision-making under partial observability requires agents to balance reducing uncertainty (exploration) against pursuing immediate objectives (exploitation). In this paper, we introduce a novel policy optimization framework for continuous partially observable Markov decision processes (POMDPs) that explicitly addresses this challenge. Our method casts policy learning as probabilistic inference in a non-Markovian Feynman--Kac model that inherently captures the value of information gathering by anticipating future observations, without requiring suboptimal approximations or handcrafted heuristics. To optimize policies under this model, we develop a nested sequential Monte Carlo (SMC) algorithm that efficiently estimates a history-dependent policy gradient using samples from the optimal trajectory distribution induced by the POMDP. We demonstrate the effectiveness of our algorithm across standard continuous POMDP benchmarks, where existing methods struggle to act under uncertainty.
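To convey the flavor of this class of methods, the sketch below is a minimal, illustrative particle-based policy-gradient estimator, not the nested SMC algorithm introduced in this paper. It propagates trajectory particles through a toy one-dimensional POMDP, reweights them with exponentiated rewards acting as Feynman--Kac potentials, resamples when the effective sample size collapses, and forms a self-normalized score-function estimate of a history-dependent policy gradient. The toy dynamics, the linear-Gaussian policy on a running observation mean, and the temperature eta are all illustrative assumptions.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, T, eta = 256, 20, 1.0            # particles, horizon, potential temperature
theta = np.array([0.5, 0.0])        # policy parameters: [gain on history feature, bias]

def policy_mean(theta, h):
    # history-dependent Gaussian policy mean (linear in the history feature h)
    return theta[0] * h + theta[1]

def grad_log_policy(theta, h, a):
    # score of N(a; theta[0]*h + theta[1], 1) with respect to theta, shape (N, 2)
    d = a - policy_mean(theta, h)
    return np.stack([d * h, d], axis=1)

x = rng.normal(0.0, 1.0, N)         # latent states
h = np.zeros(N)                     # history feature: running mean of observations
logw = np.zeros(N)                  # log Feynman--Kac weights
score = np.zeros((N, 2))            # accumulated score functions per trajectory

for t in range(T):
    y = x + 0.3 * rng.normal(size=N)            # noisy observation of the latent state
    h = (t * h + y) / (t + 1)                   # update the history summary
    a = policy_mean(theta, h) + rng.normal(size=N)
    score += grad_log_policy(theta, h, a)
    r = -x**2 - 0.01 * a**2                     # reward: regulate the latent state to 0
    logw += r / eta                             # potential G_t = exp(r_t / eta)
    x = 0.9 * x + a + 0.1 * rng.normal(size=N)  # latent dynamics

    w = np.exp(logw - logw.max()); w /= w.sum()
    if 1.0 / np.sum(w**2) < N / 2:              # resample when the ESS collapses
        idx = rng.choice(N, size=N, p=w)
        x, h, score, logw = x[idx], h[idx], score[idx], np.zeros(N)

w = np.exp(logw - logw.max()); w /= w.sum()
grad = (w[:, None] * score).sum(axis=0)         # self-normalized policy-gradient estimate
print("history-dependent policy-gradient estimate:", grad)
\end{verbatim}

The key design choice this sketch illustrates is that the gradient is estimated under the reward-reweighted (Feynman--Kac) trajectory distribution rather than the raw rollout distribution, and that the policy conditions on a summary of the observation history rather than on the unobserved state.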