We present Imitation-Projected Policy Gradient (IPPG), an algorithmic framework for learning policies that are parsimoniously represented in a structured programming language. Such programmatic policies can be more interpretable, generalizable, and amenable to formal verification than neural policies; however, designing rigorous learning approaches for programmatic policies remains a challenge. IPPG, our response to this challenge, is based on three insights. First, we view our learning task as optimization in policy space, modulo the constraint that the desired policy has a programmatic representation, and solve this optimization problem using a "lift-and-project" perspective that takes a gradient step into the unconstrained policy space and then projects back onto the constrained space. Second, we view the unconstrained policy space as mixing neural and programmatic representations, which enables employing state-of-the-art deep policy gradient approaches. Third, we cast the projection step as program synthesis via imitation learning, and exploit contemporary combinatorial methods for this task. We present theoretical convergence results for IPPG, as well as an empirical evaluation in three continuous control domains. The experiments show that IPPG can significantly outperform state-of-the-art approaches for learning programmatic policies.
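The abstract's lift-and-project loop can be summarized schematically. Below is a minimal sketch, not the paper's implementation: the `lift` and `project` callables are hypothetical placeholders that a user would supply, where (per the abstract) `lift` would run a deep policy-gradient update in the mixed neural-programmatic policy space and `project` would synthesize a program that imitates the lifted policy.

```python
from typing import Any, Callable

# A policy maps a state to an action; both are left abstract here.
Policy = Callable[[Any], Any]

def ippg(
    initial_program: Policy,
    lift: Callable[[Policy], Policy],
    project: Callable[[Policy], Policy],
    num_iterations: int = 10,
) -> Policy:
    """Schematic IPPG loop (hypothetical interface, not the authors' code).

    lift:    takes the current programmatic policy and returns an improved
             policy in the unconstrained (neural + programmatic) space,
             e.g. via a deep policy-gradient step.
    project: maps the lifted policy back onto the programmatic class,
             e.g. program synthesis via imitation learning on state-action
             pairs sampled from the lifted policy.
    """
    program = initial_program
    for _ in range(num_iterations):
        mixed = lift(program)      # gradient step in unconstrained policy space
        program = project(mixed)   # project back onto programmatic policies
    return program
```

The structure mirrors projected gradient descent: the constrained set (programmatic policies) is never optimized over directly; instead, each iteration alternates an unconstrained improvement step with a projection, here realized as imitation learning.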