Reinforcement learning (RL) has shown promise in generating robust locomotion policies for bipedal robots, but it often suffers from tedious reward design and sensitivity to poorly shaped objectives. In this work, we propose a structured reward-shaping framework that leverages model-based trajectory generation and control Lyapunov functions (CLFs) to guide policy learning. We explore two model-based planners for generating reference trajectories: a reduced-order linear inverted pendulum (LIP) model for velocity-conditioned motion planning, and a precomputed gait library based on hybrid zero dynamics (HZD) using the full-order dynamics. These planners define desired end-effector and joint trajectories, which are used to construct CLF-based rewards that penalize tracking error and encourage rapid convergence. This formulation provides meaningful intermediate rewards and is straightforward to implement once a reference trajectory is available. Both the reference trajectories and the CLF shaping are used only during training, resulting in a lightweight policy at deployment. We validate our method both in simulation and through extensive real-world experiments on a Unitree G1 robot. CLF-RL demonstrates significantly improved robustness relative to the baseline RL policy and better performance than a classic tracking-reward RL formulation.
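To make the reward construction concrete, below is a minimal sketch of how a CLF-shaped tracking reward might be computed at each training step, assuming a quadratic Lyapunov candidate over the output tracking error and a finite-difference estimate of its decrease rate. The function name, the matrix P, the decay rate, and all weights are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def clf_tracking_reward(y, y_ref, y_prev, y_ref_prev, dt, P, gamma=2.0, w=1.0):
    """Sketch of a CLF-shaped reward: penalize the Lyapunov value V(e) = e^T P e
    and any violation of the exponential decrease condition Vdot + gamma * V <= 0.
    All gains here are placeholders."""
    e = y - y_ref                            # current tracking error vs. planner reference
    e_prev = y_prev - y_ref_prev             # previous-step tracking error
    V = e @ P @ e                            # quadratic CLF candidate
    V_prev = e_prev @ P @ e_prev
    V_dot = (V - V_prev) / dt                # finite-difference estimate of Vdot
    violation = max(0.0, V_dot + gamma * V)  # penalty when the CLF decrease condition is violated
    return -w * (V + violation)              # higher reward when the error is small and shrinking fast
```

Because the reference trajectory and this reward term are consumed only inside the training loop, nothing in this sketch needs to run on the robot at deployment time.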