Optimization of deep neural networks (DNNs) has been a driving force in the advancement of modern machine learning and artificial intelligence. Because the forward pass of a DNN is a long sequence of nonlinear propagation steps, determining its optimal parameters for a given objective naturally fits within the framework of optimal control. This interpretation of DNNs as dynamical systems has provided a theoretical foundation for principled analyses ranging from numerical analysis to physics. In parallel with these theoretical pursuits, this paper focuses on an algorithmic perspective. Our motivating observation is the striking algorithmic resemblance between the Backpropagation algorithm for computing gradients in DNNs and the optimality conditions for dynamical systems, which are expressed through another backward process known as dynamic programming. Consolidating this connection reveals that Backpropagation admits a variational structure: it solves an approximate dynamic program up to a first-order expansion. This insight leads to a new class of optimization methods that explore higher-order expansions of the Bellman equation. The resulting optimizer, termed Optimal Control Theoretic Neural Optimizer (OCNOpt), enables rich algorithmic opportunities, including layer-wise feedback policies, game-theoretic applications, and higher-order training of continuous-time models such as Neural ODEs. Extensive experiments demonstrate that OCNOpt improves upon existing methods in robustness and efficiency while maintaining manageable computational complexity, paving new avenues for principled algorithmic design grounded in dynamical systems and optimal control theory.
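To make the Backpropagation-to-dynamic-programming correspondence alluded to above concrete, the following is a minimal sketch of the layer-wise Bellman recursion, written in standard optimal-control notation chosen here for illustration (the states x_t, parameters \theta_t, layer maps f_t, stage losses \ell_t, and value functions V_t are assumptions of this sketch, not notation fixed by the abstract):
\[
x_{t+1} = f_t(x_t, \theta_t), \qquad
V_t(x_t) \;=\; \min_{\theta_t}\;\Big[\, \ell_t(x_t, \theta_t) + V_{t+1}\big(f_t(x_t, \theta_t)\big) \,\Big], \qquad
V_T(x_T) = \Phi(x_T).
\]
Expanding the value function only to first order along the current trajectory yields the adjoint recursion
\[
\nabla_{x} V_t = \nabla_{x}\ell_t + \Big(\tfrac{\partial f_t}{\partial x_t}\Big)^{\!\top} \nabla_{x} V_{t+1}, \qquad
\nabla_{\theta_t} \big(\ell_t + V_{t+1}\!\circ\! f_t\big) = \nabla_{\theta_t}\ell_t + \Big(\tfrac{\partial f_t}{\partial \theta_t}\Big)^{\!\top} \nabla_{x} V_{t+1},
\]
which is exactly the chain rule computed by Backpropagation. Retaining second-order terms of the same expansion, in the spirit of differential dynamic programming, would instead produce layer-wise feedback policies of the form \(\delta\theta_t = k_t + K_t\,\delta x_t\), which is the kind of higher-order structure the abstract refers to.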