Solving high-dimensional Fokker-Planck (FP) equations is a long-standing challenge in computational physics and stochastic dynamics, owing to the curse of dimensionality (CoD) and the cost of evaluating second-order diffusion terms. Existing deep learning approaches, such as Physics-Informed Neural Networks, scale poorly with dimension, driven by the $O(d^2)$ complexity of automatic differentiation for second-order derivatives. While recent probability flow approaches bypass this by learning score functions or matching velocity fields, they often rely on serial time-stepping or on efficient sampling from complex distributions. To address these issues, we propose the Adaptive Probability Flow Residual Minimization (A-PFRM) method. We reformulate the second-order FP equation as an equivalent first-order deterministic Probability Flow ODE (PF-ODE) constraint, which avoids explicit Hessian computation. Unlike score matching or velocity matching, A-PFRM solves this problem by minimizing the residual of the continuity equation induced by the PF-ODE. We leverage Continuous Normalizing Flows combined with the Hutchinson Trace Estimator to reduce the training complexity to $O(d)$, with effectively $O(1)$ wall-clock time on GPUs through parallel evaluation. To address data sparsity in high dimensions, we apply a generative adaptive sampling strategy and theoretically prove that dynamically aligning collocation points with the evolving probability mass is a necessary condition for bounding the approximation error. Experiments on diverse benchmarks -- ranging from anisotropic Ornstein-Uhlenbeck (OU) processes and high-dimensional Brownian motions with time-varying diffusion terms, to Geometric OU processes featuring non-Gaussian solutions -- demonstrate that A-PFRM effectively mitigates the CoD, maintaining high accuracy and constant temporal cost for problems up to 100 dimensions.
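The Hutchinson Trace Estimator mentioned above replaces an exact trace (e.g., the divergence of a velocity field, which would require $d$ derivative evaluations) with a randomized estimate using only matrix-vector products. A minimal NumPy sketch of the estimator itself, with illustrative function names not taken from the paper (in A-PFRM the matrix-vector product would be a vector-Jacobian product supplied by automatic differentiation):

```python
import numpy as np

def hutchinson_trace(matvec, d, n_samples=1000, rng=None):
    """Estimate tr(A) from matrix-vector products A @ z only.

    For z with i.i.d. Rademacher entries, E[z^T A z] = tr(A),
    so averaging z^T (A z) over samples gives an unbiased estimate
    at O(d) cost per sample instead of O(d^2).
    """
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=d)  # Rademacher probe vector
        total += z @ matvec(z)
    return total / n_samples

# Toy check on a matrix with known trace (hypothetical example).
# For a diagonal A the Rademacher estimate is exact, since z_i^2 = 1.
d = 50
A = np.diag(np.arange(1.0, d + 1))          # tr(A) = 50 * 51 / 2 = 1275
approx = hutchinson_trace(lambda z: A @ z, d, n_samples=200, rng=0)
```

For non-diagonal matrices the estimate carries Monte Carlo variance, which is why in practice one trades the number of probe vectors against accuracy; the paper's reported $O(d)$ training complexity comes from needing only such products rather than the full Jacobian.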