Sampling from log-concave distributions is a central problem in statistics and machine learning. Prior work establishes theoretical guarantees for Langevin Monte Carlo algorithms based on overdamped and underdamped Langevin dynamics and, more recently, on third-order variants. In this paper, we introduce a new sampling algorithm built on a general $K$th-order Langevin dynamics, extending beyond second- and third-order methods. To discretize the $K$th-order dynamics, we approximate the drift induced by the potential via Lagrange interpolation and refine the node values at the interpolation points using Picard-iteration corrections, yielding a flexible scheme that fully exploits the acceleration of higher-order Langevin dynamics. For targets with smooth, strongly log-concave densities, we prove dimension-dependent convergence in Wasserstein distance: the sampler achieves $\varepsilon$-accuracy within $\widetilde O(d^{\frac{K-1}{2K-3}}\varepsilon^{-\frac{2}{2K-3}})$ gradient evaluations for $K \ge 3$. To the best of our knowledge, this is the first sampling algorithm to achieve this query complexity. The rate improves as the order $K$ increases, yielding better guarantees than existing first- to third-order approaches.
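The core discretization idea — interpolate the drift at a few nodes with a Lagrange polynomial and refine the node values by Picard iteration — can be illustrated on a deterministic toy flow. The sketch below is not the paper's sampler: it applies the two ingredients to the simple ODE $\dot x = -x$ (the noiseless drift of an Ornstein–Uhlenbeck process), and all function names and parameters are illustrative choices.

```python
import numpy as np

def lagrange_eval(nodes, values, t):
    """Evaluate the Lagrange interpolant through (nodes, values) at time t."""
    total = 0.0
    for i, (ti, vi) in enumerate(zip(nodes, values)):
        basis = 1.0
        for j, tj in enumerate(nodes):
            if j != i:
                basis *= (t - tj) / (ti - tj)
        total += vi * basis
    return total

def picard_refine(f, x0, nodes, n_iter=5, n_quad=200):
    """Refine interpolation-node values via Picard iteration:
    x_{k+1}(t) = x0 + \int_0^t f(x_k(s)) ds,
    where x_k is the current Lagrange interpolant and the integral is
    computed by trapezoidal quadrature."""
    values = np.full(len(nodes), x0, dtype=float)  # initial guess: constant path
    for _ in range(n_iter):
        new_values = []
        for t in nodes:
            s = np.linspace(0.0, t, n_quad)
            integrand = np.array([f(lagrange_eval(nodes, values, si)) for si in s])
            # manual trapezoidal rule (degenerates to 0 when t == 0)
            integral = np.sum((integrand[:-1] + integrand[1:]) * np.diff(s)) / 2.0
            new_values.append(x0 + integral)
        values = np.array(new_values)
    return values

# Toy drift f(x) = -x on one step [0, 0.5]; exact solution is exp(-t).
nodes = np.linspace(0.0, 0.5, 4)
vals = picard_refine(lambda x: -x, 1.0, nodes)
err = np.max(np.abs(vals - np.exp(-nodes)))
print(err)  # Picard + interpolation error, small since L*h = 0.5 < 1
```

Because the step size times the Lipschitz constant of the drift is below one here, the Picard map is a contraction and a handful of iterations suffice; the paper's scheme plays the analogous game inside each step of the $K$th-order dynamics.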