Consistency Models (CMs) have shown promise for efficient one-step generation. However, most existing CMs rely on manually designed discretization schemes, which often require repeated manual tuning for different noise schedules and datasets. To address this, we propose a unified framework for the automatic and adaptive discretization of CMs, formulated as an optimization problem with respect to the discretization step. Concretely, during consistency training, we take local consistency as the optimization objective, ensuring trainability by avoiding excessive discretization, and impose global consistency as a constraint, ensuring stability by controlling the denoising error in the training target. We balance the trade-off between local and global consistency with a Lagrange multiplier. Building on this framework, we achieve adaptive discretization for CMs using the Gauss-Newton method, and refer to our approach as ADCMs. Experiments demonstrate that ADCMs significantly improve the training efficiency of CMs, achieving superior generative performance with minimal training overhead on both CIFAR-10 and ImageNet. Moreover, ADCMs adapt readily to more advanced diffusion model (DM) variants. Code is available at https://github.com/rainstonee/ADCM.
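As a rough sketch of the constrained formulation described above, the trade-off can be written as a Lagrangian over the discretization step (the loss symbols $\mathcal{L}_{\mathrm{local}}$, $\mathcal{L}_{\mathrm{global}}$ and the tolerance $\epsilon$ are illustrative placeholders here, not the paper's exact definitions):
\[
\min_{\Delta}\; \mathcal{L}_{\mathrm{local}}(\Delta)
\quad \text{s.t.} \quad
\mathcal{L}_{\mathrm{global}}(\Delta) \le \epsilon
\;\;\Longrightarrow\;\;
\mathcal{L}(\Delta, \lambda) = \mathcal{L}_{\mathrm{local}}(\Delta) + \lambda\bigl(\mathcal{L}_{\mathrm{global}}(\Delta) - \epsilon\bigr),
\]
where $\Delta$ denotes the discretization step and $\lambda$ the Lagrange multiplier; under this reading, the Gauss-Newton method would be applied to minimize $\mathcal{L}(\Delta, \lambda)$ with respect to $\Delta$.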