Diffusion probabilistic models (DPMs) represent a class of powerful generative models. Despite their success, inference in DPMs is expensive, since it generally needs to iterate over thousands of timesteps. A key problem in inference is estimating the variance at each timestep of the reverse process. In this work, we present a surprising result: both the optimal reverse variance and the corresponding optimal KL divergence of a DPM have analytic forms w.r.t. its score function. Building upon it, we propose Analytic-DPM, a training-free inference framework that estimates these analytic forms using the Monte Carlo method and a pretrained score-based model. Further, to correct the potential bias introduced by the score-based model, we derive both lower and upper bounds on the optimal variance and clip the estimate for a better result. Empirically, our Analytic-DPM improves the log-likelihood of various DPMs, produces high-quality samples, and meanwhile enjoys a 20x to 80x speedup.
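The two computational ingredients mentioned above, a Monte Carlo estimate of a score-function moment and clipping the resulting variance estimate to its derived bounds, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `score_fn`, `sampler`, and the bound values are hypothetical placeholders, and how the estimated moment enters the analytic variance formula is omitted.

```python
import numpy as np

def mc_score_moment(score_fn, sampler, n_samples, d):
    """Monte Carlo estimate of (1/d) * E ||s(x)||^2, the per-dimension
    second moment of the score, over samples x drawn from `sampler`.
    `score_fn` stands in for a pretrained score-based model (assumption)."""
    total = 0.0
    for _ in range(n_samples):
        x = sampler()                      # draw x_n from the forward process
        s = score_fn(x)                    # evaluate the (pretrained) score
        total += np.sum(s ** 2) / d        # per-dimension squared norm
    return total / n_samples

def clipped_variance(raw_var, lower, upper):
    """Clip a (possibly biased) variance estimate to its derived
    lower/upper bounds, as the abstract describes."""
    return np.clip(raw_var, lower, upper)
```

As a sanity check, for a standard Gaussian the score is s(x) = -x, so the per-dimension moment is 1; the Monte Carlo estimate should converge to that value as `n_samples` grows.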