Score-based diffusion models (SDMs) have emerged as a powerful tool for sampling from the posterior distribution in Bayesian inverse problems. However, existing methods often require multiple evaluations of the forward map to generate a single sample, incurring substantial computational cost for large-scale inverse problems. To address this, we propose an unconditional representation of the conditional score function (UCoS) tailored to linear inverse problems, which avoids forward-model evaluations during sampling by shifting the computational effort to an offline training phase. In this phase, a \emph{task-dependent} score function is learned based on the linear forward operator. Crucially, we show that the conditional score can be derived \emph{exactly} from this trained (unconditional) score using affine transformations, eliminating the need for conditional score approximations. Our approach is formulated in infinite-dimensional function spaces, making it inherently discretization-invariant. We support this formulation with a rigorous convergence analysis that justifies UCoS beyond any specific discretization. Finally, we validate UCoS through high-dimensional computed tomography (CT) and image deblurring experiments, demonstrating both scalability and accuracy.
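As context for the affine relation claimed above, the following is a minimal sketch of the classical score decomposition in the linear-Gaussian setting; the notation $A$, $\Sigma$, and $y$ is illustrative and not taken from the paper, and the identity below is the standard Bayes' rule decomposition at the data distribution ($t=0$), not the UCoS construction itself. By Bayes' rule, the posterior score splits into a prior term and a likelihood term, and for a linear observation $y = Ax + \varepsilon$ with Gaussian noise $\varepsilon \sim \mathcal{N}(0, \Sigma)$ the likelihood term is affine in $x$:
\[
\nabla_x \log p(x \mid y) \;=\; \nabla_x \log p(x) \;+\; \nabla_x \log p(y \mid x),
\qquad
\nabla_x \log p(y \mid x) \;=\; A^{*} \Sigma^{-1} \bigl( y - A x \bigr).
\]
For diffused densities $p_t$ with $t > 0$ this likelihood term is generally intractable, which is why many existing samplers resort to approximations; the abstract's claim is that training a task-dependent score lets the conditional score be recovered exactly through affine maps of this kind.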