Diffusion probabilistic models have recently achieved remarkable success in generating high-quality images. However, balancing high perceptual quality against low distortion remains challenging when applying diffusion models to image compression. To address this issue, we propose a novel Uncertainty-Guided image compression approach with wavelet Diffusion (UGDiff). Our approach focuses on high-frequency compression via the wavelet transform, since high-frequency components are crucial for reconstructing image details. We introduce a wavelet conditional diffusion model for high-frequency prediction, followed by a residual codec that compresses the prediction residuals and transmits them to the decoder. This diffusion prediction-then-residual compression paradigm effectively addresses the low-fidelity issue common in direct reconstructions by existing diffusion models. Considering the uncertainty introduced by the random sampling of the diffusion model, we further design an uncertainty-weighted rate-distortion (R-D) loss tailored for residual compression, providing a more rational trade-off between rate and distortion. Comprehensive experiments on two benchmark datasets validate the effectiveness of UGDiff, which surpasses state-of-the-art image compression methods in R-D performance, perceptual quality, subjective quality, and inference time. Our code is available at: https://github.com/hejiaxiang1/Wavelet-Diffusion/tree/main.
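The uncertainty-weighted R-D loss can be illustrated with a minimal sketch. The abstract does not give the exact weighting scheme, so the formulation below is an assumption for illustration: a per-pixel uncertainty map (e.g., estimated from the variance of repeated diffusion samples) is normalized and used to weight the squared error of the residual reconstruction, combined with the rate in the classic R + λ·D form.

```python
import numpy as np

def uncertainty_weighted_rd_loss(residual, residual_hat, rate_bits,
                                 uncertainty, lam=0.01):
    """Hypothetical sketch of an uncertainty-weighted R-D loss.

    `uncertainty` is assumed to be a per-pixel map (e.g., the variance of
    repeated diffusion samples); regions the diffusion model predicts less
    reliably receive larger distortion weights.
    """
    # Normalize the uncertainty map to mean 1 so the overall distortion
    # scale (and hence the meaning of lam) stays comparable to plain MSE.
    weights = uncertainty / (uncertainty.mean() + 1e-8)
    distortion = np.mean(weights * (residual - residual_hat) ** 2)
    # Classic rate-distortion trade-off: L = R + lam * D.
    return rate_bits + lam * distortion
```

With a uniform uncertainty map the weights reduce to 1 and the loss collapses to the standard unweighted R-D objective, so the weighting only changes behavior where the diffusion prediction is actually unreliable.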