Microscopy images are crucial for life science research, allowing detailed inspection and characterization of cellular and tissue-level structures and functions. However, microscopy data are unavoidably affected by image degradations such as noise, blur, and other distortions. Many of these degradations also reduce image contrast, an effect that becomes especially pronounced in deeper regions of thick samples. Today, the best-performing methods for improving image quality are based on deep learning and typically require ground truth (GT) data during training. Our inability to counteract blurring and contrast loss when imaging deep into samples prevents the acquisition of such clean GT data. Because the forward process of blurring and contrast loss deep inside tissue can be modeled, we were able to propose a new method that circumvents the problem of unobtainable GT data. To this end, we first synthetically degraded the quality of microscopy images even further, using an approximate forward model of deep-tissue image degradation. We then trained a neural network to learn the inverse of this degradation function from the resulting pairs of raw and degraded images. We demonstrated that networks trained in this way can be used out-of-distribution (OOD) to improve the quality of less severely degraded images, e.g. the raw data acquired in the microscope. Since the absolute level of degradation in such microscopy images can be stronger than the additional degradation introduced by our forward model, we also explored the effect of iterative predictions. Here, we observed that with each iteration the measured image contrast kept improving, while fine structures in the images were increasingly removed. Therefore, depending on the desired downstream analysis, a balance between contrast improvement and the retention of image detail has to be found.
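The following is a minimal sketch, not the authors' implementation, of the degrade-and-learn-the-inverse idea described above. It assumes a hypothetical approximate forward model consisting of Gaussian blur, contrast attenuation, and additive noise; all function names (`forward_degrade`, `make_training_pairs`, `iterative_restore`), parameter values, and the placeholder `restore_fn` are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (assumptions, not the authors' code): a hypothetical
# approximate forward model for deep-tissue degradation, used to generate
# training pairs (further-degraded input, raw target), plus an
# iterative-prediction loop for out-of-distribution use.
import numpy as np
from scipy.ndimage import gaussian_filter


def forward_degrade(img, blur_sigma=2.0, contrast_factor=0.6, noise_std=0.02):
    """Apply additional blur, contrast attenuation, and noise to a raw image.

    All parameters are illustrative placeholders; the paper only states that
    an approximate forward model of deep-tissue blurring and contrast loss
    is used.
    """
    degraded = gaussian_filter(img.astype(np.float32), sigma=blur_sigma)  # extra blur
    # pull intensities toward the mean to mimic contrast loss
    degraded = degraded * contrast_factor + img.mean() * (1.0 - contrast_factor)
    degraded = degraded + np.random.normal(0.0, noise_std, img.shape)     # additive noise
    return degraded.astype(np.float32)


def make_training_pairs(raw_images):
    """Pair each raw microscopy image with a synthetically degraded version.

    A network trained on these pairs maps degraded -> raw, i.e. it learns an
    approximate inverse of the forward degradation.
    """
    return [(forward_degrade(raw), raw) for raw in raw_images]


def iterative_restore(img, restore_fn, n_iter=3):
    """Apply a trained restoration network repeatedly (OOD use on raw data).

    Each iteration tends to improve contrast but removes more fine detail,
    so n_iter should be chosen according to the downstream analysis.
    """
    out = img
    for _ in range(n_iter):
        out = restore_fn(out)
    return out
```

As a usage note, `restore_fn` stands in for whatever trained network is available; the loop simply makes explicit the trade-off mentioned in the abstract, where more iterations buy contrast at the cost of detail.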