Acquiring electron microscopy (EM) images at high resolution is time-consuming and expensive, and can be detrimental to the integrity of the samples under observation. Advances in deep learning enable us to perform super-resolution computationally, obtaining high-resolution images from low-resolution ones. When trained on pairs of experimentally acquired EM images under a pooled-training strategy, prior super-resolution models suffer performance loss because they cannot capture inter-image dependencies and the common features shared among images. Although methods exist that exploit shared features among input instances in image classification tasks, they cannot be applied in their current form to super-resolution tasks, because they fail to preserve an essential property of image-to-image transformation problems: equivariance to spatial permutations. To address these limitations, we propose augmented equivariant attention networks (AEANets), which better capture inter-image dependencies and shared features while preserving equivariance to spatial permutations. AEANets capture inter-image dependencies and common features shared among images via two augmentations of the attention mechanism: shared references and batch-aware attention during training. We theoretically establish the equivariance property of the proposed augmented attention model and experimentally show that AEANets consistently outperform the baselines in both quantitative and visual results.
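The core idea of batch-aware attention with shared references can be illustrated with a minimal NumPy sketch. This is our own simplified illustration, not the paper's implementation: each query position attends over keys/values pooled from the entire batch, so inter-image features are shared, and because attention is applied position-wise the output remains equivariant to spatial permutations (the function name, tensor shapes, and scaling choice below are assumptions for illustration).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def batch_aware_attention(feats):
    """Toy batch-aware attention (illustrative only).

    feats: (B, N, C) array of B images, each with N spatial
    positions and C feature channels.
    Keys/values are pooled across the whole batch ("shared
    references"), so every query position can attend to features
    from every image in the batch.
    """
    B, N, C = feats.shape
    kv = feats.reshape(B * N, C)            # shared references: all positions in the batch
    scores = feats @ kv.T / np.sqrt(C)      # (B, N, B*N) similarity of each query to each key
    attn = softmax(scores, axis=-1)         # attention weights over the pooled keys
    return attn @ kv                        # (B, N, C) aggregated values
```

Because the output at each position is a weighted sum over the (order-independent) set of pooled keys, applying the same spatial permutation to every image's positions permutes the output identically, which is the equivariance property the abstract refers to.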