Infrared and visible image fusion, a hot topic in the field of image processing, aims to obtain fused images that retain the advantages of both source images. This paper proposes a novel auto-encoder (AE) based fusion network. The core idea is that the encoder decomposes an image into background and detail feature maps containing low- and high-frequency information, respectively, and the decoder recovers the original image. To this end, the loss function encourages the background feature maps of the two source images to be similar and their detail feature maps to be dissimilar. In the test phase, the background and detail feature maps are merged separately via a fusion module, and the fused image is recovered by the decoder. Qualitative and quantitative results illustrate that our method generates fused images with highlighted targets and abundant detail texture information, exhibits strong robustness, and surpasses state-of-the-art (SOTA) approaches.
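The decomposition idea above can be sketched numerically. The snippet below is a minimal illustration, not the paper's actual formulation: it assumes a squared-Frobenius distance pulls the two background maps together, while the same distance on the detail maps is bounded by `tanh` and maximized (subtracted) to push them apart, and it assumes the test-phase fusion module is simple element-wise addition. The function names, the `tanh` bounding, and the additive fusion are all illustrative assumptions.

```python
import numpy as np

def decomposition_loss(b1, b2, d1, d2):
    """Illustrative decomposition loss (an assumption, not the paper's exact loss).

    Background maps b1, b2 of the two source images should be similar
    (minimize their distance); detail maps d1, d2 should be dissimilar
    (maximize their distance, bounded by tanh for stability).
    """
    bg_term = np.sum((b1 - b2) ** 2)               # pull backgrounds together
    detail_term = np.tanh(np.sum((d1 - d2) ** 2))  # push details apart (bounded)
    return bg_term - detail_term

def fuse(feat_a, feat_b):
    """Hypothetical test-phase fusion module: element-wise addition of the
    corresponding feature maps from the two source images."""
    return feat_a + feat_b
```

A well-trained decomposition should therefore score lower when the backgrounds match and the details differ than in the opposite situation, which is easy to sanity-check with toy arrays.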