Visible-to-thermal face image matching is a challenging variant of cross-modality recognition. The difficulty stems from the large modality gap and the low correlation between the visible and thermal spectra. Existing approaches rely on image preprocessing, feature extraction, or common-subspace projection, each of which is treated as an independent problem. In this paper, we propose an end-to-end framework for cross-modal face recognition. The proposed algorithm learns identity-discriminative features from unprocessed facial images and identifies cross-modal image pairs. A novel Unit-Class Loss is proposed to preserve identity information while discarding modality information. In addition, a Cross-Modality Discriminator block is proposed to integrate image-pair classification capability into the network. The trained network can be used either to extract modality-independent vector representations or to classify matching pairs of test images. Our cross-modality face recognition experiments on five independent databases demonstrate that the proposed method achieves a marked improvement over existing state-of-the-art methods.