Over the last several years, research on facial recognition based on Deep Neural Networks has evolved through approaches such as task-specific loss functions, image normalization and augmentation, and network architectures. However, there have been few approaches that attend to how human faces differ from person to person. On the premise that inter-personal differences appear both globally and locally on the human face, I propose FusiformNet, a novel framework for feature extraction that leverages this nature of discriminative facial features. Tested on the Image-Unrestricted setting of the Labeled Faces in the Wild benchmark, this method achieved a state-of-the-art accuracy of 96.67% without labeled outside data, image augmentation, normalization, or special loss functions. Likewise, the method performed on a par with previous state-of-the-art methods when pre-trained on the CASIA-WebFace dataset. Considering its ability to extract both general and local facial features, the utility of FusiformNet may not be limited to facial recognition but may also extend to other DNN-based tasks.