In this paper, we propose a novel identity-free conditional Generative Adversarial Network (IF-GAN) to explicitly reduce inter-subject variations in facial expression recognition. Specifically, for any given input face image, a conditional generative model transforms an average neutral face, computed from multiple subjects showing neutral expressions, into an average expressive face with the same expression as the input image. Since the transformed images share the same synthetic "average" identity, they differ only in their expressions and can thus be used for identity-free expression classification. In this work, an end-to-end system performs both expression transformation and expression recognition within the IF-GAN framework. Experimental results on three facial expression datasets demonstrate that the proposed IF-GAN outperforms the baseline CNN model and achieves comparable or better performance than state-of-the-art facial expression recognition methods.
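The "average neutral face" mentioned above is described as a pixel-wise mean over the neutral-expression images of multiple subjects. A minimal sketch of that step, assuming aligned, equally sized grayscale images (the helper name `average_face` and the toy data are hypothetical, not from the paper):

```python
def average_face(faces):
    """Pixel-wise mean of equally sized grayscale face images.

    `faces` is a list of 2-D lists of pixel intensities, one per subject.
    This is a hypothetical illustration of how an "average neutral face"
    could be computed from several subjects' neutral-expression images;
    the paper does not specify its exact implementation.
    """
    n = len(faces)
    rows, cols = len(faces[0]), len(faces[0][0])
    return [[sum(f[r][c] for f in faces) / n for c in range(cols)]
            for r in range(rows)]

# Three tiny 2x2 "neutral faces" from different subjects (toy data).
neutral_faces = [
    [[0, 64], [128, 255]],
    [[0, 32], [64, 255]],
    [[0, 96], [192, 255]],
]
avg_neutral = average_face(neutral_faces)
```

In the full pipeline this average face would be fed, together with the expression condition extracted from the input image, to the conditional generator; in practice the images would be pre-aligned (e.g. by facial landmarks) before averaging so that the mean face stays sharp.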