Emotion recognition plays a role in numerous real-world applications. As more modalities become available, automatic emotion understanding can be performed more accurately. Success in Multimodal Emotion Recognition (MER) primarily relies on the supervised learning paradigm. However, data annotation is expensive and time-consuming, and since emotion expression and perception depend on several factors (e.g., age, gender, culture), obtaining highly reliable labels is hard. Motivated by these challenges, we focus on unsupervised feature learning for MER. We consider discrete emotions and use text, audio, and vision as modalities. Our method, based on a contrastive loss between pairwise modalities, is the first such attempt in the MER literature. Our end-to-end feature learning approach has several differences from (and advantages over) existing MER methods: i) it is unsupervised, so learning is free of data-labelling cost; ii) it does not require spatial data augmentation, modality alignment, a large batch size, or many epochs; iii) it applies data fusion only at inference; and iv) it does not require backbones pre-trained on an emotion recognition task. Experiments on benchmark datasets show that our method outperforms several baseline approaches and unsupervised learning methods applied in MER. Notably, it even surpasses a few supervised state-of-the-art MER methods.
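To make the core idea concrete, below is a minimal sketch of a pairwise contrastive objective between two modality embeddings, in the spirit of the loss described above. The abstract does not specify the exact formulation; this sketch assumes a symmetric InfoNCE-style loss, and the function name `pairwise_contrastive_loss`, the `temperature` parameter, and the three-modality combination shown in the comments are illustrative assumptions, not the paper's definitive implementation.

```python
# Hedged sketch: symmetric InfoNCE-style contrastive loss between embeddings
# from two modalities (e.g., text vs. audio). Embeddings of the same sample
# are treated as positives; all other in-batch pairs act as negatives.
import torch
import torch.nn.functional as F

def pairwise_contrastive_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                              temperature: float = 0.1) -> torch.Tensor:
    z_a = F.normalize(z_a, dim=-1)          # (batch, dim), modality A
    z_b = F.normalize(z_b, dim=-1)          # (batch, dim), modality B
    logits = z_a @ z_b.t() / temperature    # cosine-similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # Cross-entropy in both directions (A->B and B->A), then average.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# With three modalities, one plausible total loss sums the pairwise terms:
# loss = (pairwise_contrastive_loss(z_text, z_audio)
#         + pairwise_contrastive_loss(z_text, z_vision)
#         + pairwise_contrastive_loss(z_audio, z_vision))
```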