Despite their strong performance in multimodal emotion reasoning, existing Multimodal Large Language Models (MLLMs) often overlook scenarios involving emotion conflicts, where emotional cues from different modalities are inconsistent. To fill this gap, we first introduce CA-MER, a new benchmark designed to examine MLLMs under realistic emotion conflicts. It consists of three subsets: video-aligned, audio-aligned, and consistent, in which either only one modality or all modalities reflect the true emotion. However, evaluations on our CA-MER reveal that current state-of-the-art emotion MLLMs systematically over-rely on the audio signal during emotion conflicts, neglecting critical cues from the visual modality. To mitigate this bias, we propose MoSEAR, a parameter-efficient framework that promotes balanced modality integration. MoSEAR consists of two modules: (1) MoSE, modality-specific experts with a regularized gating mechanism that reduces modality bias in the fine-tuning heads; and (2) AR, an attention reallocation mechanism that rebalances modality contributions in the frozen backbones during inference. Our framework offers two key advantages: it mitigates emotion conflicts and improves performance on consistent samples, without incurring a trade-off between the audio and visual modalities. Experiments on multiple benchmarks, including MER2023, EMER, DFEW, and our CA-MER, demonstrate that MoSEAR achieves state-of-the-art performance, particularly under modality conflict conditions.
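To make the MoSE idea concrete, below is a minimal sketch of a fine-tuning head built from modality-specific experts with a regularized gate. All names, dimensions, and the entropy-style gate penalty are illustrative assumptions for exposition; they are not the paper's exact formulation of MoSEAR.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoSEHead(nn.Module):
    """Hypothetical sketch: one lightweight expert per modality plus a gate
    whose output distribution is regularized toward balance, so that no single
    modality (e.g., audio) dominates the fused prediction."""

    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.order = ("audio", "visual", "text")
        # One expert classifier per modality (assumed pooled features of size `dim`).
        self.experts = nn.ModuleDict({m: nn.Linear(dim, num_classes) for m in self.order})
        # Gate scores the modalities from the concatenated features.
        self.gate = nn.Linear(dim * len(self.order), len(self.order))

    def forward(self, feats: dict) -> tuple:
        # feats[m]: (batch, dim) pooled feature for modality m
        gate_logits = self.gate(torch.cat([feats[m] for m in self.order], dim=-1))
        weights = F.softmax(gate_logits, dim=-1)                          # (batch, M)
        expert_logits = torch.stack(
            [self.experts[m](feats[m]) for m in self.order], dim=1)      # (batch, M, C)
        logits = (weights.unsqueeze(-1) * expert_logits).sum(dim=1)      # (batch, C)
        # Regularizer (assumed): penalize low-entropy gates, i.e., over-reliance
        # on one modality; scale and add to the task loss during fine-tuning.
        entropy = -(weights * (weights + 1e-8).log()).sum(dim=-1).mean()
        reg_loss = -entropy
        return logits, reg_loss


# Usage sketch:
# head = MoSEHead(dim=768, num_classes=7)
# logits, reg = head({m: torch.randn(4, 768) for m in ("audio", "visual", "text")})
# loss = F.cross_entropy(logits, labels) + 0.1 * reg
```

The AR module, by contrast, operates at inference time by rescaling how much attention the frozen backbone allocates to each modality's tokens; it requires no additional trainable parameters.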