The expansion of the Internet and social networks has led to an explosion of user-generated content. Understanding author intent plays a crucial role in interpreting social media content. This paper addresses author intent classification in Bangla social media posts by leveraging both textual and visual data. Recognizing the limitations of previous unimodal approaches, we systematically benchmark transformer-based language models (mBERT, DistilBERT, XLM-RoBERTa) and vision architectures (ViT, Swin, SwiftFormer, ResNet, DenseNet, MobileNet) on the Uddessho dataset of 3,048 posts spanning six practical intent categories. We introduce a novel intermediate fusion strategy that significantly outperforms early and late fusion on this task. Experimental results show that intermediate fusion, particularly with mBERT and the Swin Transformer, achieves a macro-F1 score of 84.11%, establishing a new state of the art with an 8.4 percentage-point improvement over prior Bangla multimodal approaches. Our analysis demonstrates that integrating visual context substantially enhances intent classification, and that cross-modal feature integration at intermediate levels provides an optimal balance between modality-specific representation and cross-modal learning. This research establishes new benchmarks and methodological standards for Bangla and other low-resource languages. We name our proposed framework BangACMM (Bangla Author Content MultiModal).
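To make the intermediate-fusion design concrete, the sketch below shows one way such a model could be assembled with off-the-shelf Hugging Face checkpoints (bert-base-multilingual-cased and microsoft/swin-tiny-patch4-window7-224). The pooling choices, projection sizes, and classifier head are illustrative assumptions, not the exact configuration reported in the paper.

```python
# Minimal sketch of intermediate fusion for six-way intent classification.
# Checkpoint names, pooling, and head dimensions are assumptions for illustration.
import torch
import torch.nn as nn
from transformers import AutoModel


class IntermediateFusionClassifier(nn.Module):
    def __init__(self, num_classes: int = 6):
        super().__init__()
        self.text_encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")
        self.image_encoder = AutoModel.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
        text_dim = self.text_encoder.config.hidden_size    # 768 for mBERT
        image_dim = self.image_encoder.config.hidden_size  # 768 for Swin-Tiny's final stage
        # Project each modality's encoder output into a shared space and fuse
        # before the classification head (intermediate fusion), rather than
        # fusing raw inputs (early) or per-modality predictions (late).
        self.text_proj = nn.Linear(text_dim, 512)
        self.image_proj = nn.Linear(image_dim, 512)
        self.classifier = nn.Sequential(
            nn.Linear(512 * 2, 256),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(256, num_classes),
        )

    def forward(self, input_ids, attention_mask, pixel_values):
        # Mean-pool mBERT token embeddings as the text representation.
        text_out = self.text_encoder(input_ids=input_ids, attention_mask=attention_mask)
        text_feat = text_out.last_hidden_state.mean(dim=1)
        # Pooled patch features from the Swin Transformer as the image representation.
        image_out = self.image_encoder(pixel_values=pixel_values)
        image_feat = image_out.pooler_output
        fused = torch.cat([self.text_proj(text_feat), self.image_proj(image_feat)], dim=-1)
        return self.classifier(fused)  # logits over the six intent categories
```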