Multilingual e-commerce search is challenging due to linguistic diversity and the noise inherent in user-generated queries. This paper documents the solution employed by our team (EAR-MP) for the CIKM 2025 AnalytiCup, which addresses two core tasks: Query-Category (QC) relevance and Query-Item (QI) relevance. Our approach first unifies the multilingual dataset by translating all text into English, then mitigates noise through extensive data cleaning and normalization. For model training, we build on DeBERTa-v3-large and improve performance with label smoothing, self-distillation, and dropout. In addition, we introduce task-specific upgrades, including hierarchical token injection for QC and a hybrid scoring mechanism for QI. Under constrained compute, our method achieves competitive results, attaining F1 scores of 0.8796 on QC and 0.8744 on QI. These findings underscore the importance of systematic data preprocessing and tailored training strategies for building robust, resource-efficient multilingual relevance systems.