Speech Large Language Models (SpeechLLMs) have achieved breakthroughs in multilingual speech-to-text translation (S2TT). However, existing approaches often overlook semantic commonalities across source languages, leading to biased translation performance. In this work, we propose \textbf{POTSA} (Parallel Optimal Transport for Speech Alignment), a new framework based on cross-lingual parallel speech pairs and Optimal Transport (OT), designed to bridge the translation gap between high- and low-resource languages. First, we introduce a Bias Compensation module to coarsely align initial speech representations across languages. Second, we impose token-level OT constraints on a Q-Former using parallel speech pairs to establish fine-grained consistency of representations. Finally, we apply a layer scheduling strategy to focus the OT constraints on the most semantically beneficial layers. Experiments on the FLEURS dataset show that our method achieves state-of-the-art performance, with +0.93 BLEU on average over five common languages and +5.05 BLEU on zero-shot languages, using only 10 hours of parallel speech per source language.
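For context, the following is a minimal sketch of how a token-level OT constraint between parallel speech representations is commonly formulated (entropic OT with a cosine cost); the abstract does not specify POTSA's exact cost function, marginals, or regularization, so the symbols below are illustrative assumptions rather than the paper's definition.
\[
\mathcal{L}_{\mathrm{OT}} = \min_{\mathbf{P} \in \Pi(\mathbf{a}, \mathbf{b})} \langle \mathbf{P}, \mathbf{C} \rangle - \varepsilon H(\mathbf{P}),
\qquad
C_{ij} = 1 - \frac{\mathbf{h}_i^{\top} \mathbf{g}_j}{\lVert \mathbf{h}_i \rVert \, \lVert \mathbf{g}_j \rVert},
\]
where $\mathbf{h}_i$ and $\mathbf{g}_j$ denote Q-Former output tokens for the two utterances of a cross-lingual parallel speech pair, $\Pi(\mathbf{a}, \mathbf{b})$ is the set of transport plans with (e.g., uniform) marginals $\mathbf{a}$ and $\mathbf{b}$, and $H(\mathbf{P})$ is the entropic regularizer that makes the problem solvable with Sinkhorn iterations.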