We present an approach based on multilingual sentence embeddings to automatically extract parallel sentences from the content of Wikipedia articles in 85 languages, including several dialects and low-resource languages. We do not limit the extraction process to alignments with English, but systematically consider all possible language pairs. In total, we extract 135M parallel sentences for 1620 different language pairs, of which only 34M are aligned with English. This corpus of parallel sentences is freely available at https://github.com/facebookresearch/LASER/tasks/WikiMatrix. To get an indication of the quality of the extracted bitexts, we train neural MT baseline systems on the mined data alone for 1886 language pairs and evaluate them on the TED corpus, achieving strong BLEU scores for many language pairs. The WikiMatrix bitexts appear to be particularly well suited for training MT systems between distant languages without pivoting through English.
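To illustrate the kind of embedding-based mining the abstract refers to, the sketch below scores candidate sentence pairs with a ratio-margin criterion over multilingual sentence embeddings (following Artetxe and Schwenk, 2019): the cosine similarity of a pair is divided by the average similarity of each sentence to its k nearest neighbours in the other language. This is a minimal sketch, not the authors' exact pipeline; the `encode` function and the score threshold in the usage comment are hypothetical stand-ins for a LASER-style encoder and a tuned mining threshold.

```python
import numpy as np

def margin_scores(x_emb, y_emb, k=4):
    """Ratio-margin scores for all candidate pairs between two languages.

    x_emb, y_emb: (n_x, d) and (n_y, d) sentence embedding matrices.
    Returns an (n_x, n_y) matrix; higher scores indicate more likely
    translation pairs.
    """
    # L2-normalise so dot products are cosine similarities.
    x = x_emb / np.linalg.norm(x_emb, axis=1, keepdims=True)
    y = y_emb / np.linalg.norm(y_emb, axis=1, keepdims=True)
    sim = x @ y.T  # (n_x, n_y) cosine similarity matrix

    # Mean similarity of each sentence to its k nearest neighbours
    # in the other language.
    knn_x = np.sort(sim, axis=1)[:, -k:].mean(axis=1)  # each x over all y
    knn_y = np.sort(sim, axis=0)[-k:, :].mean(axis=0)  # each y over all x

    # Margin = cosine of the pair relative to the average of the two
    # neighbourhood similarities.
    denom = (knn_x[:, None] + knn_y[None, :]) / 2.0
    return sim / denom

# Hypothetical usage, assuming `encode` maps sentences to a shared
# multilingual embedding space (e.g. a LASER-style encoder):
#   de_emb = encode(german_sentences)
#   fr_emb = encode(french_sentences)
#   scores = margin_scores(de_emb, fr_emb)
#   pairs = np.argwhere(scores > 1.06)  # threshold is illustrative only
```

In practice, mining at Wikipedia scale would replace the dense similarity matrix with approximate nearest-neighbour search, but the margin computation itself is unchanged.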