The vast majority of language pairs in the world are low-resource because they have little, if any, parallel data available. Unfortunately, machine translation (MT) systems do not currently work well in this setting. Besides the technical challenges of learning with limited supervision, there is another challenge: it is very difficult to evaluate methods trained on low-resource language pairs because very few benchmarks are freely and publicly available. In this work, we take sentences from Wikipedia pages and introduce new evaluation datasets for two very low-resource language pairs, Nepali-English and Sinhala-English. These languages have very different morphology and syntax from English, little out-of-domain parallel data available, and relatively large amounts of freely available monolingual data. We describe our process for collecting translations and cross-checking their quality, and we report baseline performance under several learning settings: fully supervised, weakly supervised, semi-supervised, and fully unsupervised. Our experiments demonstrate that current state-of-the-art methods perform rather poorly on this benchmark, posing a challenge to the research community working on low-resource MT. Data and code to reproduce our experiments are available at https://github.com/facebookresearch/flores.
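The baseline comparisons above are reported with BLEU, the standard corpus-level MT metric. As a minimal illustrative sketch (not the paper's evaluation code, which in practice would use a tool such as sacrebleu), the core computation assumes whitespace tokenization and a single reference per segment:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Counts of all n-grams of length n in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hypotheses, references, max_n=4):
    """Corpus-level BLEU: geometric mean of modified n-gram precisions
    (n = 1..max_n) times a brevity penalty. Single reference per segment."""
    matches = [0] * max_n   # clipped n-gram matches, pooled over the corpus
    totals = [0] * max_n    # total hypothesis n-grams, pooled over the corpus
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            h_counts = ngrams(h, n)
            r_counts = ngrams(r, n)
            # Clip each hypothesis n-gram count by its count in the reference.
            matches[n - 1] += sum(min(c, r_counts[g]) for g, c in h_counts.items())
            totals[n - 1] += max(len(h) - n + 1, 0)
    if min(matches) == 0:
        return 0.0  # any zero precision zeroes the geometric mean
    log_precision = sum(math.log(m / t) for m, t in zip(matches, totals)) / max_n
    # Brevity penalty: punish hypotheses shorter than the references.
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / max(hyp_len, 1))
    return 100 * bp * math.exp(log_precision)
```

A perfect translation scores 100; scores on genuinely low-resource pairs such as those studied here are typically far lower, which is what makes the benchmark challenging.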