Unsupervised neural machine translation (UNMT), which relies solely on massive monolingual corpora, has achieved remarkable results on several translation tasks. However, in real-world scenarios, massive monolingual corpora do not exist for some extremely low-resource languages such as Estonian, and UNMT systems usually perform poorly when adequate training corpora are not available for one of the two languages. In this paper, we first define and analyze the unbalanced training data scenario for UNMT. Based on this scenario, we propose UNMT self-training mechanisms to train a robust UNMT system and improve its performance under unbalanced data. Experimental results on several language pairs show that the proposed methods substantially outperform conventional UNMT systems.