While machine translation has traditionally relied on large parallel corpora, a recent line of research has managed to train both Neural Machine Translation (NMT) and Statistical Machine Translation (SMT) systems using monolingual corpora only. In this paper, we identify and address several deficiencies of existing unsupervised SMT approaches by exploiting subword information, developing a theoretically well-founded unsupervised tuning method, and incorporating a joint refinement procedure. Moreover, we use our improved SMT system to initialize a dual NMT model, which is further fine-tuned through on-the-fly back-translation. Together, these improvements yield large gains over the previous state of the art in unsupervised machine translation. For instance, we obtain 22.5 BLEU points on English-to-German WMT 2014, 5.5 points more than the previous best unsupervised system, and 0.5 points more than the (supervised) shared task winner back in 2014.
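The on-the-fly back-translation mentioned above can be illustrated with a minimal sketch: monolingual target-language text is translated back into the source language by the reverse model, and the resulting synthetic pairs serve as training data for the forward model. The word-substitution "models" and the toy English-German vocabulary below are illustrative assumptions, not the paper's actual NMT architecture.

```python
# Minimal sketch of on-the-fly back-translation for dual MT training.
# The "models" here are toy word-substitution tables standing in for
# neural translation models; all names and data are illustrative.

def translate(sentence, table):
    """Translate word by word using a substitution table (toy model)."""
    return " ".join(table.get(word, word) for word in sentence.split())

def backtranslation_pairs(mono_tgt, tgt2src_table):
    """Build synthetic (source, target) pairs from monolingual target text.

    Each real target sentence is back-translated into a synthetic source
    sentence by the reverse model; the pair (synthetic source, real target)
    then serves as supervised-style training data for the forward model.
    """
    return [(translate(t, tgt2src_table), t) for t in mono_tgt]

# Toy English<->German substitution tables (hypothetical vocabulary).
en2de = {"the": "die", "cat": "katze", "sleeps": "schlaeft"}
de2en = {v: k for k, v in en2de.items()}

# Monolingual German text produces synthetic English inputs.
mono_de = ["die katze schlaeft"]
pairs = backtranslation_pairs(mono_de, de2en)
print(pairs)  # synthetic English paired with the real German sentence
```

In the full system this loop runs in both directions at once: each model generates training data for its dual, so both improve jointly without ever seeing a real parallel corpus.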