One of the basic tasks of computational language documentation (CLD) is to identify word boundaries in an unsegmented phonemic stream. While several unsupervised monolingual word-segmentation algorithms exist in the literature, they are challenged in real-world CLD settings by the small amount of available data. A possible remedy is to take advantage of glosses or translations into a well-resourced foreign language, which often exist for such data. In this paper, we explore and compare ways to exploit neural machine translation models to perform unsupervised boundary detection with bilingual information, notably introducing a new loss function for jointly learning alignment and segmentation. We experiment with an actual under-resourced language, Mboshi, and show that these techniques can effectively control the output segmentation length.