Cross-lingual dependency parsing transfers syntactic knowledge from one language to another and is a crucial component for inducing dependency parsers in low-resource scenarios where no training data exists for the target language. Using Faroese as the target language, we compare two annotation-projection approaches: first, projecting from multiple monolingual source models; second, projecting from a single polyglot model trained on the combination of all source languages. Furthermore, we reproduce multi-source projection (Tyers et al., 2018), in which dependency trees from multiple sources are combined. Finally, we apply multi-treebank modelling to the projected treebanks, in addition to, or instead of, polyglot modelling on the source side. We find that polyglot training on the source languages yields better results overall on the target language, but the single best result is obtained by projecting from monolingual source parsing models and then training multi-treebank POS tagging and parsing models on the target side.
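The core of multi-source projection is combining the dependency trees predicted by several source parsers into one target-side tree. A minimal sketch of that combination step is per-token majority voting over predicted heads; the function name and tie-breaking scheme here are illustrative assumptions, and in practice the voted arc scores are typically decoded with a spanning-tree algorithm (e.g. Chu-Liu/Edmonds) to guarantee a valid tree, as in Tyers et al. (2018):

```python
from collections import Counter

def vote_heads(predictions):
    """Combine head predictions from several source parsers by per-token
    majority vote (illustrative sketch, not the paper's exact method).

    predictions: list of head sequences, one per source parser; each is a
    list of head indices (0 = root) of equal length. Ties are broken in
    favour of the parser listed first. The voted result is not guaranteed
    to be a well-formed tree; a real system would run a spanning-tree
    decoder over the accumulated vote counts.
    """
    n = len(predictions[0])
    combined = []
    for i in range(n):
        votes = Counter(pred[i] for pred in predictions)
        combined.append(votes.most_common(1)[0][0])
    return combined

# Three hypothetical source parsers analysing a 4-token sentence:
preds = [
    [2, 0, 2, 3],  # parser A
    [2, 0, 2, 2],  # parser B
    [3, 0, 2, 3],  # parser C
]
print(vote_heads(preds))  # majority head per token: [2, 0, 2, 3]
```

Here the voted output happens to form a valid tree (token 2 is the root); when votes produce cycles or multiple roots, the spanning-tree decoding step resolves them.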