While there has been significant progress towards developing NLU datasets and benchmarks for Indic languages, syntactic evaluation remains comparatively underexplored. Unlike English, Indic languages have rich morphosyntax, grammatical gender, relatively free word order, and highly inflectional morphology. In this paper, we introduce Vy\=akarana: a benchmark of gender-balanced Colorless Green sentences in Indic languages for syntactic evaluation of multilingual language models. The benchmark comprises four syntax-related tasks: PoS Tagging, Syntax Tree-depth Prediction, Grammatical Case Marking, and Subject-Verb Agreement. We use the datasets from the evaluation tasks to probe five multilingual language models of varying architectures for syntax in Indic languages. Our results show that the token-level and sentence-level representations from the Indic language models (IndicBERT and MuRIL) do not capture syntax in Indic languages as effectively as the other, more broadly multilingual language models. Further, our layer-wise probing experiments reveal that while mBERT, DistilmBERT, and XLM-R localize syntax in their middle layers, the Indic language models show no such syntactic localization.
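The layer-wise probing setup described above can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's actual pipeline): random matrices stand in for per-layer token representations that would, in practice, come from an encoder such as mBERT, and a linear probe is fit per layer to predict PoS tags. The toy data is constructed so that deeper "layers" carry more tag signal, making the accuracy trend visible.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for token representations: in a real probing study these would be
# one (n_tokens, hidden_dim) activation matrix per encoder layer.
n_layers, n_tokens, hidden_dim, n_tags = 4, 600, 32, 5
tags = rng.integers(0, n_tags, size=n_tokens)  # hypothetical PoS tag ids

# Synthetic layers: deeper layers mix in progressively more tag-dependent
# signal, mimicking representations that encode more syntax at higher depth.
centers = rng.normal(size=(n_tags, hidden_dim))
layers = [
    rng.normal(size=(n_tokens, hidden_dim)) + (depth / n_layers) * centers[tags]
    for depth in range(1, n_layers + 1)
]

def probe_accuracy(X, y):
    """Fit a linear probe on layer representations; return held-out accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

accuracies = [probe_accuracy(X, tags) for X in layers]
for i, acc in enumerate(accuracies, start=1):
    print(f"layer {i}: probe accuracy {acc:.2f}")
```

Comparing the per-layer accuracy profile across models is what reveals where syntax is "localized": a peak in the middle layers, as reported for mBERT, DistilmBERT, and XLM-R, versus a flat profile.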