Large Language Models (LLMs) display notable variation in multilingual behavior, yet the role of genealogical language structure in shaping this variation remains underexplored. In this paper, we investigate whether LLMs exhibit sensitivity to linguistic genera by extending prior analyses on the MultiQ dataset. We first test whether models prefer to switch to genealogically related languages when they fail to maintain prompt-language fidelity. We then investigate whether knowledge consistency is better preserved within genera than across them. We show that genus-level effects are present but strongly conditioned by the availability of training resources. We further observe distinct multilingual strategies across LLM families. Our findings suggest that LLMs encode aspects of genus-level structure, but training data imbalances remain the primary factor shaping their multilingual performance.