Purpose: Large Language Models (LLMs) are increasingly likely to be used to score the quality of academic publications in support of research assessment goals. This may cause problems for fields with competing paradigms, since there is a risk that one paradigm will be favoured, causing long-term harm to the reputation of the other.
Design/methodology/approach: To test whether this is plausible, this article uses 17 ChatGPT instances to evaluate up to 100 journal articles from each side of eight pairs of competing sociology paradigms (1490 altogether). Each article was assessed by prompting ChatGPT to take one of five roles: paradigm follower, opponent, antagonistic follower, antagonistic opponent, or neutral.
Findings: Articles were scored highest when ChatGPT followed the aligning paradigm, and lowest when it was told to devalue that paradigm and follow the opposing one. Broadly similar patterns occurred for most of the paradigm pairs. Follower ChatGPTs displayed only slight favouritism compared to neutral ChatGPTs, but articles evaluated by an opposing-paradigm ChatGPT were at a substantial disadvantage.
Research limitations: The data covers a single field and a single LLM.
Practical implications: The results confirm that LLM instructions for research evaluation should be carefully designed to be paradigm-neutral, to avoid accidentally resolving conflicts between paradigms on a technicality by devaluing one side's contributions.
Originality/value: This is the first demonstration that LLMs can be prompted to show partiality for academic paradigms.
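The design described above amounts to role-conditioned prompting: the same article text is scored repeatedly, with only the evaluator persona in the system prompt varying across the five roles. The sketch below illustrates the general idea using the OpenAI Python SDK; the role wordings, model name, and scoring scale are illustrative assumptions, not the paper's exact prompts.

```python
# A minimal sketch of role-conditioned article scoring.
# Assumptions (not from the paper): the exact role prompt wordings, the
# model name, and the 1-4 quality scale are all hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical system prompts for the five evaluator roles described above.
ROLE_PROMPTS = {
    "follower": "You are an expert in {paradigm} and value research in this tradition.",
    "opponent": "You are an expert in {rival} and evaluate research from that perspective.",
    "antagonistic_follower": "You are an expert in {paradigm} and consider {rival} misguided.",
    "antagonistic_opponent": "You are an expert in {rival} and consider {paradigm} misguided.",
    "neutral": "You are an expert research evaluator with no paradigm preference.",
}

def score_article(article_text: str, role: str, paradigm: str, rival: str) -> str:
    """Ask the model to grade one article under a given evaluator role."""
    system_prompt = ROLE_PROMPTS[role].format(paradigm=paradigm, rival=rival)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any ChatGPT-family model
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": (
                "Score the quality of the following journal article on a "
                "1-4 scale (4 = world-leading) and briefly justify the score.\n\n"
                + article_text
            )},
        ],
    )
    return response.choices[0].message.content

# Example: evaluate one article from the perspective of a paradigm follower,
# using a hypothetical sociology paradigm pair.
# print(score_article(article_text, "follower", "conflict theory", "functionalism"))
```

Holding the article text and scoring instruction fixed while swapping only the persona isolates the effect of paradigm alignment on the scores, which is the comparison the findings report.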