In this study, we propose a structured methodology that uses large language models (LLMs) in a cost-efficient and parsimonious manner, integrating the strengths of scholars and machines while offsetting their respective weaknesses. Our methodology, facilitated through chain-of-thought and few-shot prompting techniques from computer science, extends best practices for co-author teams in qualitative research to human-machine teams in quantitative research. This allows humans to use abductive reasoning and natural language to interrogate not only what the machine has done but also what the human has done. Our method highlights how scholars can manage the inherent weaknesses of LLMs with careful, low-cost techniques. We demonstrate the methodology by interrogating human-machine rating discrepancies for a sample of 1,934 press releases announcing pharmaceutical alliances (1990–2017).
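To make the prompting approach concrete, the sketch below shows one way a chain-of-thought, few-shot rating call might look in Python using the OpenAI SDK. It is illustrative only, not the paper's instrument: the construct ("collaboration depth"), the rating scale, the exemplars, and the model name are all assumptions introduced for this sketch.

```python
# Minimal sketch of a few-shot, chain-of-thought rating prompt, assuming the
# OpenAI Python SDK. Construct, scale, exemplars, and model are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Few-shot exemplars: human-rated press releases paired with the reasoning
# behind each rating (hypothetical texts; the paper's sample covers 1,934
# pharmaceutical-alliance press releases, 1990-2017).
FEW_SHOT = """\
Example 1:
Press release: "AlphaPharm and BetaBio announce a co-development alliance..."
Reasoning: The text commits both firms to joint R&D, so collaboration depth is high.
Rating: 5

Example 2:
Press release: "GammaRx licenses a compound from DeltaTx..."
Reasoning: A one-way licensing deal implies limited joint activity.
Rating: 2
"""

def rate_press_release(text: str) -> str:
    """Ask the model to reason step by step, then emit a 1-5 rating."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the paper does not mandate one
        temperature=0,   # deterministic output eases auditing discrepancies
        messages=[
            {"role": "system",
             "content": "You rate pharmaceutical-alliance press releases on "
                        "collaboration depth (1-5). Think step by step, state "
                        "your reasoning, then end with 'Rating: <n>'."},
            {"role": "user", "content": FEW_SHOT + "\nNow rate:\n" + text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(rate_press_release("EpsilonBio and ZetaPharma form a joint venture..."))
```

Because the model states its reasoning before the rating, a human rater can compare that rationale against their own when the two ratings diverge, which is the discrepancy-interrogation step the methodology describes.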