Qualitative coding is a demanding yet crucial research method in Human-Computer Interaction (HCI). While recent studies have shown that large language models (LLMs) can perform qualitative coding within existing theoretical frameworks, their potential to support collaborative human-LLM discovery and the generation of new insights beyond initial theory remains underexplored. To bridge this gap, we propose CHALET, a novel approach that leverages human-LLM collaboration to advance theory-driven qualitative analysis through iterative coding, disagreement analysis, and conceptualization of qualitative data. We demonstrate CHALET's utility by applying it to the qualitative analysis of conversations about mental-illness stigma, using the attribution model as the theoretical framework. The results highlight the unique contribution of human-LLM collaboration in uncovering latent themes of stigma across cognitive, emotional, and behavioral dimensions. We conclude by discussing the methodological implications of this human-LLM collaborative approach to theory-based qualitative analysis for the HCI community and beyond.