This study explores the dynamics of trust in artificial intelligence (AI) agents, particularly large language models (LLMs), by introducing the concept of "deferred trust": a cognitive mechanism whereby distrust in human agents redirects reliance toward AI systems perceived as more neutral or competent. Drawing on frameworks from social psychology and technology acceptance models, the research addresses gaps in understanding the user-centric factors that influence trust in AI. Fifty-five undergraduate students participated in an experiment involving 30 decision-making scenarios (factual, emotional, moral), selecting among AI agents (e.g., ChatGPT), voice assistants, peers, adults, or priests as guides. Data were analyzed with K-Modes and K-Means clustering to identify choice patterns, and with XGBoost models interpreted via SHAP to predict AI selection from sociodemographic and prior-trust variables. Results showed adults (35.05\%) and AI (28.29\%) as the most frequently selected agents overall. Clustering revealed context-specific preferences: AI dominated factual scenarios, while human agents prevailed in social and moral ones. Lower prior trust in human agents (priests, peers, adults) consistently predicted higher AI selection, supporting deferred trust as a compensatory transfer. Participant profiles with higher trust in AI were characterized by distrust of human agents, lower technology use, and higher socioeconomic status. Models showed stable predictive performance (average precision up to 0.863). These findings challenge traditional models such as TAM/UTAUT by emphasizing relational and epistemic dimensions of AI trust. They also highlight risks of over-reliance driven by fluency effects and underscore the need for transparency mechanisms to calibrate user vigilance. Limitations include sample homogeneity and static scenarios; future work should incorporate more diverse populations and multimodal data to refine the deferred-trust construct across contexts.
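To make the analysis pipeline concrete, the following Python sketch illustrates how clustering of categorical agent choices, XGBoost prediction of AI selection, and SHAP interpretation could be assembled. All column names, the file path, cluster count, and hyperparameters are hypothetical placeholders, not taken from the study's materials.

\begin{verbatim}
# Illustrative sketch only; data layout and variable names are assumed.
import pandas as pd
from kmodes.kmodes import KModes                 # K-Modes for categorical choice data
from sklearn.model_selection import train_test_split
from sklearn.metrics import average_precision_score
from xgboost import XGBClassifier
import shap

df = pd.read_csv("responses.csv")                # hypothetical long-format responses

# 1) Cluster participants by their agent choices across the 30 scenarios.
choices = df.pivot(index="participant", columns="scenario", values="agent_chosen")
km = KModes(n_clusters=4, init="Huang", n_init=10, random_state=0)
cluster_labels = km.fit_predict(choices)

# 2) Predict AI selection from sociodemographic and prior-trust variables.
features = ["age", "ses", "tech_use", "trust_peers", "trust_adults", "trust_priests"]
X = df.groupby("participant")[features].first()
y = (df.groupby("participant")["agent_chosen"]
       .apply(lambda s: (s == "AI").mean() > 0.5)      # majority-AI chooser
       .astype(int))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X_train, y_train)
ap = average_precision_score(y_test, model.predict_proba(X_test)[:, 1])
print("Average precision:", ap)

# 3) Interpret the fitted model with SHAP feature attributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)
\end{verbatim}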