The integration of large language models (LLMs) into healthcare IoT systems promises faster decisions and improved medical support. LLMs are also deployed as multi-agent teams to assist AI doctors by debating, voting, or advising on decisions. However, when multiple assistant agents interact, coordinated adversaries can collude to create false consensus, pushing an AI doctor toward harmful prescriptions. We develop an experimental framework with scripted and unscripted doctor agents, adversarial assistants, and a verifier agent that checks decisions against clinical guidelines. Using 50 representative clinical questions, we find that collusion drives the Attack Success Rate (ASR) and Harmful Recommendation Rates (HRR) up to 100% in unprotected systems. In contrast, the verifier agent restores 100% accuracy by blocking adversarial consensus. This work provides the first systematic evidence of collusion risk in AI healthcare and demonstrates a practical, lightweight defence that ensures guideline fidelity.
翻译:大型语言模型(LLMs)与医疗物联网系统的整合有望加速决策并改善医疗支持。LLMs还被部署为多智能体团队,通过辩论、投票或提供建议来辅助AI医生进行决策。然而,当多个辅助智能体交互时,协调的对抗者可能共谋制造虚假共识,推动AI医生开具有害处方。我们开发了一个实验框架,包含脚本化和非脚本化的医生智能体、对抗性辅助智能体,以及一个根据临床指南核查决策的验证智能体。使用50个代表性临床问题,我们发现共谋在无保护系统中将攻击成功率(ASR)和有害建议率(HRR)提升至100%。相比之下,验证智能体通过阻断对抗性共识恢复了100%的准确性。这项研究首次系统性地证明了AI医疗中的共谋风险,并展示了一种确保指南依从性的实用、轻量级防御方法。