Artificial intelligence holds great promise for expanding access to expert medical knowledge and reasoning. However, most evaluations of language models rely on static vignettes and multiple-choice questions that fail to reflect the complexity and nuance of evidence-based medicine in real-world settings. In clinical practice, physicians iteratively formulate and revise diagnostic hypotheses, adapting each subsequent question and test to what they have just learned, and weigh the evolving evidence before committing to a final diagnosis. To emulate this iterative process, we introduce the Sequential Diagnosis Benchmark, which transforms 304 diagnostically challenging New England Journal of Medicine clinicopathological conference (NEJM-CPC) cases into stepwise diagnostic encounters. A physician or AI begins with a short case abstract and must iteratively request additional details from a gatekeeper model that reveals findings only when explicitly queried. Performance is assessed not just by diagnostic accuracy but also by the cost of physician visits and tests performed. We also present the MAI Diagnostic Orchestrator (MAI-DxO), a model-agnostic orchestrator that simulates a panel of physicians, proposes likely differential diagnoses, and strategically selects high-value, cost-effective tests. When paired with OpenAI's o3 model, MAI-DxO achieves 80% diagnostic accuracy, four times the 20% average of generalist physicians. MAI-DxO also reduces diagnostic costs by 20% compared to physicians, and by 70% compared to off-the-shelf o3. When configured for maximum accuracy, MAI-DxO achieves 85.5% accuracy. These performance gains with MAI-DxO generalize across models from the OpenAI, Gemini, Claude, Grok, DeepSeek, and Llama families. We highlight how AI systems, when guided to think iteratively and act judiciously, can advance diagnostic precision and cost-effectiveness in clinical care.
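The stepwise encounter protocol described above can be sketched in a few lines of code. This is a minimal illustrative sketch, not the paper's implementation: the class names (`Gatekeeper`, `Encounter`), the flat per-question visit cost, and the test prices are all hypothetical placeholders assumed for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Gatekeeper:
    """Holds the full case record but reveals a finding only when
    the diagnostic agent explicitly asks about it."""
    findings: dict  # keyword -> finding text (illustrative lookup scheme)

    def answer(self, query: str) -> str:
        for keyword, finding in self.findings.items():
            if keyword in query.lower():
                return finding
        return "No additional information available for that question."

@dataclass
class Encounter:
    """One sequential diagnostic encounter: tracks the transcript of
    questions and the accumulated cost of visits and ordered tests."""
    gatekeeper: Gatekeeper
    visit_cost: float = 300.0  # hypothetical flat cost per physician question
    test_costs: dict = field(default_factory=dict)
    total_cost: float = 0.0
    transcript: list = field(default_factory=list)

    def ask(self, query: str) -> str:
        self.total_cost += self.visit_cost
        reply = self.gatekeeper.answer(query)
        self.transcript.append((query, reply))
        return reply

    def order_test(self, test: str) -> str:
        self.total_cost += self.test_costs.get(test, 100.0)
        return self.gatekeeper.answer(test)

# Toy case: two findings, one priced test.
gk = Gatekeeper(findings={
    "fever": "Patient reports intermittent fevers for two weeks.",
    "blood culture": "Blood cultures grow gram-positive cocci in clusters.",
})
enc = Encounter(gatekeeper=gk, test_costs={"blood culture": 150.0})
enc.ask("Does the patient have fever?")
enc.order_test("blood culture")
print(enc.total_cost)  # 300.0 visit + 150.0 test = 450.0
```

An agent (or orchestrator such as MAI-DxO) would loop over `ask` and `order_test`, weighing the evolving evidence against the running `total_cost` before committing to a final diagnosis, which is then scored against the NEJM-CPC ground truth.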