Alzheimer's disease and related dementias (ADRD) affect nearly five million older adults in the United States, yet more than half remain undiagnosed. Speech-based natural language processing (NLP) offers a scalable approach for detecting early cognitive decline through subtle linguistic markers that may precede clinical diagnosis. This study develops and evaluates a speech-based screening pipeline that integrates transformer embeddings with handcrafted linguistic features, synthetic augmentation using large language models (LLMs), and benchmarking of unimodal and multimodal classifiers. External validation assessed generalizability to a mild cognitive impairment (MCI)-only cohort. Transcripts were drawn from the ADReSSo 2021 benchmark dataset (n=237, Pitt Corpus) and the DementiaBank Delaware corpus (n=205, MCI vs. controls). Ten transformer models were tested under three fine-tuning strategies. A late-fusion model combined embeddings from the best-performing transformer with 110 linguistic features. Five LLMs (LLaMA8B/70B, MedAlpaca7B, Ministral8B, GPT-4o) generated label-conditioned synthetic speech for augmentation, and three multimodal LLMs (GPT-4o, Qwen-Omni, Phi-4) were evaluated in zero-shot and fine-tuned modes. On ADReSSo, the fusion model achieved F1=83.3 (AUC=89.5), outperforming transformer-only and linguistic-only baselines. MedAlpaca7B augmentation (2x) improved F1 to 85.7, though larger augmentation scales reduced gains. Fine-tuning boosted unimodal LLMs (MedAlpaca7B: F1 from 47.7 to 78.7), while multimodal models performed lower (Phi-4: 71.6; GPT-4o: 67.6). On the Delaware corpus, the fusion model with 1x MedAlpaca7B augmentation achieved F1=72.8 (AUC=69.6). Integrating transformer and linguistic features enhances ADRD detection. LLM-based augmentation improves data efficiency but yields diminishing returns, while current multimodal models remain limited. Validation on an independent MCI cohort supports the pipeline's potential for scalable, clinically relevant early screening.
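A minimal sketch of the late-fusion idea described above, assuming a pooled transformer embedding concatenated with a handcrafted linguistic feature vector and fed to a shallow classifier. The model name, the toy three-dimensional feature extractor (standing in for the 110 features), and the logistic-regression head are illustrative assumptions, not the study's exact configuration.

```python
# Hypothetical late-fusion sketch: transformer embedding + handcrafted linguistic
# features -> concatenated vector -> shallow classifier. Model name, feature set,
# and classifier are illustrative assumptions, not the study's exact setup.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "bert-base-uncased"  # placeholder; the study benchmarks ten transformers
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)
encoder.eval()

def embed(transcript: str) -> np.ndarray:
    """Masked mean-pooling of the last hidden state into a fixed-length embedding."""
    inputs = tokenizer(transcript, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state   # (1, seq_len, dim)
    mask = inputs["attention_mask"].unsqueeze(-1)      # (1, seq_len, 1)
    pooled = (hidden * mask).sum(1) / mask.sum(1)      # average over real tokens only
    return pooled.squeeze(0).numpy()

def linguistic_features(transcript: str) -> np.ndarray:
    """Stand-in for the handcrafted extractor (e.g., lexical richness, fillers);
    returns a toy 3-dim vector rather than the paper's 110 features."""
    tokens = transcript.split()
    n_tokens = len(tokens)
    type_token_ratio = len(set(tokens)) / max(n_tokens, 1)
    mean_word_len = float(np.mean([len(t) for t in tokens])) if tokens else 0.0
    return np.array([n_tokens, type_token_ratio, mean_word_len], dtype=float)

def fuse(transcript: str) -> np.ndarray:
    """Late fusion by simple feature concatenation."""
    return np.concatenate([embed(transcript), linguistic_features(transcript)])

# Toy usage with dummy transcripts and labels (1 = ADRD, 0 = control)
train_texts = [
    "uh the boy is um taking the the cookie",
    "the boy reaches for the cookie jar while the stool tips over",
]
train_labels = [1, 0]
X = np.vstack([fuse(t) for t in train_texts])
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)
print(clf.predict(np.vstack([fuse("the the boy uh cookie")])))
```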