Pathology foundation models (FMs) have driven significant progress in computational pathology. However, these high-performing models can easily exceed a billion parameters and produce high-dimensional embeddings, limiting their applicability for research or clinical use when computing resources are constrained. Here, we introduce Pathryoshka, a multi-teacher distillation framework inspired by RADIO distillation and Matryoshka Representation Learning that reduces pathology FM sizes while allowing for adaptable embedding dimensions. We evaluate our framework with a distilled model on ten public pathology benchmarks covering varied downstream tasks. Compared to its much larger teachers, Pathryoshka reduces model size by 86-92% at on-par performance. It outperforms state-of-the-art single-teacher distillation models of comparable size by a median margin of 7.0 points in accuracy. By enabling efficient local deployment without sacrificing accuracy or representational richness, Pathryoshka democratizes access to state-of-the-art pathology FMs for the broader research and clinical community.
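To make the core idea concrete, the sketch below shows one plausible way to combine multi-teacher distillation with Matryoshka-style nested embeddings: truncated prefixes of the student embedding are projected into each teacher's embedding space and aligned with a cosine objective. All module names, dimensions, and the choice of loss are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a Matryoshka-style multi-teacher distillation loss.
# Hypothetical shapes and names; not the Pathryoshka reference implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MatryoshkaDistillLoss(nn.Module):
    """Align truncated student embeddings with each teacher via small projection heads."""

    def __init__(self, student_dim: int, teacher_dims: list[int],
                 nesting_dims=(64, 128, 256, 512)):
        super().__init__()
        self.nesting_dims = [d for d in nesting_dims if d <= student_dim]
        # One linear head per (nesting dim, teacher) pair maps the truncated
        # student embedding into that teacher's embedding space.
        self.heads = nn.ModuleList([
            nn.ModuleList([nn.Linear(d, t_dim) for t_dim in teacher_dims])
            for d in self.nesting_dims
        ])

    def forward(self, student_emb: torch.Tensor,
                teacher_embs: list[torch.Tensor]) -> torch.Tensor:
        loss = 0.0
        for heads_at_d, d in zip(self.heads, self.nesting_dims):
            truncated = student_emb[:, :d]  # Matryoshka: use only the first d dimensions
            for head, t_emb in zip(heads_at_d, teacher_embs):
                pred = head(truncated)
                # Cosine distance between projected student and teacher embeddings.
                loss = loss + (1 - F.cosine_similarity(pred, t_emb, dim=-1)).mean()
        return loss / (len(self.nesting_dims) * len(teacher_embs))


# Usage sketch: a 768-d student distilled from two teachers with 1024-d and 1536-d outputs.
if __name__ == "__main__":
    criterion = MatryoshkaDistillLoss(student_dim=768, teacher_dims=[1024, 1536])
    student = torch.randn(4, 768)
    teachers = [torch.randn(4, 1024), torch.randn(4, 1536)]
    print(criterion(student, teachers).item())
```

Under this kind of objective, every prefix of the student embedding is trained to be useful on its own, which is what allows the embedding dimension to be chosen at deployment time without retraining.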