Existing learning-based solutions to medical image segmentation have two important shortcomings. First, for most new segmentation tasks, a new model has to be trained or fine-tuned. This requires extensive resources and machine learning expertise, and is therefore often infeasible for medical researchers and clinicians. Second, most existing segmentation methods produce a single deterministic segmentation mask for a given image. In practice, however, there is often considerable uncertainty about what constitutes the correct segmentation, and different expert annotators will often segment the same image differently. We tackle both of these problems with Tyche, a model that uses a context set to generate stochastic predictions for previously unseen tasks without the need to retrain. Tyche differs from other in-context segmentation methods in two important ways. (1) We introduce a novel convolution block architecture that enables interactions among predictions. (2) We introduce in-context test-time augmentation, a new mechanism that provides prediction stochasticity. Combined with appropriate model design and loss functions, Tyche can predict a set of plausible, diverse segmentation candidates for new or unseen medical images and segmentation tasks without retraining.
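To make the in-context test-time augmentation idea concrete, the following is a minimal sketch, not Tyche's actual architecture: a placeholder `segment` function stands in for the in-context network, and stochasticity comes from applying random paired flips to the context images and masks before each forward pass, yielding multiple candidate segmentations for one target image. All function names and the toy data here are illustrative assumptions.

```python
import numpy as np

def augment_pair(image, mask, rng):
    """Apply the same random flips to an image and its mask (illustrative TTA)."""
    if rng.random() < 0.5:
        image, mask = image[:, ::-1], mask[:, ::-1]
    if rng.random() < 0.5:
        image, mask = image[::-1, :], mask[::-1, :]
    return image, mask

def segment(target, context_images, context_masks):
    """Stand-in for an in-context segmentation model: averages the
    context masks and thresholds at 0.5. NOT Tyche's network."""
    return (np.mean(context_masks, axis=0) > 0.5).astype(np.uint8)

def in_context_tta(target, context_images, context_masks, k, rng):
    """Produce k stochastic candidates by re-augmenting the context set."""
    candidates = []
    for _ in range(k):
        pairs = [augment_pair(im, m, rng)
                 for im, m in zip(context_images, context_masks)]
        aug_imgs = [p[0] for p in pairs]
        aug_masks = [p[1] for p in pairs]
        candidates.append(segment(target, aug_imgs, aug_masks))
    return candidates

# Toy 4x4 target with a 3-example context set.
rng = np.random.default_rng(0)
target = rng.random((4, 4))
ctx_imgs = [rng.random((4, 4)) for _ in range(3)]
ctx_masks = [(rng.random((4, 4)) > 0.5).astype(np.uint8) for _ in range(3)]

candidates = in_context_tta(target, ctx_imgs, ctx_masks, k=5, rng=rng)
print(len(candidates))  # 5 candidate segmentations for one target image
```

Because each candidate sees a differently augmented context set, the candidates can disagree, loosely mirroring the disagreement among expert annotators that motivates stochastic prediction.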