Deep learning has achieved tremendous success in computer vision, but medical image segmentation (MIS) remains challenging due to the scarcity of data annotations. Meta-learning techniques for few-shot segmentation (Meta-FSS) have been widely used to tackle this challenge, but they neglect possible distribution shifts between the query image and the support set. In contrast, an experienced clinician can perceive and address such shifts by borrowing information from the query image, then fine-tuning or calibrating their prior cognitive model accordingly. Inspired by this, we propose Q-Net, a query-informed Meta-FSS approach that mimics in spirit the learning mechanism of an expert clinician. We build Q-Net on ADNet, a recently proposed anomaly detection-inspired method. Specifically, we add two query-informed computation modules to ADNet: a query-informed threshold adaptation module and a query-informed prototype refinement module. Combined with a dual-path extension of the feature extraction module, Q-Net achieves state-of-the-art performance on two widely used datasets, composed of abdominal MR images and cardiac MR images, respectively. Our work sheds light on a novel way to improve Meta-FSS techniques by leveraging query information.
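To make the two query-informed modules concrete, the following is a minimal sketch of the idea, assuming an ADNet-style formulation in which foreground is detected by thresholding cosine similarity between query features and a support-derived prototype. The function names, the mixing weights, and the specific adaptation rules are illustrative assumptions, not the paper's actual equations.

```python
import numpy as np

def cosine_sim(feats, proto):
    """Cosine similarity between query features (N, C) and a prototype (C,)."""
    fn = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    pn = proto / np.linalg.norm(proto)
    return fn @ pn

def query_informed_segment(query_feats, proto, base_thresh=0.5, steps=2):
    """Hedged sketch of Q-Net's two query-informed modules (weights are
    illustrative): (1) adapt the decision threshold using statistics of the
    query's similarity scores; (2) refine the support prototype with query
    features currently predicted as foreground."""
    for _ in range(steps):
        sim = cosine_sim(query_feats, proto)
        # (1) query-informed threshold adaptation: blend the learned base
        # threshold with the mean query score (illustrative rule).
        thresh = 0.5 * base_thresh + 0.5 * sim.mean()
        fg_mask = sim > thresh
        if fg_mask.any():
            # (2) query-informed prototype refinement: mix the support
            # prototype with the mean of confident query foreground features.
            proto = 0.5 * proto + 0.5 * query_feats[fg_mask].mean(axis=0)
    return fg_mask, proto
```

In this toy form, the query contributes twice: its score distribution shifts the threshold, and its confident pixels update the prototype, which is the abstract's sense of "borrowing information from the query image" to calibrate the prior model.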