Semantic segmentation under domain shift remains a fundamental challenge in computer vision, particularly when labelled training data is scarce. This challenge is exemplified in histopathology image analysis, where the same tissue structures must be segmented across images captured under different imaging conditions (stains), each representing a distinct visual domain. Traditional deep learning methods such as UNet require extensive labelled data, which is both costly and time-consuming to obtain, particularly when dealing with multiple domains (or stains). To mitigate this, various methods based on unsupervised domain adaptation, such as UDAGAN, have been proposed; these reduce the need for labels by requiring only one (source) stain to be labelled. Nonetheless, obtaining source stain labels can still be challenging. This article shows that through self-supervised pre-training -- including SimCLR, BYOL, and a novel approach, HR-CS-CO -- the performance of these segmentation methods (UNet and UDAGAN) can be retained even with 95% fewer labels. Notably, with self-supervised pre-training and using only 5% labels, the performance drops are minimal: 5.9% for UNet and 6.2% for UDAGAN, averaged over all stains, compared to their respective fully supervised counterparts (without pre-training, using 100% labels). Furthermore, these findings are shown to generalise beyond their training distribution to public benchmark datasets. Implementations and pre-trained models are publicly available \href{https://github.com/zeeshannisar/resource-effecient-multi-stain-kidney-glomeruli-segmentation.git}{online}.