Breast cancer is the most prevalent cancer worldwide, with over two million new cases diagnosed each year. As part of the tumour grading process, histopathologists manually count how many cells are dividing, in a biological process called mitosis. Artificial intelligence (AI) methods have been developed to automatically detect mitotic figures; however, these methods often perform poorly when applied to data from outside the original (training) domain, i.e. they do not generalise well to histology images produced using varied staining protocols or digitised using different scanners. Style transfer, a form of domain adaptation, provides a means of transforming images from different domains to a shared visual appearance, and has been adopted in various applications to mitigate the issue of domain shift. In this paper we train two mitosis detection models and two style transfer methods, and evaluate the usefulness of the latter for improving mitosis detection performance on images digitised using different scanners. We found that the best of these models, U-Net without style transfer, achieved an F1-score of 0.693 on the MIDOG 2021 preliminary test set.