Liver cancer is one of the most common cancers worldwide. Because texture changes of liver tumors are often inconspicuous, contrast-enhanced computed tomography (CT) imaging is effective for the diagnosis of liver cancer. In this paper, we focus on improving automated liver tumor segmentation by integrating multi-modal CT images. To this end, we propose a novel mutual learning (ML) strategy for effective and robust multi-modal liver tumor segmentation. Different from existing multi-modal methods that fuse information from different modalities with a single model, in ML an ensemble of modality-specific models learns collaboratively, with the models teaching each other to distill both the characteristics of and the commonality between the high-level representations of different modalities. The proposed ML not only improves multi-modal learning but can also handle missing modalities by transferring knowledge from available modalities to missing ones. Additionally, we present a modality-aware (MA) module, in which the modality-specific models are interconnected and calibrated with attention weights for adaptive information exchange. The proposed modality-aware mutual learning (MAML) method achieves promising results for liver tumor segmentation on a large-scale clinical dataset. Moreover, we show the efficacy and robustness of MAML for handling missing modalities on both the liver tumor and public brain tumor (BRATS 2018) datasets. Our code is available at https://github.com/YaoZhang93/MAML.
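To make the two core ideas concrete, the sketch below illustrates (1) a mutual-learning mimicry term in the spirit of deep mutual learning, where each modality-specific model is encouraged to match the other's soft prediction via a KL-divergence term added to its own segmentation loss, and (2) an attention-weighted fusion of per-modality features, loosely mirroring the modality-aware calibration. This is a minimal NumPy illustration under assumptions of our own; the function names, the exact loss form, and the fusion scheme are hypothetical and not taken from the paper's implementation.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax along the given axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kl_div(p, q, eps=1e-8):
    """KL(p || q) over the class axis, averaged over voxels."""
    return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)))

def mutual_learning_losses(logits_a, logits_b):
    """Mimicry terms for two modality-specific models (hypothetical form):
    each model is pushed toward the other's soft class distribution.
    Returns (loss added to model A, loss added to model B)."""
    p_a, p_b = softmax(logits_a), softmax(logits_b)
    return kl_div(p_b, p_a), kl_div(p_a, p_b)

def modality_aware_fusion(feat_a, feat_b, w_logit_a, w_logit_b):
    """Attention-weighted exchange of per-modality features (hypothetical):
    a softmax over the modality dimension yields per-voxel weights."""
    w = softmax(np.stack([w_logit_a, w_logit_b], axis=-1), axis=-1)
    return w[..., 0] * feat_a + w[..., 1] * feat_b
```

In practice each mimicry term would be weighted and added to that model's supervised segmentation loss (e.g. Dice or cross-entropy), so the ensemble both fits the labels and stays mutually consistent.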


翻译:肝癌是全世界最常见的癌症之一。由于肝肿瘤的不明显质变,对比强化计算断层成像(CT)成像对肝癌的诊断有效。在本文件中,我们侧重于通过整合多模式CT图像来改善自动肝肿瘤分化。为此,我们提出一个新的相互学习(ML)战略,以有效和稳健的多模式肝肿瘤分解。不同于现有的多种模式方法,这些方法通过单一模式将不同模式的信息与不同模式的信息结合在一起,而ML是特定模式模型的共通体,相互学习,并相互教导对方如何淡化不同模式的高层次表现的特征和共性。拟议的ML不仅能够使多模式的CT图像相结合,而且能够通过将现有模式的知识转让给缺失的模式来处理缺失的模式。此外,我们介绍了一个模式-觉悟(MA)模块,其中具体模式模型与适应性信息交流的重心重结合。拟议模式-意识模型的相互学习方法(MAML)共同学习了不同模式的特征和共性特征,同时淡化了不同模式的特性特征。 IMLA系统运行了大规模磁带数据。
