Deep learning is rapidly gaining adoption in healthcare to help improve patient outcomes. This is especially true in medical image analysis, a domain that requires extensive training before a practitioner earns the expertise to be trusted. However, while deep learning techniques continue to deliver state-of-the-art predictive performance, one of the primary challenges that stands to hinder their progress in healthcare is the opaque nature of their inference mechanisms. Attribution therefore plays a vital role in building stakeholder confidence in the predictions that deep learning models contribute to clinical decisions. This work seeks to answer the question: what do deep neural network models learn in medical images? To that end, we present a novel attribution framework using adaptive path-based gradient integration techniques. Results show a promising direction for building trust among domain experts, and thereby improving healthcare outcomes, by allowing them to understand the correlative structure between inputs and predictions, discover new biomarkers, and reveal potential model biases.
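The adaptive variant itself is not detailed in this section, so as context only, below is a minimal sketch of plain Integrated Gradients, the canonical path-based gradient integration technique that such frameworks build on: attributions are obtained by integrating the model's gradient along a straight-line path from a baseline to the input. The `model` function, the zero baseline, and the step count are hypothetical placeholders, and the gradient is approximated by finite differences rather than backpropagation so the sketch stays self-contained.

```python
# Minimal sketch of plain Integrated Gradients (not the paper's adaptive method).
# `model` is a hypothetical stand-in for the scalar class logit of a trained
# medical-imaging network.
import numpy as np

def model(x: np.ndarray) -> float:
    # Hypothetical smooth scalar-output model used only for illustration.
    return float(np.tanh(x.sum()))

def integrated_gradients(x, baseline, steps=50, eps=1e-4):
    """Approximate IG_i(x) = (x_i - x'_i) * integral_0^1 dF(x' + a(x - x'))/dx_i da
    with a midpoint Riemann sum over `steps` points on the straight-line path."""
    diff = x - baseline
    total = np.zeros_like(x)
    for alpha in (np.arange(steps) + 0.5) / steps:  # midpoints in (0, 1)
        point = baseline + alpha * diff
        # Finite-difference gradient of the scalar model output at `point`.
        grad = np.zeros_like(x)
        for i in range(x.size):
            bump = np.zeros_like(x)
            bump.flat[i] = eps
            grad.flat[i] = (model(point + bump) - model(point - bump)) / (2 * eps)
        total += grad
    return diff * total / steps  # per-feature attribution

x = np.array([0.2, -0.5, 0.9])
attributions = integrated_gradients(x, baseline=np.zeros_like(x))
# Completeness property: attributions sum to approximately F(x) - F(baseline).
print(attributions, attributions.sum())
```

In practice the gradient would come from backpropagation through the network, and an adaptive scheme would additionally adjust the integration path or step placement rather than fixing a uniform straight-line path as done here.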