The remarkable success of modern image-based AI methods and the resulting interest in applying them to critical decision-making processes have led to a surge in efforts to make such intelligent systems transparent and explainable. The need for explainable AI stems not only from ethical and moral grounds but also from stricter legislation around the world mandating clear and justifiable explanations of any decision taken or assisted by AI. Especially in the medical context, where Computer-Aided Diagnosis can directly influence the treatment and well-being of patients, transparency is of utmost importance for a safe transition from lab research to real-world clinical practice. This paper provides a comprehensive overview of the current state of the art in explaining and interpreting Deep Learning based algorithms in applications of medical research and the diagnosis of diseases. We discuss early achievements in the development of explainable AI for the validation of known disease criteria and the exploration of new potential biomarkers, as well as methods for the subsequent correction of AI models. Various explanation methods, such as visual, textual, post-hoc, ante-hoc, local, and global approaches, are thoroughly and critically analyzed. Finally, we highlight some of the remaining challenges that stand in the way of practical applications of AI as a clinical decision-support tool and provide recommendations for the direction of future research.