An assumption-free automatic check of medical images for potentially overlooked anomalies would be a valuable assistance for radiologists. Deep learning, and Variational Auto-Encoders (VAEs) in particular, have shown great potential for the unsupervised learning of data distributions. In principle, this enables such a check and even the localization of the most suspicious regions in an image. Currently, however, reconstruction-based localization by design requires adapting the model architecture to the specific problem considered at evaluation time, which contradicts the principle of building assumption-free models. We propose complementing the localization with a term derived from the Kullback-Leibler (KL) divergence. For validation, we perform a series of experiments on FashionMNIST as well as on a medical task comprising >1000 healthy subjects and >250 brain tumor patients. The results show that the proposed formalism outperforms state-of-the-art VAE-based anomaly localization across many hyperparameter settings while also achieving competitive peak performance.
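To illustrate the idea, the following is a minimal NumPy sketch of how a KL-derived term could complement a reconstruction-based anomaly map. It assumes a VAE with a diagonal Gaussian posterior and a standard normal prior; the function `anomaly_map` and the multiplicative weighting via `lam` are hypothetical choices for illustration, not the exact formulation of the paper.

```python
import numpy as np

def kl_standard_normal(mu, logvar):
    """Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior.

    mu, logvar: latent mean and log-variance produced by the encoder.
    """
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))

def anomaly_map(x, x_rec, mu, logvar, lam=1.0):
    """Hypothetical per-pixel anomaly score.

    Combines the pixel-wise reconstruction error with the image-level
    KL term, so that samples whose posterior lies far from the prior
    are flagged more strongly. `lam` weights the KL contribution.
    """
    rec_err = (x - x_rec) ** 2          # per-pixel reconstruction error
    kl = kl_standard_normal(mu, logvar)  # scalar, >= 0
    return rec_err * (1.0 + lam * kl)
```

The appeal of the KL term is that it is available for any VAE regardless of architecture, which is what makes the approach assumption-free with respect to the model design.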