The new era of image segmentation leveraging the power of Deep Neural Networks (DNNs) comes with a price tag: to train a neural network for pixel-wise segmentation, a large number of training samples must be manually labeled at pixel precision. In this work, we address this problem via an indirect solution. Building on advances from the Explainable AI (XAI) community, we extract a pixel-wise binary segmentation from the output of Layer-wise Relevance Propagation (LRP), which explains the decision of a classification network. We show that we achieve results comparable to an established U-Net segmentation architecture, while the generation of the training data is significantly simplified. The proposed method can be trained in a weakly supervised fashion, since the training samples need only be labeled at the image level, yet it still produces a segmentation mask. This makes it applicable to a much wider range of real-world applications where tedious pixel-level labeling is often infeasible.
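The core post-processing step described above, turning an LRP relevance map into a binary segmentation mask, can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: it assumes a per-pixel relevance array has already been computed by some LRP implementation for the classifier, and the function name `relevance_to_mask` and the quantile threshold are hypothetical choices for demonstration.

```python
import numpy as np

def relevance_to_mask(relevance: np.ndarray, quantile: float = 0.8) -> np.ndarray:
    """Binarize an LRP relevance map into a segmentation mask.

    `relevance` is an H x W array of per-pixel relevance scores
    (assumed precomputed by an LRP backward pass through the
    classification network). Only positive relevance, i.e. evidence
    *for* the predicted class, is kept; pixels at or above the given
    quantile of the positive-relevance distribution become foreground.
    """
    pos = np.clip(relevance, 0.0, None)        # discard negative (contradicting) relevance
    if pos.max() == 0.0:
        return np.zeros_like(pos, dtype=bool)  # no evidence for the class anywhere
    thresh = np.quantile(pos[pos > 0], quantile)
    return pos >= thresh

# Usage on a synthetic relevance map with one relevant region:
relevance = np.zeros((8, 8))
relevance[2:5, 2:5] = 1.0                      # classifier "looked at" this square
mask = relevance_to_mask(relevance, quantile=0.5)
```

In practice the threshold (fixed, quantile-based, or adaptive) is a design choice that trades off mask precision against recall.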