We study incremental learning for semantic segmentation, in which no access to the labeled data of previous tasks is available when learning new classes. When incrementally learning new classes, deep neural networks suffer from catastrophic forgetting of previously learned knowledge. To address this problem, we propose a self-training approach that leverages unlabeled data for rehearsal of previous knowledge. Additionally, we propose conflict reduction to resolve conflicts between the pseudo labels generated by the old and new models. We show that maximizing self-entropy can further improve results by smoothing overconfident predictions. Experiments demonstrate state-of-the-art results: a relative gain of up to 114% on Pascal-VOC 2012 and 8.5% on the more challenging ADE20K compared to previous state-of-the-art methods.
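As a rough illustration of the rehearsal objective on unlabeled data, a minimal PyTorch-style sketch is given below; the specific fusion rule, the confidence threshold, and the weight lambda_ent are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def fuse_pseudo_labels(old_logits, new_logits, conf_thresh=0.5):
    """Fuse pseudo labels from the old and new models on an unlabeled image.

    Hypothetical conflict-reduction rule: where the two models disagree,
    keep the more confident prediction; pixels whose confidence falls
    below `conf_thresh` are mapped to the ignore index (255).
    """
    old_prob, old_lbl = old_logits.softmax(dim=1).max(dim=1)
    new_prob, new_lbl = new_logits.softmax(dim=1).max(dim=1)

    # Prefer the new model's label where it is more confident, else keep the old one.
    fused = torch.where(new_prob >= old_prob, new_lbl, old_lbl)
    conf = torch.maximum(new_prob, old_prob)
    fused[conf < conf_thresh] = 255  # ignore low-confidence pixels
    return fused

def self_entropy(logits):
    """Mean per-pixel entropy of the softmax predictions; maximizing this
    term smooths overconfident predictions."""
    p = logits.softmax(dim=1)
    return -(p * p.clamp_min(1e-8).log()).sum(dim=1).mean()

# Example combined objective on an unlabeled batch (lambda_ent is a hypothetical weight):
#   loss = F.cross_entropy(new_logits, fused_labels, ignore_index=255) \
#          - lambda_ent * self_entropy(new_logits)
```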