Current deep learning models often suffer from catastrophic forgetting of old knowledge when continually learning new knowledge. Existing strategies to alleviate this issue often adopt a fixed trade-off between keeping old knowledge (stability) and learning new knowledge (plasticity). However, the stability-plasticity trade-off may need to change dynamically during continual learning for better model performance. In this paper, we propose two novel ways to adaptively balance model stability and plasticity. The first adaptively integrates multiple levels of old knowledge and transfers them to each block level in the new model. The second uses the prediction uncertainty of old knowledge to naturally tune the importance of learning new knowledge during model training. To the best of our knowledge, this is the first work to connect model prediction uncertainty with knowledge distillation for continual learning. In addition, we apply a modified CutMix specifically to augment the data associated with old knowledge, further alleviating the catastrophic forgetting issue. Extensive evaluations on the CIFAR100 and ImageNet datasets confirm the effectiveness of the proposed method for continual learning.
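To give a concrete sense of how prediction uncertainty could modulate knowledge distillation, the following is a minimal PyTorch sketch under our own assumptions, not the paper's exact loss: the function name, the entropy-based uncertainty measure, and the convex weighting between the distillation and new-task terms are all hypothetical choices for illustration.

```python
# Illustrative sketch: uncertainty-weighted knowledge distillation.
# When the old model is uncertain about a batch, the new-task loss is
# emphasized (plasticity); when it is confident, the distillation loss
# is emphasized (stability). All names and weights here are hypothetical.
import torch
import torch.nn.functional as F

def uncertainty_weighted_loss(new_logits, old_logits, targets, temperature=2.0):
    # Standard KD term: match softened output distributions of old and new models.
    log_p_new = F.log_softmax(new_logits / temperature, dim=1)
    p_old = F.softmax(old_logits / temperature, dim=1)
    kd_loss = F.kl_div(log_p_new, p_old, reduction="batchmean") * temperature ** 2

    # Old-model uncertainty: predictive entropy normalized to [0, 1] by log(C).
    entropy = -(p_old * torch.log(p_old.clamp_min(1e-8))).sum(dim=1)
    num_classes = float(p_old.size(1))
    uncertainty = (entropy / torch.log(torch.tensor(num_classes))).mean()

    # New-task cross-entropy on the current labels.
    ce_loss = F.cross_entropy(new_logits, targets)

    # Convex combination: higher old-model uncertainty shifts weight
    # toward learning the new task.
    return (1.0 - uncertainty) * kd_loss + uncertainty * ce_loss
```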