The plasticity-stability dilemma is a central problem in incremental learning, where plasticity refers to the ability to learn new knowledge and stability refers to retaining the knowledge of previous tasks. Due to the lack of training samples from previous tasks, it is hard to balance plasticity and stability. For example, recent null-space projection methods (e.g., Adam-NSCL) have shown promising performance in preserving previous knowledge, but such a strong projection also degrades performance on the current task. To achieve a better plasticity-stability trade-off, in this paper we show that simply averaging two independently optimized network optima, one obtained with null-space projection for past tasks and the other with plain SGD for the current task, attains a meaningful balance between preserving already learned knowledge and granting sufficient flexibility for learning a new task. This simple linear connector also provides a new perspective and a new means of controlling the trade-off between plasticity and stability. We evaluate the proposed method on several benchmark datasets. The results indicate that our simple method achieves notable improvement and performs well on both past and current tasks. In short, our method is extremely simple yet yields a better-balanced model.
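The core operation, averaging the weights of the two independently trained networks, can be sketched as follows. This is a minimal illustration only, assuming two PyTorch models with identical architectures; the function name `average_weights` and the interpolation coefficient `alpha` are illustrative assumptions, not taken from the paper.

```python
import copy
import torch

def average_weights(model_stable, model_plastic, alpha=0.5):
    """Linearly interpolate two sets of network weights.

    model_stable:  network trained with a projection-based update
                   (e.g., null-space projection) to preserve past tasks.
    model_plastic: network trained with plain SGD on the current task.
    alpha:         interpolation coefficient; 0.5 gives a simple average.
    """
    averaged = copy.deepcopy(model_stable)
    with torch.no_grad():
        for p_avg, p_s, p_p in zip(averaged.parameters(),
                                   model_stable.parameters(),
                                   model_plastic.parameters()):
            # Weight-space interpolation between the two optima.
            p_avg.copy_(alpha * p_s + (1.0 - alpha) * p_p)
    return averaged
```

Varying `alpha` between 0 and 1 would trade stability (toward the projected model) against plasticity (toward the SGD model).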