We present a novel training approach, named Merge-and-Bound (M&B), for Class Incremental Learning (CIL), which directly manipulates model weights in the parameter space for optimization. Our algorithm involves two types of weight merging: inter-task weight merging and intra-task weight merging. Inter-task weight merging unifies previous models by averaging the weights of models from all previous stages. On the other hand, intra-task weight merging facilitates the learning of the current task by combining model parameters within the current stage. For reliable weight merging, we also propose a bounded update technique that optimizes the target model with minimal cumulative updates while preserving knowledge from previous tasks; this strategy shows that new models can be obtained effectively near old ones, reducing catastrophic forgetting. M&B integrates seamlessly into existing CIL methods without modifying architectural components or revising learning objectives. We extensively evaluate our algorithm on standard CIL benchmarks and demonstrate superior performance compared to state-of-the-art methods.
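The abstract specifies only the high-level operations, not their implementation. As a rough sketch under assumptions, the following PyTorch snippet illustrates the two ideas named above: uniform weight averaging over model state dicts (the form of merging the abstract describes for inter-task merging, and a plausible reading of intra-task merging) and a norm-bounded update that keeps the new model near the old one. The function names and the L2-norm bounding rule are illustrative assumptions, not the paper's specification.

```python
import torch

def merge_weights(state_dicts):
    """Uniformly average parameters across a list of model state dicts.

    Sketches the weight-averaging idea behind inter-task merging
    (averaging models from all previous stages) and, analogously,
    intra-task merging (combining snapshots within the current stage).
    Assumes every entry is a floating-point tensor of matching shape.
    """
    merged = {}
    for key in state_dicts[0]:
        merged[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    return merged

def bounded_update(old_params, new_params, radius):
    """Bound the cumulative update so the new model stays near the old one.

    `radius` is a hypothetical cap on the per-tensor L2 norm of the total
    parameter change; the paper's exact bounding rule may differ.
    """
    bounded = {}
    for key, old in old_params.items():
        delta = new_params[key] - old
        norm = delta.norm()
        if norm > radius:
            # Rescale the update so its L2 norm equals the allowed radius.
            delta = delta * (radius / norm)
        bounded[key] = old + delta
    return bounded
```

In this reading, each incremental stage would train as usual, then apply `bounded_update` against the previous-stage weights before `merge_weights` combines the resulting models; this keeps successive models close in parameter space, which is the property the abstract credits for reduced catastrophic forgetting.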