Class incremental learning (CIL) has attracted much attention, but most existing works fine-tune the entire representation model, which inevitably causes severe catastrophic forgetting. In contrast, with a semantically rich pre-trained representation model, parameter-additional-tuning (PAT) changes only a few parameters to learn new visual concepts. Recent studies have shown that PAT-based CIL can naturally avoid fighting forgetting through replay or distillation, as most existing methods do. However, we find that PAT-based CIL still suffers from serious semantic drift, a high-level forgetting problem caused by classifier learning bias across different learning phases, which significantly degrades the performance of PAT-based CIL. To address this problem, we propose Incremental Prototype Tuning (IPT), a simple but effective method that tunes category prototypes for classification and learns example prototypes to compensate for semantic drift. Extensive experiments demonstrate that our method effectively compensates for semantic drift. Combined with well-pre-trained ViT backbones and other PAT methods, IPT surpasses state-of-the-art baselines on mainstream incremental learning benchmarks.
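To make the prototype-tuning idea concrete, below is a minimal sketch (not the authors' released code) of a prototype-based classifier over a frozen pre-trained backbone, where only a small set of per-class prototypes is added and tuned at each incremental phase. The class name `PrototypeClassifier`, the cosine-similarity scoring, and the feature dimension are illustrative assumptions.

```python
# Minimal sketch, assuming a frozen pre-trained feature extractor (e.g. a ViT)
# whose outputs are classified by learnable per-class prototypes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeClassifier(nn.Module):
    """Learnable per-class prototypes over frozen features (illustrative)."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.feat_dim = feat_dim
        self.prototypes = nn.ParameterList()  # one prototype per seen class

    def add_classes(self, num_new: int):
        # At each incremental phase, append prototypes for the new classes;
        # prototypes of previously learned classes remain in place, so only
        # a few new parameters are introduced per phase.
        for _ in range(num_new):
            self.prototypes.append(nn.Parameter(torch.randn(self.feat_dim) * 0.02))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Score by cosine similarity between normalized features and prototypes.
        protos = torch.stack(list(self.prototypes), dim=0)  # (num_classes, feat_dim)
        feats = F.normalize(feats, dim=-1)
        protos = F.normalize(protos, dim=-1)
        return feats @ protos.t()                            # (batch, num_classes) logits

# Usage sketch: the backbone stays frozen; only the prototypes are tuned.
clf = PrototypeClassifier(feat_dim=768)
clf.add_classes(10)                        # phase 1: 10 classes
logits = clf(torch.randn(4, 768))          # shape (4, 10)
clf.add_classes(10)                        # phase 2: 10 more classes
logits = clf(torch.randn(4, 768))          # shape (4, 20)
```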