Inspired by the Regularized Lottery Ticket Hypothesis (RLTH), which hypothesizes that there exist smooth (non-binary) subnetworks within a dense network that achieve performance competitive with the dense network, we propose a few-shot class-incremental learning (FSCIL) method referred to as \emph{Soft-SubNetworks (SoftNet)}. Our objective is to learn a sequence of sessions incrementally, where each new session contains only a few training instances per class, while preserving the knowledge of previously learned sessions. SoftNet jointly learns the model weights and adaptive non-binary soft masks at the base training session, where each mask consists of a major and a minor subnetwork; the former aims to minimize catastrophic forgetting during training, and the latter aims to avoid overfitting to the few samples in each new training session. We provide comprehensive empirical validation demonstrating that SoftNet effectively tackles the few-shot class-incremental learning problem, surpassing state-of-the-art baselines on benchmark datasets.
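To make the notion of a non-binary soft mask concrete, the sketch below (not the authors' code) illustrates one plausible construction under assumed details: weights with the largest magnitudes form the major subnetwork and receive a mask value of 1, while the remaining weights form the minor subnetwork and receive random soft values in (0, 1); the `keep_ratio` hyperparameter and magnitude-based selection are assumptions for illustration only.

```python
# Minimal sketch of a soft subnetwork mask (assumed construction, not the paper's code).
import numpy as np

def soft_subnetwork_mask(weights, keep_ratio=0.8, rng=None):
    """Return a non-binary soft mask with the same shape as `weights`.

    `keep_ratio` (assumed hyperparameter) is the fraction of weights,
    selected by magnitude, assigned to the major subnetwork (mask = 1).
    The remaining (minor) weights get random soft values in (0, 1).
    """
    rng = rng or np.random.default_rng(0)
    flat = np.abs(weights).ravel()
    k = max(1, int(keep_ratio * flat.size))
    # Magnitude threshold separating major from minor weights.
    threshold = np.partition(flat, flat.size - k)[flat.size - k]

    # Minor subnetwork: soft values in (0, 1); major subnetwork: exactly 1.
    mask = rng.uniform(0.0, 1.0, size=weights.shape)
    mask[np.abs(weights) >= threshold] = 1.0
    return mask

# Usage: the effective weights are the element-wise product of weights and mask.
w = np.random.default_rng(1).standard_normal((4, 4))
m = soft_subnetwork_mask(w, keep_ratio=0.75)
effective_w = w * m
```

In this reading, the binary part of the mask freezes the most important base-session weights against forgetting, while the soft-valued part leaves some capacity that can still adapt to the few samples of each new session.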