One of the key differences between the learning mechanisms of humans and Artificial Neural Networks (ANNs) is the ability of humans to learn one task at a time. ANNs, on the other hand, can only learn multiple tasks simultaneously: any attempt to learn new tasks incrementally causes them to completely forget previous tasks. This inability to learn incrementally, called Catastrophic Forgetting, is considered a major hurdle in building a true AI system. In this paper, our goal is to isolate the truly effective existing ideas for incremental learning from those that only work under certain conditions. To this end, we first thoroughly analyze the current state-of-the-art method for incremental learning (iCaRL) and demonstrate that its good performance does not arise for the reasons presented in the existing literature. We conclude that the success of iCaRL is primarily due to knowledge distillation, and we identify a key limitation of knowledge distillation, i.e., it often introduces a bias in the classifier. Finally, we propose a dynamic threshold moving algorithm that successfully removes this bias. We demonstrate the effectiveness of our algorithm on the CIFAR100 and MNIST datasets, achieving near-optimal results. Our implementation is available at https://github.com/Khurramjaved96/incremental-learning.
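To make the bias-correction idea concrete, below is a minimal sketch of threshold moving at inference time: the predicted probabilities of classes from earlier tasks are rescaled before taking the argmax, so that the bias introduced during training no longer dominates the prediction. This is an illustrative example under stated assumptions, not the paper's exact procedure; the function name, the fixed `scale` value, and the toy data are assumptions, whereas the algorithm described in the paper computes the scaling dynamically.

```python
import numpy as np

def threshold_moving(probs, old_class_mask, scale):
    """Counteract classifier bias by rescaling predicted probabilities.

    probs          : (n_samples, n_classes) array of softmax outputs
    old_class_mask : boolean vector, True for classes from earlier tasks
    scale          : factor > 1 that boosts old-class probabilities
                     (fixed here for illustration; the paper's algorithm
                     derives it dynamically during training)
    """
    adjusted = probs.copy()
    adjusted[:, old_class_mask] *= scale
    # Renormalize each row back into a probability distribution.
    return adjusted / adjusted.sum(axis=1, keepdims=True)

# Toy usage: 3 samples over 4 classes; the first two classes are "old".
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 4))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
old = np.array([True, True, False, False])
print(threshold_moving(probs, old, scale=1.5).argmax(axis=1))
```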