Which generative model is the most suitable for Continual Learning? This paper aims at evaluating and comparing generative models on disjoint sequential image generation tasks. We investigate how several models learn and forget, considering various strategies: rehearsal, regularization, generative replay, and fine-tuning. We use two quantitative metrics to estimate generation quality and memory ability. We experiment with sequential tasks on three commonly used benchmarks for Continual Learning (MNIST, Fashion MNIST, and CIFAR10). We find that, among all models, the original GAN performs best, and that, among Continual Learning strategies, generative replay outperforms all other methods. Even though we found satisfactory combinations on MNIST and Fashion MNIST, training generative models sequentially on CIFAR10 is particularly unstable and remains a challenge. Our code is available online \footnote{\url{https://github.com/TLESORT/Generative\_Continual\_Learning}}.