Continual learning has been extensively studied for classification tasks, with methods developed primarily to avoid catastrophic forgetting, a phenomenon in which earlier learned concepts are forgotten in favor of more recently observed samples. In this work, we present a set of continual 3D object shape reconstruction tasks, including complete 3D shape reconstruction from different input modalities as well as visible-surface (2.5D) reconstruction, which surprisingly demonstrate positive knowledge transfer (both backward and forward) when trained with standard SGD alone, without additional heuristics. We provide evidence that continuously updated representation learning for single-view 3D shape reconstruction improves performance on both learned and novel categories over time. We present a novel analysis of knowledge transfer ability by examining the output distribution shift across sequential learning tasks. Finally, we show that the robustness of these tasks suggests their potential as a proxy representation learning task for continual classification. The codebase, dataset, and pre-trained models released with this article can be found at https://github.com/rehg-lab/CLRec