Focusing on implicit neural representations, we present a novel in situ training protocol that employs limited memory buffers of full and sketched data samples, where the sketched data are leveraged to prevent catastrophic forgetting. The theoretical motivation for our use of sketching as a regularizer is presented via a simple Johnson-Lindenstrauss-informed result. While our method may be of wider interest in the field of continual learning, we specifically target in situ neural compression using implicit neural representation-based hypernetworks. We evaluate our method on a variety of complex simulation data in two and three dimensions, over long time horizons, and across unstructured grids and non-Cartesian geometries. On these tasks, we show strong reconstruction performance at high compression rates. Most importantly, we demonstrate that sketching enables the presented in situ scheme to approximately match the performance of the equivalent offline method.
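To make the abstract's core idea concrete, the following is a minimal illustrative sketch (not the paper's implementation): past samples are stored in a small replay buffer only in compressed form, via a Johnson-Lindenstrauss-style Gaussian random projection, and the model is penalized for drifting from those sketched targets to mitigate catastrophic forgetting. All names (`SketchedReplayBuffer`, `make_sketch_matrix`, `sketch_dim`, the `model(coords)` interface) are hypothetical and introduced only for this example.

```python
# Illustrative only: a sketched replay buffer for continual training of an
# implicit neural representation. Assumes `model(coords)` maps (N, d) coordinates
# to (N, 1) field values.
import torch


def make_sketch_matrix(full_dim: int, sketch_dim: int) -> torch.Tensor:
    # Dense Gaussian JL projection, scaled so squared norms are preserved in expectation.
    return torch.randn(sketch_dim, full_dim) / sketch_dim ** 0.5


class SketchedReplayBuffer:
    """Holds (coords, sketched values, sketch matrix) triples for past timesteps."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = []

    def add(self, coords: torch.Tensor, values: torch.Tensor, sketch_dim: int) -> None:
        # Store only the sketch of the field values, not the full sample.
        S = make_sketch_matrix(values.shape[0], sketch_dim)
        self.items.append((coords, S @ values, S))
        if len(self.items) > self.capacity:
            self.items.pop(0)  # simple FIFO eviction for the limited memory budget

    def replay_loss(self, model) -> torch.Tensor:
        # Regularizer: penalize disagreement between the sketched prediction
        # on stored coordinates and the stored sketched values.
        loss = torch.tensor(0.0)
        for coords, sketched_vals, S in self.items:
            pred = model(coords).squeeze(-1)
            loss = loss + torch.mean((S @ pred - sketched_vals) ** 2)
        return loss / max(len(self.items), 1)
```

In a training loop this regularizer would simply be added, with some weight, to the reconstruction loss on the current timestep's full-resolution data; the buffer's memory cost scales with `sketch_dim` rather than the full sample size, which is the point of sketching in this setting.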