Mapping systems with novel view synthesis (NVS) capabilities, most notably 3D Gaussian Splatting (3DGS), are widely used in computer vision and across various applications, including augmented reality, robotics, and autonomous driving. However, many current approaches are limited to static scenes. While recent works have begun addressing short-term dynamics (motion within the camera's view), long-term dynamics (the scene evolving through changes that occur out of view) remain less explored. To overcome this limitation, we introduce a dynamic scene-adaptation mechanism that continuously updates 3DGS to reflect the latest changes. Since stale observations disrupt the reconstruction process and make consistency hard to maintain, we propose a novel keyframe management mechanism that discards outdated observations while preserving as much information as possible. We thoroughly evaluate Gaussian Mapping for Evolving Scenes (\ours) on both synthetic and real-world datasets, achieving a 29.7\% improvement in PSNR and a threefold reduction in L1 depth error over the most competitive baseline.
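To make the keyframe-management idea concrete, the sketch below illustrates one plausible form of stale-observation pruning: each stored keyframe is checked against the current map, and keyframes whose recorded depth no longer agrees with the evolved scene are discarded. This is a minimal, hypothetical sketch, not the actual \ours implementation; the names `prune_stale_keyframes`, `render_depth_fn`, and the `l1_thresh` threshold are illustrative assumptions.

```python
import torch

def prune_stale_keyframes(keyframes, gaussians, render_depth_fn, l1_thresh=0.1):
    """Hypothetical stale-keyframe pruning (illustrative, not the paper's method).

    A keyframe whose stored depth disagrees strongly with depth rendered from
    the current Gaussian map is treated as an outdated observation of a scene
    region that has since changed, and is discarded.
    """
    kept = []
    for kf in keyframes:
        # Re-render depth from the current Gaussian map at the keyframe's pose.
        rendered_depth = render_depth_fn(gaussians, kf["pose"])
        # Compare against the depth recorded when the keyframe was captured,
        # over pixels with valid (positive) depth.
        valid = kf["depth"] > 0
        l1_error = (rendered_depth[valid] - kf["depth"][valid]).abs().mean()
        # Keep only keyframes still consistent with the evolved scene; the
        # rest are dropped so they cannot pull optimization toward old geometry.
        if l1_error.item() < l1_thresh:
            kept.append(kf)
    return kept
```

Discarding by per-keyframe consistency rather than by age is one way to match the stated goal of removing outdated observations while preserving as much information as possible: keyframes viewing unchanged regions survive indefinitely, regardless of how old they are.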