We propose 4DGT, a 4D Gaussian-based Transformer model for dynamic scene reconstruction, trained entirely on real-world monocular posed videos. Using 4D Gaussians as an inductive bias, 4DGT unifies static and dynamic components, enabling the modeling of complex, time-varying environments with varying object lifespans. We propose a novel density control strategy during training that enables 4DGT to handle longer space-time input while remaining efficient at rendering time. Our model processes 64 consecutive posed frames in a rolling-window fashion, predicting consistent 4D Gaussians in the scene. Unlike optimization-based methods, 4DGT performs purely feed-forward inference, reducing reconstruction time from hours to seconds and scaling effectively to long video sequences. Trained only on large-scale monocular posed video datasets, 4DGT significantly outperforms prior Gaussian-based networks on real-world videos and achieves accuracy on par with optimization-based methods on cross-domain videos. Project page: https://4dgt.github.io
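The rolling-window, feed-forward inference described above can be pictured with a minimal sketch. This is not the authors' code: the `model` callable, the `STRIDE` overlap, the tensor shapes, and the driver function are illustrative assumptions; only the 64-frame window size comes from the abstract.

```python
# Minimal sketch of rolling-window feed-forward inference over a posed video.
# The model consumes 64 consecutive posed frames per window and predicts a set
# of 4D Gaussians for that span in a single forward pass, with no per-scene
# optimization. `model`, `STRIDE`, and the shapes below are hypothetical.

import torch

WINDOW = 64   # frames per window, per the abstract
STRIDE = 32   # hypothetical overlap between consecutive windows

@torch.no_grad()
def reconstruct(model, frames, poses):
    """frames: (T, 3, H, W) images; poses: (T, 4, 4) camera-to-world matrices."""
    gaussians = []
    for start in range(0, max(len(frames) - WINDOW + 1, 1), STRIDE):
        window_frames = frames[start : start + WINDOW]
        window_poses = poses[start : start + WINDOW]
        # One feed-forward pass per window: reconstruction takes seconds
        # instead of the hours required by optimization-based methods.
        gaussians.append(model(window_frames, window_poses))
    return gaussians  # per-window 4D Gaussian sets covering the video
```

Because each window is an independent forward pass, the cost grows linearly with video length, which is what lets this style of model scale to long sequences.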