We introduce Cupid, a generative 3D reconstruction framework that jointly models the full distribution over both canonical objects and camera poses. Our two-stage flow-based model first generates a coarse 3D structure together with 2D-3D correspondences, from which the camera pose is estimated robustly. Conditioned on this pose, a refinement stage injects pixel-aligned image features directly into the generative process, marrying the rich prior of a generative model with the geometric fidelity of reconstruction. This strategy achieves exceptional faithfulness, outperforming state-of-the-art reconstruction methods by over 3 dB in PSNR and improving Chamfer Distance by more than 10%. As a unified generative model that decouples the object from the camera pose, Cupid extends naturally to multi-view and scene-level reconstruction tasks without requiring post-hoc optimization or fine-tuning.
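The abstract does not spell out how the generated 2D-3D correspondences are turned into a camera pose; a standard robust choice for this step is PnP with RANSAC. The sketch below is illustrative only, assuming a pinhole camera with known intrinsics `K` and NumPy arrays of matched points; the function name, thresholds, and solver flag are hypothetical and not taken from the paper.

```python
import numpy as np
import cv2


def estimate_pose_from_correspondences(points_3d, points_2d, K):
    """Recover a camera pose from 2D-3D correspondences via PnP + RANSAC.

    points_3d: (N, 3) coarse canonical-space points produced by stage one.
    points_2d: (N, 2) pixel locations associated with those points.
    K:         (3, 3) camera intrinsics.
    Returns a 4x4 world-to-camera extrinsic matrix, or None on failure.
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float64),
        points_2d.astype(np.float64),
        K.astype(np.float64),
        distCoeffs=None,           # assume undistorted pixel coordinates
        reprojectionError=4.0,     # inlier threshold in pixels (illustrative)
        flags=cv2.SOLVEPNP_EPNP,
    )
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)     # rotation vector -> 3x3 rotation matrix
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = tvec.ravel()
    return T
```

The recovered pose would then condition the refinement stage, which back-projects pixel-aligned image features into the canonical 3D frame; the exact solver and thresholds used by Cupid may differ.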