Reconstructing 3D objects is an important computer vision task with wide application in AR/VR. Deep learning algorithms developed for this task usually rely on unrealistic synthetic datasets, such as ShapeNet and Things3D. On the other hand, existing real-captured object-centric datasets usually do not have enough annotation to enable supervised training or reliable evaluation. In this technical report, we present HM3D-ABO, a photo-realistic object-centric dataset. It is constructed by composing realistic indoor scenes with realistic objects. For each configuration, we provide multi-view RGB observations, a watertight mesh model of the object, ground-truth depth maps, and object masks. The proposed dataset could also be useful for tasks such as camera pose estimation and novel-view synthesis. The dataset generation code is released at https://github.com/zhenpeiyang/HM3D-ABO.
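As a rough illustration of how the per-configuration assets listed above (multi-view RGB, watertight mesh, depth maps, object masks) might be consumed, the following Python sketch loads one configuration. The directory layout, file names, and helper function are assumptions for illustration only; the actual structure is defined by the released generation code at https://github.com/zhenpeiyang/HM3D-ABO.

    # Hypothetical loader for one HM3D-ABO configuration.
    # File layout below is an assumption, not the official format.
    from pathlib import Path

    import numpy as np
    import trimesh
    from PIL import Image


    def load_configuration(config_dir: str) -> dict:
        """Load the assets described in the report for a single configuration:
        multi-view RGB images, ground-truth depth maps, object masks, and the
        watertight object mesh."""
        root = Path(config_dir)

        # Multi-view RGB observations (file pattern is assumed).
        rgb = [np.asarray(Image.open(p)) for p in sorted(root.glob("rgb/*.png"))]

        # Ground-truth depth maps and binary object masks, one per view (assumed layout).
        depth = [np.asarray(Image.open(p), dtype=np.float32)
                 for p in sorted(root.glob("depth/*.png"))]
        mask = [np.asarray(Image.open(p)) > 0
                for p in sorted(root.glob("mask/*.png"))]

        # Watertight mesh model of the object (assumed file name).
        mesh = trimesh.load(root / "object.obj", force="mesh")

        return {"rgb": rgb, "depth": depth, "mask": mask, "mesh": mesh}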