To reconstruct a 3D scene from a set of calibrated views, traditional multi-view stereo techniques rely on two distinct stages: local depth-map computation and global depth-map fusion. Recent studies instead focus on deep neural architectures that either estimate depth maps and fuse them with a conventional method, or regress a Truncated Signed Distance Function (TSDF) with a direct 3D reconstruction network. In this paper, we argue that replicating the traditional two-stage framework with deep neural networks improves both the interpretability and the accuracy of the results. Accordingly, our network operates in two steps: 1) the local computation of depth maps with a deep MVS technique, and 2) the fusion of these depth maps and image features into a single TSDF volume. To improve matching between images acquired from very different viewpoints (e.g., under large baselines and rotations), we introduce a rotation-invariant 3D convolution kernel called PosedConv. The effectiveness of the proposed architecture is demonstrated through an extensive series of experiments on the ScanNet dataset, where our approach compares favorably against both traditional and deep-learning techniques.
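For context, the conventional depth-map fusion that the learned second stage replaces can be sketched as classic weighted TSDF integration in the style of Curless and Levoy. This is a generic illustration only, not the paper's fusion network; the volume layout, parameter names, and the assumption of an axis-aligned volume anchored at the origin are all choices made for this sketch.

```python
import numpy as np

def integrate_depth_into_tsdf(tsdf, weights, depth, K, cam_pose, voxel_size, trunc):
    """Fuse one depth map into a TSDF volume via running weighted averaging.

    tsdf, weights : (D, H, W) volumes (tsdf initialized to 1, weights to 0)
    depth         : (h, w) depth map in metres
    K             : 3x3 camera intrinsics
    cam_pose      : 4x4 camera-to-world pose
    voxel_size    : edge length of one voxel in metres
    trunc         : truncation distance of the signed distance function
    """
    D, H, W = tsdf.shape
    # World coordinates of every voxel centre (volume axis-aligned at the origin).
    zs, ys, xs = np.meshgrid(np.arange(D), np.arange(H), np.arange(W), indexing="ij")
    pts = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3) * voxel_size
    # Transform voxel centres into the camera frame, then project with K.
    R, t = cam_pose[:3, :3], cam_pose[:3, 3]
    cam = (pts - t) @ R                      # world -> camera
    z = cam[:, 2]
    z_safe = np.where(z > 0, z, 1.0)         # avoid division by zero behind the camera
    u = np.round(cam[:, 0] / z_safe * K[0, 0] + K[0, 2]).astype(int)
    v = np.round(cam[:, 1] / z_safe * K[1, 1] + K[1, 2]).astype(int)
    valid = (z > 0) & (u >= 0) & (u < depth.shape[1]) & (v >= 0) & (v < depth.shape[0])
    # Signed distance along the viewing ray, truncated to [-1, 1] after scaling.
    sdf = np.zeros(len(pts))
    sdf[valid] = depth[v[valid], u[valid]] - z[valid]
    observed = valid & (sdf > -trunc)        # skip voxels far behind the surface
    tsdf_new = np.clip(sdf / trunc, -1.0, 1.0)
    # Running weighted average over views (reshape returns views, so this updates in place).
    flat_t, flat_w = tsdf.reshape(-1), weights.reshape(-1)
    flat_t[observed] = (flat_t[observed] * flat_w[observed] + tsdf_new[observed]) \
                       / (flat_w[observed] + 1.0)
    flat_w[observed] += 1.0
    return tsdf, weights
```

Calling this once per calibrated view accumulates all depth maps into a single TSDF volume, from which a mesh can be extracted with marching cubes; the paper's contribution is to learn this fusion step (together with image features) rather than apply the fixed averaging rule above.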