In computer vision, 3D reconstruction is the process of recovering three-dimensional information from single-view or multi-view images. Because a single view carries incomplete information, single-view reconstruction must rely on prior knowledge. Multi-view reconstruction (analogous to human binocular vision) is comparatively easier: the cameras are first calibrated, i.e., the relationship between each camera's image coordinate system and the world coordinate system is computed, and the 3D information is then reconstructed from the information contained in multiple 2D images.

3D reconstruction of objects is a common scientific problem and core technology in computer-aided geometric design (CAGD), computer graphics (CG), computer animation, computer vision, medical image processing, scientific computing, virtual reality, and digital media creation. There are two main ways to produce a 3D representation of an object inside a computer. One is to build the object's 3D geometric model interactively, under human control, with geometric modeling software; the other is to acquire the geometry of a real object by some measurement means. The former is a mature technology supported by a number of software packages, such as 3DMAX, Maya, AutoCAD, and UG, which generally represent geometry with mathematically defined curves and surfaces. The latter is usually called the 3D reconstruction process: the mathematical procedures and computing techniques that recover an object's 3D information (shape, etc.) from 2D projections, including data acquisition, preprocessing, point-cloud registration, and feature analysis.
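As a minimal illustration of the multi-view principle described above (this code is not from the original text), the sketch below triangulates a single 3D point from two calibrated views with linear (DLT) triangulation; the projection matrices and image points are made-up toy values.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.

    P1, P2 : 3x4 camera projection matrices (intrinsics times [R | t]).
    x1, x2 : 2D pixel coordinates of the same point in each image.
    Returns the 3D point in world coordinates.
    """
    # Each view contributes two linear constraints A @ X = 0 on the
    # homogeneous 3D point X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Toy example with assumed projection matrices (illustrative values only):
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                # camera 1 at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # camera 2 shifted along x
X_true = np.array([0.2, -0.1, 5.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate_point(P1, P2, x1, x2))  # approximately [0.2, -0.1, 5.0]
```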


Image-based 3D reconstruction aims to accurately recover the geometry of a real scene from a set of 2D multi-view images. It is a fundamental and active research area in computer vision and photogrammetry, with important theoretical and practical value and broad applications in smart cities, virtual tourism, digital heritage preservation, digital mapping, and navigation. In recent years, with the spread of image acquisition systems (smartphones, consumer digital cameras, civilian drones) and the rapid growth of the Internet, users can easily retrieve large collections of Internet images of an outdoor scene through search engines (e.g., Google). How to use these images for efficient, robust, and accurate 3D reconstruction, and thereby provide users with realistic and immersive experiences, has become a research hotspot that has drawn wide attention from academia and industry, and a variety of solutions have emerged. In particular, the advent of deep learning offers new opportunities for research on large-scale outdoor image-based 3D reconstruction. This paper first describes the basic serial pipeline of large-scale outdoor image-based 3D reconstruction: image retrieval, image feature matching, structure from motion, and multi-view stereo. It then distinguishes traditional methods from deep-learning-based methods and systematically and comprehensively reviews the development and application of large-scale outdoor image-based 3D reconstruction techniques in each reconstruction sub-process. After that, it summarizes in detail the datasets and evaluation metrics suited to large-scale outdoor scenes for each sub-process. Finally, it introduces the mainstream open-source and commercial 3D reconstruction systems and the current state of the related domestic industry.

http://www.cjig.cn/jig/ch/reader/view_abstract.aspx?flag=2&file_no=202012270000001&journal_id=jig
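The serial pipeline named in the abstract (image retrieval, feature matching, structure from motion, multi-view stereo) comes with no code in the original; the following is a rough sketch of the classic two-view front end of such a pipeline, assuming OpenCV with SIFT support and a known intrinsic matrix K. Function and variable names are illustrative, not taken from the paper.

```python
import cv2
import numpy as np

def two_view_front_end(img_path1, img_path2, K):
    """Sketch of an SfM front end for one image pair: detect features,
    match them, and estimate the relative camera pose.

    K : 3x3 intrinsic matrix (assumed known, e.g. from calibration or EXIF).
    """
    img1 = cv2.imread(img_path1, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img_path2, cv2.IMREAD_GRAYSCALE)

    # 1) Feature detection and description (SIFT, as in classic pipelines).
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # 2) Matching with Lowe's ratio test to discard ambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in raw if m.distance < 0.75 * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # 3) Relative pose from the essential matrix; RANSAC rejects outlier matches.
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t  # pose of camera 2 relative to camera 1 (t only up to scale)
```

A full system would repeat this over many pairs, chain the relative poses into a global reconstruction with bundle adjustment, and then densify the sparse points with multi-view stereo.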


Latest Content

X-ray diffraction based microscopy techniques such as High Energy Diffraction Microscopy rely on knowledge of the position of diffraction peaks with high precision. These positions are typically computed by fitting the observed intensities in area detector data to a theoretical peak shape such as pseudo-Voigt. As experiments become more complex and detector technologies evolve, the computational cost of such peak detection and shape fitting becomes the biggest hurdle to the rapid analysis required for real-time feedback during in-situ experiments. To this end, we propose BraggNN, a deep learning-based method that can determine peak positions much more rapidly than conventional pseudo-Voigt peak fitting. When applied to a test dataset, BraggNN gives errors of less than 0.29 and 0.57 pixels, relative to the conventional method, for 75% and 95% of the peaks, respectively. When applied to a real experimental dataset, a 3D reconstruction that used peak positions computed by BraggNN yields 15% better results on average as compared to a reconstruction obtained using peak positions determined using conventional 2D pseudo-Voigt fitting. Recent advances in deep learning method implementations and special-purpose model inference accelerators allow BraggNN to deliver enormous performance improvements relative to the conventional method, running, for example, more than 200 times faster than a conventional method on a consumer-class GPU card with out-of-the-box software.
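The abstract contrasts BraggNN with conventional pseudo-Voigt fitting but gives no implementation; the sketch below shows what such a conventional fit for one detector patch might look like, assuming an isotropic 2D pseudo-Voigt profile and SciPy's curve_fit. The profile parameterization, function names, and initial-guess heuristics are assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_voigt_2d(coords, x0, y0, amp, fwhm, eta, bg):
    """Isotropic 2D pseudo-Voigt: a linear mix of a Gaussian and a Lorentzian
    sharing the same centre (x0, y0) and FWHM; eta is the Lorentzian fraction."""
    x, y = coords
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    gauss = np.exp(-4.0 * np.log(2.0) * r2 / fwhm ** 2)
    lorentz = 1.0 / (1.0 + 4.0 * r2 / fwhm ** 2)
    return bg + amp * (eta * lorentz + (1.0 - eta) * gauss)

def fit_peak_position(patch):
    """Fit one detector patch (2D array of intensities) and return the
    refined sub-pixel peak centre (x, y)."""
    ny, nx = patch.shape
    y, x = np.mgrid[0:ny, 0:nx]
    coords = (x.ravel(), y.ravel())
    # Initial guess: brightest pixel as centre, crude width and background.
    iy, ix = np.unravel_index(np.argmax(patch), patch.shape)
    p0 = [ix, iy, patch.max() - patch.min(), 3.0, 0.5, patch.min()]
    popt, _ = curve_fit(pseudo_voigt_2d, coords, patch.ravel(), p0=p0, maxfev=5000)
    return popt[0], popt[1]  # sub-pixel (x, y) peak position
```

This nonlinear fit is run once per peak, which is what makes conventional peak finding costly at scale and motivates replacing it with a single forward pass of a trained network such as BraggNN.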
