【Top Conferences in 100 Seconds】【Disaster Butler】A Large-scale Virtual Dataset for Simulating Disaster Scenarios
Title: DISC: A Large-scale Virtual Dataset for Simulating Disaster Scenarios
Authors: Hae-Gon Jeon, Sunghoon Im, Byeong-Uk Lee, Dong-Geol Choi, Martial Hebert and In So Kweon
Venue: The 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019), a top conference in robotics
Compiled by: 曾莹莹, 李灏城, 李壮, 刘博艺
01
Abstract
In this paper, we present the first large-scale synthetic dataset for visual perception in disaster scenarios, and analyze state-of-the-art methods for multiple computer vision tasks against reference baselines. We simulated before- and after-disaster scenes, such as fires and building collapses, at fifteen different locations in realistic virtual worlds. The dataset consists of more than 300K high-resolution stereo image pairs, all annotated with ground-truth data for semantic segmentation, depth, optical flow, surface normal estimation, and camera pose estimation. To create realistic disaster scenes, we manually augmented the effects with 3D models using physics-based graphics tools. We use our dataset to train state-of-the-art methods and evaluate how well they can recognize disaster situations and produce reliable results on virtual scenes as well as real-world images. The results obtained from each task are then fed into the proposed visual odometry network to generate 3D maps of buildings on fire. Finally, we discuss challenges for future research.
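To make the list of annotation types concrete, the sketch below models how one frame of such a stereo dataset might be addressed on disk. The directory layout, file names, and extensions here are purely illustrative assumptions for this post, not the actual DISC release format.

```python
from dataclasses import dataclass
from pathlib import PurePosixPath

@dataclass(frozen=True)
class StereoSample:
    """One frame of a DISC-style dataset (hypothetical layout)."""
    left: str      # left RGB image of the stereo pair
    right: str     # right RGB image of the stereo pair
    depth: str     # ground-truth depth map
    semantic: str  # semantic segmentation label map
    flow: str      # optical flow to the next frame
    normal: str    # surface normal map

def sample_paths(root: str, scene: str, frame: int) -> StereoSample:
    """Build the image and annotation paths for one frame (assumed layout)."""
    base = PurePosixPath(root) / scene
    fid = f"{frame:06d}"  # assumed zero-padded frame id
    return StereoSample(
        left=str(base / "left" / f"{fid}.png"),
        right=str(base / "right" / f"{fid}.png"),
        depth=str(base / "depth" / f"{fid}.exr"),
        semantic=str(base / "semantic" / f"{fid}.png"),
        flow=str(base / "flow" / f"{fid}.flo"),
        normal=str(base / "normal" / f"{fid}.exr"),
    )
```

Grouping all six modalities per frame like this keeps the stereo pair and its ground truth in lockstep when iterating over a scene.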
02
Core Content
03
Main experiments:
04
Video
https://www.zhihu.com/video/1216780061451538432
Abstract
In this paper, we present the first large-scale synthetic dataset for visual perception in disaster scenarios, and analyze state-of-the-art methods for multiple computer vision tasks with reference baselines. We simulated before and after disaster scenarios such as fire and building collapse for fifteen different locations in realistic virtual worlds. The dataset consists of more than 300K high-resolution stereo image pairs, all annotated with ground-truth data for semantic segmentation, depth, optical flow, surface normal estimation and camera pose estimation. To create realistic disaster scenes, we manually augmented the effects with 3D models using physics-based graphics tools. We use our dataset to train state-of-the-art methods and evaluate how well these methods can recognize the disaster situations and produce reliable results on virtual scenes as well as real-world images. The results obtained from each task are then used as inputs to the proposed visual odometry network for generating 3D maps of buildings on fire. Finally, we discuss challenges for future research.
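As a worked illustration of one of the annotation types above, a surface normal map can be approximated from a depth map by finite differences. This is only a rough orthographic sketch; the dataset's ground-truth normals come from the renderer, not from a computation like this.

```python
import numpy as np

def normals_from_depth(depth: np.ndarray) -> np.ndarray:
    """Estimate per-pixel unit surface normals from a 2D depth map
    using finite differences (orthographic approximation)."""
    # np.gradient returns slopes along rows (v) and columns (u).
    dz_dv, dz_du = np.gradient(depth)
    # Normal of the surface z = f(u, v) is proportional to (-dz/du, -dz/dv, 1).
    n = np.stack([-dz_du, -dz_dv, np.ones_like(depth)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
```

For a perfectly flat, fronto-parallel depth map this yields the normal (0, 0, 1) at every pixel, while a plane tilted along the horizontal axis tilts the normals correspondingly.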