【泡泡汇总】CVPR 2019 SLAM Paper List

June 12, 2019 · 泡泡机器人SLAM

CVPR 2019 takes place in the United States this June. We have compiled and categorized the SLAM-related papers from the conference.

The papers fall into the following categories:

    1. Matching

    2. Matching - Deep Learning

    3. 3D Reconstruction

    4. 3D Reconstruction - Deep Learning

    5. Localization

    6. Localization - Deep Learning

    7. Tracking

    8. Tracking - Deep Learning

    9. Depth Estimation

    10. Depth Estimation - Deep Learning

    11. Calibration - Deep Learning

    12. Object Detection

    13. Object Detection - Deep Learning

    14. Autonomous Driving

    15. Others


The papers in each category are listed below:

Matching:

  1. SDRSAC: Semidefinite-Based Randomized Approach for Robust Point Cloud Registration without Correspondences

  2. NM-Net: Mining Reliable Neighbors for Robust Feature Correspondences

  3. The Perfect Match: 3D Point Cloud Matching with Smoothed Densities


Matching - Deep Learning:

  1. GA-Net: Guided Aggregation Net for End-to-end Stereo Matching

  2. Guided Stereo Matching

  3. Multi-Level Context Ultra-Aggregation for Stereo Matching

  4. PointNetLK: Robust & Efficient Point Cloud Registration using PointNet


3D Reconstruction:

  1. Coordinate-Free Carlsson-Weinshall Duality and Relative Multi-View Geometry

  2. PlaneRCNN: 3D Plane Detection and Reconstruction from a Single View

  3. Single-Image Piece-wise Planar 3D Reconstruction via Associative Embedding

  4. GPSfM: Global Projective SFM Using Algebraic Constraints on Multi-View Fundamental Matrices

  5. Privacy Preserving Image-based Localization

  6. Visual Localization by Learning Objects-of-Interest Dense Match Regression

  7. Robust Point Cloud Reconstruction of Large-Scale Outdoor Scenes

  8. SceneCode: Monocular Dense Semantic Reconstruction using Learned Encoded Scene Representations


3D Reconstruction - Deep Learning:

  1. Revealing Scenes by Inverting Structure from Motion Reconstructions

  2. Deep Reinforcement Learning of Volume-guided Progressive View Inpainting for 3D Point Scene Completion from a Single Depth Image

  3. What Do Single-view 3D Reconstruction Networks Learn?

  4. Learning View Priors for Single-view 3D Reconstruction


Localization:

  1. PVNet: Pixel-wise Voting Network for 6DoF Pose Estimation

  2. Hybrid Scene Compression for Visual Localization

  3. The Alignment of the Spheres: Globally-Optimal Spherical Mixture Alignment for Camera Pose Estimation


Localization - Deep Learning:

  1. Normalized Object Coordinate Space for Category-Level 6D Object Pose and Size Estimation

  2. Extreme Relative Pose Estimation for RGB-D Scans via Scene Completion

  3. Understanding the Limitations of CNN-based Absolute Camera Pose Regression

  4. DeepLiDAR: Deep Surface Normal Guided Depth Prediction for Outdoor Scene from Sparse LiDAR Data and Single Color Image

  5. DenseFusion: 6D Object Pose Estimation by Iterative Dense Fusion

  6. Segmentation-driven 6D Object Pose Estimation

  7. PointFlowNet: Learning Representations for Rigid Motion Estimation from Point Clouds

  8. From Coarse to Fine: Robust Hierarchical Localization at Large Scale


Tracking:

  1. VITAMIN-E: VIsual Tracking And MappINg with Extremely Dense Feature Points

  2. Motion estimation of non-holonomic ground vehicles from a single feature correspondence measured over n views


Tracking - Deep Learning:

  1. Unsupervised Event-based Learning of Optical Flow, Depth, and Egomotion

  2. SPLFlowNet: Sparse Permutohedral Lattice FlowNet for Scene Flow Estimation on Large-scale Point Clouds

  3. SIGNet: Semantic Instance Aided Unsupervised 3D Geometry Perception


Depth Estimation:

  1. Recurrent MVSNet for High-resolution Multi-view Stereo Depth Inference

  2. Learning Single-Image Depth from Videos using Quality Assessment Networks

  3. Depth from a polarisation + RGB stereo pair

  4. Monocular Depth Estimation Using Relative Depth Maps

  5. Geometry-Aware Symmetric Domain Adaptation for Monocular Depth Estimation

  6. CAM-Convs: Camera-Aware Multi-Scale Convolutions for Single-View Depth Prediction


Depth Estimation - Deep Learning:

  1. Recurrent Neural Network for (Un-)supervised Learning of Monocular Video Visual Odometry and Depth

  2. Connecting the Dots: Learning Representations for Active Monocular Depth Estimation

  3. Learning Non-Volumetric Depth Fusion using Successive Reprojections

  4. Learning monocular depth estimation infusing traditional stereo knowledge


Calibration - Deep Learning:

  1. Deep Single Image Camera Calibration with Radial Distortion


Object Detection:

  1. PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud


Object Detection - Deep Learning:

  1. Deep Relational Reasoning Network for Monocular 3D Object Detection

  2. ROI-10D: Monocular Lifting of 2D Detection to 6D Pose and Metric Shape


Autonomous Driving:

  1. DrivingStereo: A Large-Scale Dataset for Stereo Matching in Autonomous Driving Scenarios

  2. GS3D: An Efficient 3D Object Detection Framework for Autonomous Driving

  3. ApolloCar3D: A Large 3D Car Instance Understanding Benchmark for Autonomous Driving

  4. Stereo R-CNN based 3D Object Detection for Autonomous Driving

  5. Pseudo-LiDAR from Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving

  6. Rules of the Road: Predicting Driving Behavior with a Convolutional Model of Semantic Interactions


Others:

  1. BAD SLAM: Bundle Adjusted Direct RGB-D SLAM

  2. Modeling Local Geometric Structure of 3D Point Clouds using Geo-CNN

  3. Noise-Aware Unsupervised Deep Lidar-Stereo Fusion

  4. 3D Motion Decomposition for RGBD Future Dynamic Scene Synthesis

  5. RGBD Based Dimensional Decomposition Residual Network for 3D Semantic Scene Completion

  6. D2-Net: A Trainable CNN for Joint Description and Detection of Local Features

  7. LO-Net: Deep Real-time Lidar Odometry

  8. Octree guided CNN with Spherical Kernels for 3D Point Clouds

  9. DeepMapping: Unsupervised Map Estimation From Multiple Point Clouds

  10. FlowNet3D: Learning Scene Flow in 3D Point Clouds


If anything is missing or incorrect, corrections from readers are welcome!

Welcome to the 泡泡论坛 (PaoPao Forum), where experts will answer any questions you have about SLAM.

Whether you have a question to ask or want to jump in and answer others' questions, the forum welcomes you!

Website: www.paopaorobot.org

Forum: http://paopaorobot.org/bbs/


For business cooperation and reprint permission, please contact liufuqiang_robot@hotmail.com
