DeepMapping

[CVPR2019 Oral] Self-supervised Point Cloud Map Estimation


DeepMapping: Unsupervised Map Estimation From Multiple Point Clouds

Li Ding (University of Rochester), Chen Feng (NYU Tandon School of Engineering)

IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2019, Oral Presentation


Teaser

[Animations: 2D mapping process, examples 1–3]

[Animations: 3D mapping process, examples 1–3]

Abstract

We propose DeepMapping, a novel registration framework using deep neural networks (DNNs) as auxiliary functions to align multiple point clouds from scratch to a globally consistent frame. We use DNNs to model the highly non-convex mapping process that traditionally involves hand-crafted data association, sensor pose initialization, and global refinement. Our key novelty is that properly defining unsupervised losses to “train” these DNNs through back-propagation is equivalent to solving the underlying registration problem, yet reduces the dependence on the good initialization that methods such as ICP require. Our framework contains two DNNs: a localization network that estimates the poses for input point clouds, and a map network that models the scene structure by estimating the occupancy status of global coordinates. This allows us to convert the registration problem into a binary occupancy classification, which can be solved efficiently using gradient-based optimization. We further show that DeepMapping can be readily extended to address the problem of Lidar SLAM by imposing geometric constraints between consecutive point clouds. Experiments are conducted on both simulated and real datasets. Qualitative and quantitative comparisons demonstrate that DeepMapping often enables more robust and accurate global registration of multiple point clouds than existing techniques.
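The two-network setup described above can be sketched in a few lines of PyTorch. This is a minimal 2D illustration under simplifying assumptions: the networks are small MLPs, the sensor is assumed to sit at each cloud's local origin, and the names (`LocNet`, `MapNet`, `deepmapping_loss`) are illustrative, not the authors' actual implementation. The point is the loss: registered points are labeled occupied, points sampled along the rays before each hit are labeled free, and the resulting binary cross-entropy is minimized by back-propagation through both networks at once.

```python
# Hypothetical sketch of the DeepMapping idea (2D case); not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocNet(nn.Module):
    """Localization network: predicts a 2D pose (tx, ty, theta) per point cloud."""
    def __init__(self, n_points):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_points * 2, 64), nn.ReLU(),
            nn.Linear(64, 3),
        )

    def forward(self, pcd):              # pcd: (B, N, 2) local-frame points
        return self.mlp(pcd.flatten(1))  # (B, 3) pose per cloud

class MapNet(nn.Module):
    """Map network: occupancy logit for any global 2D coordinate."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, xy):               # xy: (B, M, 2) global coordinates
        return self.mlp(xy).squeeze(-1)  # (B, M) occupancy logits

def transform(pcd, pose):
    """Apply each cloud's estimated rigid transform to its local points."""
    t, th = pose[:, None, :2], pose[:, 2]
    c, s = torch.cos(th), torch.sin(th)
    R = torch.stack([torch.stack([c, -s], -1),
                     torch.stack([s,  c], -1)], -2)  # (B, 2, 2) rotations
    return pcd @ R.transpose(-1, -2) + t

def deepmapping_loss(loc_net, map_net, pcds):
    """Unsupervised loss: registered points should be occupied (label 1);
    points sampled along each sensor ray before the hit should be free (label 0)."""
    poses = loc_net(pcds)
    occ = transform(pcds, poses)                              # hit points
    origin = transform(torch.zeros_like(pcds[:, :1]), poses)  # sensor position
    r = torch.rand(occ.shape[:2], device=occ.device).unsqueeze(-1)
    free = origin + r * (occ - origin)                        # ray samples
    logits = map_net(torch.cat([occ, free], dim=1))
    labels = torch.cat([torch.ones(occ.shape[:2]),
                        torch.zeros(occ.shape[:2])], dim=1)
    return F.binary_cross_entropy_with_logits(logits, labels)
```

In this view, one Adam optimizer over the parameters of both networks suffices: every gradient step simultaneously refines the pose estimates and the occupancy map, which is what makes "training" the networks equivalent to solving the registration.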

Paper (arXiv)

To cite our paper:

@article{ding2018deepmapping,
  title={DeepMapping: Unsupervised Map Estimation From Multiple Point Clouds},
  author={Ding, Li and Feng, Chen},
  journal={arXiv preprint arXiv:1811.11397},
  year={2018}
}

Code (GitHub)

The code is copyrighted by the authors. Permission to copy and use
this software for noncommercial use is hereby granted provided: (1)
this notice is retained in all copies, (2) the publication describing
the method (indicated below) is clearly cited, and (3) the
distribution from which the code was obtained is clearly cited. For
all other uses, please contact the authors.

The software code is provided "as is" with ABSOLUTELY NO WARRANTY
expressed or implied. Use at your own risk.

This code provides an implementation of the method described in the
following publication: 

Li Ding and Chen Feng, "DeepMapping: Unsupervised Map Estimation From 
 Multiple Point Clouds," The IEEE Conference on Computer Vision and 
 Pattern Recognition (CVPR), June, 2019.

[Figure: DeepMapping framework overview]

Results

2D Mapping (Simulated Data)

[Figures: 2D mapping results on simulated data, examples 1–2]

3D Mapping (Real Data)

[Figures: 3D mapping results on real data, examples 1–2]

Acknowledgment

This work was partially done while the authors were with MERL, and was supported in part by NYU Tandon School of Engineering and MERL. Chen Feng is the corresponding author. We gratefully acknowledge the helpful comments and suggestions from Yuichi Taguchi, Dong Tian, Weiyang Liu, and Alan Sullivan.