😎 RAP: Unleashing the Power of Data Synthesis in Visual Localization

New York University

Snapshot

TL;DR: We make camera pose regression more generalizable by closing the data gap with 3DGS-based synthesis and the learning gap with two-branch joint training under an adversarial loss, achieving localization accuracy better than 1 cm/0.3° in indoor scenes, 20 cm/0.5° in outdoor scenes, and 10 cm/0.2° in driving scenarios.


Abstract

Visual localization, which estimates a camera's pose within a known scene, is a long-standing challenge in vision and robotics. Recent end-to-end methods that directly regress camera poses from query images have gained attention for fast inference. However, existing methods often struggle to generalize to unseen views. In this work, we aim to unleash the power of data synthesis to promote the generalizability of pose regression. Specifically, we lift real 2D images into 3D Gaussian Splats with varying appearance and deblurring abilities, which are then used as a data engine to synthesize more posed images. To fully leverage the synthetic data, we build a two-branch joint training pipeline, with an adversarial discriminator to bridge the syn-to-real gap. Experiments on established benchmarks show that our method outperforms state-of-the-art end-to-end approaches, reducing translation and rotation errors by 50% and 21.6% on indoor datasets, and 35.56% and 38.7% on outdoor datasets. We also validate the effectiveness of our method in dynamic driving scenarios under varying weather conditions. Notably, as data synthesis scales up, the ability to interpolate and extrapolate training data for localizing unseen views emerges.
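The two-branch joint training with an adversarial discriminator can be sketched roughly as follows. This is a minimal illustration under assumed architectures and loss weights, not the released RAP implementation; the toy PoseRegressor, Discriminator, and pose_loss below are placeholders.

# Minimal sketch: one branch sees real images, the other 3DGS renderings,
# and a discriminator bridges the syn-to-real feature gap.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoseRegressor(nn.Module):
    # Toy stand-in: backbone -> feature -> 6-DoF pose (3 translation + 4 quaternion).
    def __init__(self, feat_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim))
        self.head = nn.Linear(feat_dim, 7)

    def forward(self, x):
        feat = self.backbone(x)
        return feat, self.head(feat)

class Discriminator(nn.Module):
    # Classifies whether a feature comes from a real or a 3DGS-rendered image.
    def __init__(self, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, feat):
        return self.net(feat)

def pose_loss(pred, gt):
    # L1 on translation plus L1 on rotation; the actual loss and weighting may differ.
    return F.l1_loss(pred[:, :3], gt[:, :3]) + F.l1_loss(pred[:, 3:], gt[:, 3:])

regressor, discriminator = PoseRegressor(), Discriminator()
opt_r = torch.optim.Adam(regressor.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

# One dummy iteration: branch 1 gets real posed images, branch 2 gets 3DGS renderings.
img_real, pose_real = torch.rand(4, 3, 64, 64), torch.rand(4, 7)
img_syn, pose_syn = torch.rand(4, 3, 64, 64), torch.rand(4, 7)

feat_real, pred_real = regressor(img_real)
feat_syn, pred_syn = regressor(img_syn)

# Supervised pose regression on both branches, plus an adversarial term that
# pushes synthetic features to look "real" to the discriminator.
loss_adv = F.binary_cross_entropy_with_logits(
    discriminator(feat_syn), torch.ones(feat_syn.size(0), 1))
loss_r = pose_loss(pred_real, pose_real) + pose_loss(pred_syn, pose_syn) + 0.1 * loss_adv
opt_r.zero_grad(); loss_r.backward(); opt_r.step()

# Discriminator update: real features -> 1, synthetic features -> 0.
d_real = discriminator(feat_real.detach())
d_syn = discriminator(feat_syn.detach())
loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
          F.binary_cross_entropy_with_logits(d_syn, torch.zeros_like(d_syn)))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()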

Overview Image



Appearance-Varying 3DGS

Overall illustration of our appearance-varying 3DGS. The framework models varying appearances with 3D Gaussians augmented with appearance colors: it initializes the Gaussians from SfM data, refines their appearance using learnable sampling and blending weights computed by an encoder and an MLP, and renders images with a differentiable rasterizer plus edge refinement to minimize the rendering loss.
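A rough sketch of this appearance-blending idea is shown below, under the assumption that each Gaussian stores a small bank of candidate colors that are mixed with image-conditioned weights from an encoder and MLP; all module names, shapes, and the commented rasterizer step are illustrative, not the paper's exact design.

# Each Gaussian keeps a bank of candidate colors; an encoder + MLP predict
# per-image blending weights over that bank to model appearance changes.
import torch
import torch.nn as nn

N_GAUSSIANS, N_APPEARANCE = 10_000, 8   # number of Gaussians / candidate colors (assumed)

class AppearanceBlender(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        # Per-Gaussian candidate (appearance) colors, learned jointly.
        self.candidate_colors = nn.Parameter(torch.rand(N_GAUSSIANS, N_APPEARANCE, 3))
        # Lightweight image encoder + MLP producing blending weights.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim))
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                 nn.Linear(64, N_APPEARANCE))

    def forward(self, reference_image):
        # Predict one weight vector per image and mix the candidate colors.
        feat = self.encoder(reference_image)                # (B, feat_dim)
        weights = torch.softmax(self.mlp(feat), dim=-1)     # (B, N_APPEARANCE)
        # (B, N_GAUSSIANS, 3): appearance-specific color for every Gaussian.
        return torch.einsum('ba,nac->bnc', weights, self.candidate_colors)

blender = AppearanceBlender()
colors = blender(torch.rand(1, 3, 64, 64))   # colors conditioned on one reference image
print(colors.shape)                          # torch.Size([1, 10000, 3])
# These per-Gaussian colors would then be passed to a differentiable 3DGS
# rasterizer and optimized against the rendering loss (with edge refinement).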

Post Refinement

At test time, RAP's initial pose prediction is used to render an RGB-D image via 3DGS. Together with MASt3R, we obtain 2D-3D correspondences between the query image and the rendering and perform RANSAC-PnP, yielding a refined pose.
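A sketch of this refinement step follows. The 3DGS rendering and the MASt3R-based matching are abstracted into the placeholder functions render_rgbd and match_query_to_render (hypothetical names, not real APIs); only the cv2.solvePnPRansac and cv2.Rodrigues calls are real OpenCV functions.

# Test-time post refinement: render RGB-D at the regressed pose, match it
# against the query image for 2D-3D correspondences, then run RANSAC-PnP.
import numpy as np
import cv2

def refine_pose(query_img, initial_pose, intrinsics_K):
    # 1) Render RGB-D from the 3DGS model at the regressed pose (placeholder).
    rgb, depth = render_rgbd(initial_pose)

    # 2) Query-image pixels matched to 3D points unprojected from the rendered
    #    depth at the initial pose (placeholder for MASt3R-based matching).
    pts2d_query, pts3d_world = match_query_to_render(query_img, rgb, depth,
                                                     initial_pose, intrinsics_K)

    # 3) Robustly re-estimate the camera pose from the 2D-3D correspondences.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(pts3d_world, dtype=np.float64),
        np.asarray(pts2d_query, dtype=np.float64),
        intrinsics_K, distCoeffs=None,
        reprojectionError=3.0, iterationsCount=1000)
    if not ok:
        return initial_pose                  # fall back to the regressed pose

    R, _ = cv2.Rodrigues(rvec)               # world-to-camera rotation
    T_w2c = np.eye(4)
    T_w2c[:3, :3], T_w2c[:3, 3] = R, tvec.ravel()
    return np.linalg.inv(T_w2c)              # refined camera-to-world pose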

More Visualizations of Appearance-Varying 3DGS


RAP Results on MARS



BibTeX

@article{Li2024unleashing,
 title={Unleashing the Power of Data Synthesis in Visual Localization},
 author={Sihang Li and Siqi Tan and Bowen Chang and Jing Zhang and Chen Feng and Yiming Li},
 year={2024},
 journal={arXiv preprint arXiv:2412.00138},
}

Acknowledgements

Yiming Li and Chen Feng are the corresponding authors. This work was supported in part by NSF grants 2238968, 2121391, and 2024882, and by NYU IT High Performance Computing resources, services, and staff expertise. Yiming Li is supported by an NVIDIA Graduate Fellowship.