😎 RAP

Unleashing the Power of Data Synthesis
in Visual Localization

New York University
* Equal Contribution

We have further improved our results and will update the paper accordingly. Our code and data will be released soon.

Stay tuned!

We make camera localization more generalizable by addressing the data gap via 3DGS and the learning gap via two-branch joint learning with an adversarial loss, achieving localization accuracy better than 1cm/0.3° in indoor scenarios, 20cm/0.5° in outdoor scenarios, and 10cm/0.2° in driving scenarios.

Abstract

Visual localization, which estimates a camera's pose within a known scene, is a fundamental capability for autonomous systems. While absolute pose regression (APR) methods have shown promise for efficient inference, they often struggle with generalization. Recent approaches attempt to address this through data augmentation with varied viewpoints, yet they overlook a critical factor: appearance diversity. In this work, we identify appearance variation as the key to robust localization. Specifically, we first lift real 2D images into 3D Gaussian Splats with varying appearance and deblurring ability, enabling the synthesis of diverse training data that varies not just in poses but also in environmental conditions such as lighting and weather. To fully unleash the potential of the appearance-diverse data, we build a two-branch joint training pipeline with an adversarial discriminator to bridge the syn-to-real gap. Extensive experiments demonstrate that our approach significantly outperforms state-of-the-art methods, reducing translation and rotation errors by 50% and 22% on indoor datasets, and 37% and 42% on outdoor datasets. Most notably, our method shows remarkable robustness in dynamic driving scenarios under varying weather conditions and in day-to-night scenarios, where previous APR methods fail.
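To make the two-branch idea concrete, below is a minimal PyTorch-style sketch of joint training on real and 3DGS-rendered images with an adversarial feature discriminator. All module names (PoseRegressor, Discriminator, regressor_loss), architectures, and loss weights are illustrative assumptions, not the released implementation.

# Hypothetical sketch of the two-branch joint training with an adversarial
# discriminator; names, shapes, and losses are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoseRegressor(nn.Module):
    """Shared backbone plus a head regressing translation (3) + quaternion (4)."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(feat_dim, 7)

    def forward(self, img):
        feat = self.backbone(img)
        return feat, self.head(feat)

class Discriminator(nn.Module):
    """Scores whether a feature came from a real photo or a 3DGS rendering."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, feat):
        return self.net(feat)

def regressor_loss(model, disc, real_img, real_pose, syn_img, syn_pose, lam=0.1):
    """Pose loss on both branches plus an adversarial term that pushes
    synthetic-branch features toward the real-feature distribution.
    (The discriminator itself is updated in a separate step with opposite labels.)"""
    _, pred_r = model(real_img)
    feat_s, pred_s = model(syn_img)
    pose_loss = F.l1_loss(pred_r, real_pose) + F.l1_loss(pred_s, syn_pose)
    logits_s = disc(feat_s)
    adv_loss = F.binary_cross_entropy_with_logits(logits_s, torch.ones_like(logits_s))
    return pose_loss + lam * adv_loss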

Overview Image



3DGS

Overall illustration of the appearance-varying 3DGS. The framework models varying appearances using 3D Gaussians augmented with appearance colors. It initializes the 3D Gaussians from SfM data, refines their appearance through learnable sampling and blending weights computed by an encoder and an MLP, and renders images with a differentiable rasterizer with edge refinement to minimize the rendering loss.
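The sketch below illustrates how per-Gaussian appearance colors could be blended with weights predicted from an encoder and an MLP, as described in the caption. Class and parameter names (AppearanceBlender, appearance_colors, K) are assumptions, and the differentiable rasterizer is treated as a black box.

# Hypothetical sketch of the appearance-color blending step; shapes and
# hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class AppearanceBlender(nn.Module):
    """Mixes K candidate appearance colors per Gaussian into one rendered color,
    with blending weights predicted from a global appearance embedding."""
    def __init__(self, num_gaussians, K=8, embed_dim=64):
        super().__init__()
        # K learnable candidate colors per Gaussian, optimized with the scene.
        self.appearance_colors = nn.Parameter(torch.rand(num_gaussians, K, 3))
        # CNN encoder producing a per-image appearance embedding.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, embed_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # MLP mapping the embedding to K blending weights shared across Gaussians.
        self.mlp = nn.Sequential(nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, K))

    def forward(self, reference_img):
        embed = self.encoder(reference_img)            # (1, embed_dim)
        weights = torch.softmax(self.mlp(embed), -1)   # (1, K)
        # Blend candidate colors: (N, K, 3) weighted over K -> (N, 3)
        colors = (self.appearance_colors * weights.view(1, -1, 1)).sum(dim=1)
        # The blended colors would then be passed, together with the other
        # Gaussian parameters, to the differentiable rasterizer.
        return colors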

Post Refinement

At test time, RAP's initial predicted pose is used to render an RGB-D image via 3DGS. Together with MASt3R, we obtain 2D-3D correspondences and perform RANSAC-PnP, resulting in a refined pose.
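A minimal sketch of this refinement step is given below, assuming the rendered depth, the camera intrinsics, a world-to-camera convention for the initial pose, and 2D-2D matches from an external matcher (e.g. MASt3R); the function name refine_pose and the RANSAC thresholds are illustrative assumptions.

# Hedged sketch of test-time pose refinement via rendered depth + RANSAC-PnP.
import cv2
import numpy as np

def refine_pose(matches_query_2d, matches_render_2d, render_depth, K, T_render_w2c):
    """Lift matched pixels in the rendered view to 3D using the rendered depth,
    then solve PnP with RANSAC against the query-image keypoints."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u, v = matches_render_2d[:, 0], matches_render_2d[:, 1]
    z = render_depth[v.astype(int), u.astype(int)]

    # Back-project to the camera frame of the rendered (initial) pose ...
    pts_cam = np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=1)
    # ... then to world coordinates (assuming T_render_w2c maps world -> camera).
    R, t = T_render_w2c[:3, :3], T_render_w2c[:3, 3]
    pts_world = (pts_cam - t) @ R  # equivalent to R^T (X_cam - t)

    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts_world.astype(np.float64),
        matches_query_2d.astype(np.float64),
        K.astype(np.float64), None,
        reprojectionError=3.0, iterationsCount=1000)
    if not ok:
        return T_render_w2c  # fall back to the initial pose
    R_ref, _ = cv2.Rodrigues(rvec)
    T_ref = np.eye(4)
    T_ref[:3, :3], T_ref[:3, 3] = R_ref, tvec.ravel()
    return T_ref  # refined world-to-camera pose of the query image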

More Visualizations of Appearance-Varying 3DGS


RAP Results on MARS



BibTeX

@article{Li2024unleashing,
 title={Unleashing the Power of Data Synthesis in Visual Localization},
 author={Sihang Li and Siqi Tan and Bowen Chang and Jing Zhang and Chen Feng and Yiming Li},
 year={2024},
 journal={arXiv preprint arXiv:2412.00138},
}

Acknowledgements

Yiming Li and Chen Feng are the corresponding authors. This work was supported in part through NSF grants 2238968, 2121391, and 2024882, and the NYU IT High Performance Computing resources, services, and staff expertise. Yiming Li is supported by an NVIDIA Graduate Fellowship.