Fooling LiDAR Perception via Adversarial Trajectory Perturbation

1New York University, 2University of Chinese Academy of Sciences, 3Alibaba Group




Abstract

LiDAR point clouds collected from a moving vehicle are functions of its trajectories, because the sensor motion needs to be compensated to avoid distortions. When autonomous vehicles are sending LiDAR point clouds to deep networks for perception and planning, could the motion compensation consequently become a wide-open backdoor in those networks, due to both the adversarial vulnerability of deep learning and GPS-based vehicle trajectory estimation that is susceptible to wireless spoofing? We demonstrate such possibilities for the first time: instead of directly attacking point cloud coordinates which requires tampering with the raw LiDAR readings, only adversarial spoofing of a self-driving car's trajectory with small perturbations is enough to make safety-critical objects undetectable or detected with incorrect positions. Moreover, polynomial trajectory perturbation is developed to achieve a temporally-smooth and highly-imperceptible attack. Extensive experiments on 3D object detection have shown that such attacks not only lower the performance of the state-of-the-art detectors effectively, but also transfer to other detectors, raising a red flag for the community.





Motion Distortion in LiDAR

LiDAR measurements are obtained as the beams rotate, so the points in a full sweep are captured at different timestamps, introducing motion distortion that degrades vehicle perception. Autonomous systems generally use the LiDAR's location and orientation from the localization system to correct this distortion, and most LiDAR-based datasets [2, 7] apply such motion compensation before release. Hence, the performance of current 3D perception algorithms on distorted point clouds remains unexplored. In this work, we recover the point cloud before motion correction through linear pose interpolation and rigid-body transformation.
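The recovery step can be sketched as follows. This is a minimal 2D illustration, not the paper's implementation: it assumes poses are (x, y, yaw) tuples, each point carries a normalized capture time t in [0, 1] within the sweep, and compensation mapped every point into the sweep-end frame; the function names are hypothetical.

```python
import math

def interp_pose(p0, p1, alpha):
    """Linearly interpolate between two (x, y, yaw) poses.
    Assumes no yaw wrap-around within one sweep."""
    return tuple(a + alpha * (b - a) for a, b in zip(p0, p1))

def apply_pose(pose, p):
    """Map a point from the sensor frame at `pose` into the world frame."""
    x, y, yaw = pose
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * p[0] - s * p[1] + x, s * p[0] + c * p[1] + y)

def apply_pose_inv(pose, p):
    """Map a world-frame point into the sensor frame at `pose`."""
    x, y, yaw = pose
    c, s = math.cos(yaw), math.sin(yaw)
    dx, dy = p[0] - x, p[1] - y
    return (c * dx + s * dy, -s * dx + c * dy)

def undo_motion_compensation(points, pose_start, pose_end):
    """Recover the distorted (pre-compensation) point cloud.

    points: list of (x, y, t) in the compensated sweep-end frame.
    Compensation computed p_end = T_end^-1 (T_t p_raw); we invert it:
    p_raw = T_t^-1 (T_end p_end), with T_t linearly interpolated
    at each point's own timestamp.
    """
    distorted = []
    for px, py, t in points:
        pose_t = interp_pose(pose_start, pose_end, t)
        world = apply_pose(pose_end, (px, py))
        qx, qy = apply_pose_inv(pose_t, world)
        distorted.append((qx, qy, t))
    return distorted
```

With identical start and end poses the transform reduces to the identity, and a point captured at t = 1 is unchanged, which gives a quick sanity check.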



White Box Attack

Our white box model, PointRCNN [31], uses PointNet++ [27] as its backbone and includes two stages: stage-1 for proposal generation based on each foreground point, and stage-2 for proposal refinement in the canonical coordinate frame. Since PointRCNN takes the raw point cloud as input, gradients can flow smoothly through the point cloud and back to the vehicle trajectory. In this work, we individually attack the classification and regression branches in stage-1 and stage-2, yielding four attack targets in total.
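The chain of gradients the white-box attack exploits (detection loss → compensated points → trajectory) can be illustrated with a toy stand-in; PointRCNN itself is not reproduced here. Everything below is hypothetical: the "detector loss" is a dummy quadratic, the "compensation" is a global shift, and the autograd backpropagation of the real pipeline is replaced by central finite differences.

```python
def detector_loss(points):
    """Toy stand-in for a detection loss (e.g. stage-1 classification):
    summed squared distance to a fixed 'object' at (5, 0)."""
    return sum((x - 5.0) ** 2 + y ** 2 for x, y in points)

def points_from_trajectory(raw_points, traj_shift):
    """Toy motion compensation: shift every point by the (spoofed)
    trajectory offset. The real pipeline applies a per-timestamp rigid
    transform instead, but the gradient path is analogous."""
    dx, dy = traj_shift
    return [(x + dx, y + dy) for x, y in raw_points]

def trajectory_gradient(raw_points, traj_shift, eps=1e-5):
    """d(loss)/d(trajectory) via central finite differences, standing in
    for backpropagation through the differentiable pipeline."""
    grad = []
    for i in range(2):
        hi = list(traj_shift); hi[i] += eps
        lo = list(traj_shift); lo[i] -= eps
        f_hi = detector_loss(points_from_trajectory(raw_points, hi))
        f_lo = detector_loss(points_from_trajectory(raw_points, lo))
        grad.append((f_hi - f_lo) / (2 * eps))
    return grad
```

A single gradient-ascent step on the trajectory offset then increases the detector loss, which is the mechanism the attack relies on: perturbing only the trajectory is enough to move the loss.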


Black Box Attack

PointPillar [14] proposes a fast point cloud encoder using a pseudo-image representation. It divides the point cloud into vertical pillars and uses PointNet [26] to extract a feature for each pillar. Due to the non-differentiable preprocessing stage, the gradient cannot reach the point cloud. Hu et al. [11] proposed to augment PointPillar with a visibility map, achieving better precision; we use PointPillar++ to denote this variant. We apply the perturbation learned from the white-box PointRCNN to attack the black-box PointPillar++, in order to examine the transferability of our attack pipeline.
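A minimal sketch of the pillar binning that makes the preprocessing non-differentiable (the function name and cell size are illustrative, not PointPillar's actual implementation):

```python
from collections import defaultdict

def pillarize(points, pillar_size=0.16):
    """Group points into vertical pillars by their (x, y) grid cell.
    The hard integer rounding of coordinates into cell indices is the
    non-differentiable step that blocks gradients from flowing back
    to the point coordinates."""
    pillars = defaultdict(list)
    for x, y, z in points:
        key = (int(x // pillar_size), int(y // pillar_size))
        pillars[key].append((x, y, z))
    return dict(pillars)
```

Because gradients stop at this discretization, attacking such a detector requires the transfer setting described above rather than direct backpropagation.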

a) Original Detections. Green/red boxes denote groundtruth/predictions respectively.

b) Detections after attack. Green/red boxes denote groundtruth/predictions respectively.




Polynomial Trajectory Perturbation

To achieve a temporally-smooth and thus less perceptible attack, we perform a polynomial regression before generating the perturbation, and attack the polynomial coefficients instead of the trajectory itself. In this scenario, we only need to manipulate a few key points to bend the polynomial-parameterized trajectory, which is easy to achieve in practice and enables a real-time, high-quality attack. Several qualitative examples of the polynomial trajectory perturbation are shown below.
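The coefficient-space perturbation can be sketched as follows. This is a simplified one-coordinate illustration under our own naming: the trajectory coordinate is assumed already fitted to polynomial coefficients (e.g. by least squares), and the attack adds a small delta to those coefficients before resampling.

```python
def poly_eval(coeffs, t):
    """Evaluate sum_k coeffs[k] * t**k via Horner's method."""
    v = 0.0
    for c in reversed(coeffs):
        v = v * t + c
    return v

def perturb_trajectory(coeffs, delta, ts):
    """Bend a polynomial-parameterized trajectory coordinate by adding a
    small perturbation `delta` to its coefficients, then resampling at
    timestamps `ts`. Since the perturbed curve is still a polynomial,
    the spoofed trajectory remains temporally smooth by construction."""
    attacked = [c + d for c, d in zip(coeffs, delta)]
    return [poly_eval(attacked, t) for t in ts]
```

Perturbing a single high-order coefficient moves the late part of the sweep while leaving t = 0 untouched, which is why the attack stays smooth and hard to notice.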

Acknowledgement

The research is supported by NSF FW-HTF program under DUE-2026479. The authors gratefully acknowledge the useful comments and suggestions from Yong Xiao, Wenxiao Wang, Chenzhuang Du, Wang Zhao, Ziyuan Huang, Hang Zhao and Siheng Chen, and also thank Yan Wang, Shaoshuai Shi and Peiyun Wu for their helpful open-source code. The website template is by Daniel Lenton.


BibTeX