Occ4cast: LiDAR-based 4D Occupancy Completion and Forecasting  

1New York University 2University of Toronto 3Tsinghua University
*Equal contribution. Corresponding author.

Occupancy Completion and Forecasting (OCF) takes a sequence of sparse LiDAR sweeps as input and outputs a sequence of densified and completed voxel grids for future frames.


Scene completion and forecasting are two popular perception problems in research on mobile agents such as autonomous vehicles. Existing approaches treat the two problems in isolation, yielding two separate, disconnected views of the same scene.

In this paper, we introduce a novel LiDAR perception task of Occupancy Completion and Forecasting (OCF) in the context of autonomous driving to unify these aspects into a cohesive framework. This task requires new algorithms to address three challenges altogether: (1) sparse-to-dense reconstruction, (2) partial-to-complete hallucination, and (3) 3D-to-4D prediction.
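To make the task's input/output concrete, the sketch below uses illustrative tensor shapes (not the dataset's actual resolution) and a trivial static-world baseline; all names and numbers here are our own assumptions, not the paper's models.

```python
import numpy as np

# Illustrative layout for OCF: the input is T_IN binary voxel grids
# from sparse LiDAR sweeps, the target is T_OUT dense, completed
# grids for future frames. Shapes are placeholders.
T_IN, T_OUT = 5, 10
H, W, D = 64, 64, 8  # voxel grid resolution along x, y, z

def copy_last_frame_baseline(inputs: np.ndarray) -> np.ndarray:
    """A static-world baseline: repeat the latest observed frame for
    every future timestep. Real OCF models must instead densify,
    complete, and extrapolate the scene."""
    assert inputs.shape == (T_IN, H, W, D)
    return np.repeat(inputs[-1:], T_OUT, axis=0)

sweeps = (np.random.rand(T_IN, H, W, D) > 0.95).astype(np.uint8)
pred = copy_last_frame_baseline(sweeps)
print(pred.shape)  # (10, 64, 64, 8)
```

Such a copy-forward baseline addresses none of the three challenges, which is exactly why the task calls for new algorithms.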

To enable supervision and evaluation, we curate a large-scale dataset termed OCFBench from public autonomous driving datasets. We analyze the performance of closely related existing baselines and our own models on this dataset. We envision that this research will inspire and call for further investigation in this evolving and crucial area of 4D perception.

All tasks take a single LiDAR sweep or a sequence of sweeps as input. SSC densifies, completes, and predicts semantic labels for the t=0 frame. Point/occupancy forecasting outputs a sparse, Lagrangian specification of the scene geometry's motion field. OCF combines scene completion and occupancy forecasting spatio-temporally, outputting a dense, Eulerian motion field.
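For concreteness, here is a minimal sketch of how one sparse sweep becomes a binary occupancy grid; `voxelize`, the grid bounds, and the resolution are our own illustrative choices, not the benchmark's preprocessing code.

```python
import numpy as np

def voxelize(points, grid_min, voxel_size, grid_shape):
    """Convert one LiDAR sweep (an N x 3 array of points, in meters)
    into a binary occupancy grid. Points outside the volume are dropped."""
    idx = np.floor((points - grid_min) / voxel_size).astype(int)
    valid = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    grid = np.zeros(grid_shape, dtype=np.uint8)
    grid[tuple(idx[valid].T)] = 1  # mark occupied cells
    return grid

# One point inside the 4x4x4 volume, one outside.
pts = np.array([[0.1, 0.2, 0.3], [9.0, 9.0, 9.0]])
grid = voxelize(pts, grid_min=np.zeros(3), voxel_size=0.5, grid_shape=(4, 4, 4))
print(grid.sum())  # 1
```

Stacking such grids over time yields the sparse 4D input that OCF must densify, complete, and extrapolate.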

Experimental Results

Performance w.r.t. temporal range. Per-frame IoU is shown for each method when forecasting 10 future frames from 5 or 10 input frames.
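Per-frame IoU can be computed as intersection over union of the binary occupancy grids at each forecast step. The sketch below assumes binary `(T, H, W, D)` arrays and is not the benchmark's official evaluation code.

```python
import numpy as np

def per_frame_iou(pred, gt):
    """pred, gt: (T, H, W, D) binary occupancy grids.
    Returns one IoU value per forecast frame."""
    inter = np.logical_and(pred, gt).sum(axis=(1, 2, 3))
    union = np.logical_or(pred, gt).sum(axis=(1, 2, 3))
    # Guard against empty frames (union == 0) to avoid division by zero.
    return inter / np.maximum(union, 1)

# Tiny example: frame 0 is a perfect match, frame 1 is fully disjoint.
gt = np.zeros((2, 1, 1, 2), dtype=bool)
gt[:, 0, 0, 0] = True
pred = gt.copy()
pred[1] = ~gt[1]
print(per_frame_iou(pred, gt))  # [1. 0.]
```

Plotting this quantity against the forecast step is what produces the temporal-range curves above.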

Qualitative Results. True positives and false negatives are shown in green and red, respectively.


@article{liu2023lidar,
        title={LiDAR-based 4D Occupancy Completion and Forecasting},
        author={Xinhao Liu and Moonjun Gong and Qi Fang and Haoyu Xie and Yiming Li and Hang Zhao and Chen Feng},
        journal={arXiv preprint arXiv:2310.11239},
        year={2023}
}


This work was supported by NSF grants 2238968 and 2236097.