A. Lopez-Rodriguez, B. Busam, K. Mikolajczyk
Project to Adapt: Domain Adaptation for Depth Completion from Noisy and Sparse Sensor Data. Asian Conference on Computer Vision (ACCV), Kyoto, Japan, November 2020 [oral]. (bib)
Depth completion aims to predict a dense depth map from a sparse depth input. The acquisition of dense ground-truth annotations for depth completion settings can be difficult and, at the same time, a significant domain gap between real LiDAR measurements and synthetic data has prevented successful training of models in virtual settings. We propose a domain adaptation approach for sparse-to-dense depth completion that is trained from synthetic data, without annotations in the real domain or additional sensors. Our approach simulates the real sensor noise in an RGB + LiDAR setup, and consists of three modules: simulating the real LiDAR input in the synthetic domain via projections, filtering the real noisy LiDAR for supervision, and adapting the synthetic RGB image using a CycleGAN approach. We extensively evaluate these modules against the state of the art on the KITTI depth completion benchmark, showing significant improvements.
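To make the projection idea in the abstract concrete, below is a minimal, illustrative sketch (not the paper's code) of the basic operation of projecting 3D LiDAR points into a sparse depth map on the image plane, which is the kind of step used when simulating a sparse depth input from synthetic geometry. The intrinsics, image size, and point cloud here are made-up assumptions for demonstration.

```python
# Minimal sketch (assumed, not the authors' implementation): project 3D points
# given in camera coordinates into a sparse depth image using pinhole intrinsics.
import numpy as np

def project_to_sparse_depth(points_cam, K, height, width):
    """points_cam: (N, 3) points in camera coordinates (x right, y down, z forward)."""
    # Keep only points in front of the camera.
    points = points_cam[points_cam[:, 2] > 0]
    # Perspective projection with pinhole intrinsics K (3x3).
    uvw = (K @ points.T).T
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    z = points[:, 2]
    # Discard projections that fall outside the image.
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z = u[valid], v[valid], z[valid]
    # Resolve collisions: when several points hit one pixel, keep the nearest.
    depth = np.zeros((height, width))          # 0 marks "no measurement"
    order = np.argsort(-z)                     # far points first, near ones overwrite
    depth[v[order], u[order]] = z[order]
    return depth

# Illustrative usage with random points and made-up KITTI-like intrinsics.
K = np.array([[721.5, 0.0, 609.6],
              [0.0, 721.5, 172.9],
              [0.0, 0.0, 1.0]])
pts = np.random.uniform([-20, -2, 1], [20, 2, 80], size=(5000, 3))
sparse_depth = project_to_sparse_depth(pts, K, height=352, width=1216)
```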
This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by the authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.