Computer Vision News - August 2022

AI Research

SEE EYE TO EYE

This month we are reviewing the paper entitled "See Eye to Eye: A Lidar-Agnostic 3D Detection Framework for Unsupervised Multi-Target Domain Adaptation". We deeply thank all authors (Darren Tsai, Julie Stephany Berrio, Mao Shan, Stewart Worrall, Eduardo Nebot) for allowing us to use their images.

We start with a question: what is a LIDAR? The word stands for laser imaging, detection, and ranging, and it describes sensors that use properties of light to measure ranges and distances to objects. LIDAR sensors, in combination with 3D detection techniques, can be applied to many fields, including the one this paper focuses on: autonomous vehicles.

The performance of state-of-the-art 3D detectors varies widely across different lidars. The authors of this paper therefore look into Unsupervised Domain Adaptation (UDA) techniques that can bridge these performance gaps between lidars. Among the discussed state of the art, Yang et al. beat previous methods using a self-training approach that generates high-quality pseudo-labels. Unfortunately, this still suffers from a big limitation: it does not work on lidars with adjustable scan patterns.

Hence, Darren Tsai and colleagues propose a UDA method called SEE that works on both fixed and adjustable scan pattern lidars, without requiring a model to be fine-tuned for each new scan pattern. SEE is based on a scan-pattern-agnostic representation of objects, which enables a trained 3D detector to perform on any lidar pattern.

By Marica Muffoletto (Twitter)

Figure 1: Overview of the proposed method, SEE
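The self-training idea mentioned above can be sketched in a few lines: a detector trained on the source lidar is run on unlabelled target-lidar scans, and only its most confident detections are kept as pseudo-labels for retraining. The function name and the threshold below are illustrative assumptions, not the actual pipeline from Yang et al.

```python
def select_pseudo_labels(predictions, threshold=0.9):
    """Keep only high-confidence detections as pseudo-labels.

    predictions: list of (box, score) pairs produced by a detector on
    unlabelled target-domain scans; `box` is any opaque detection object.
    threshold: illustrative confidence cutoff (an assumption, not the
    paper's value).
    """
    return [box for box, score in predictions if score >= threshold]


# Toy usage: of three detections, only the confident ones survive
# and would be fed back as training labels on the target lidar.
preds = [("car_a", 0.97), ("car_b", 0.55), ("pedestrian_c", 0.91)]
print(select_pseudo_labels(preds))  # ['car_a', 'pedestrian_c']
```

In a full self-training loop this selection step alternates with retraining the detector on the pseudo-labelled target scans, which is why pseudo-label quality matters so much.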
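To give a feel for what a scan-pattern-agnostic representation means, here is a deliberately simplified sketch: each detected object's points are resampled to a fixed size, so the detector sees a comparable point density whether the scan came from a sparse or a dense lidar. This is only a toy stand-in for the idea; the actual object representation used by SEE is described in the paper, and the function below is an assumption for illustration.

```python
import random


def resample_points(points, n=256, seed=0):
    """Resample an object's point cloud to exactly n points.

    Dense objects (from high-resolution lidars) are randomly
    downsampled; sparse objects (from low-resolution lidars) are
    upsampled by duplicating existing points. Either way, the detector
    receives an object of fixed size, independent of the scan pattern.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    if len(points) >= n:
        return rng.sample(points, n)
    # Pad sparse clouds by duplicating randomly chosen points.
    return list(points) + [rng.choice(points) for _ in range(n - len(points))]


dense = [(float(i), 0.0, 0.0) for i in range(2000)]   # e.g. 128-beam lidar
sparse = [(float(i), 0.0, 0.0) for i in range(40)]    # e.g. 16-beam lidar
print(len(resample_points(dense)), len(resample_points(sparse)))  # 256 256
```

Density normalisation alone does not recover surface detail that a sparse lidar never captured, which is part of why the full method needs a more careful object representation than this sketch.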
