DAILY Wednesday Workshop Preview and Challenge

A question that comes up repeatedly in this workshop and in other work on 3D data is which representation of 3D objects and scenes is best suited to deep learning approaches. Unlike the 2D domain, where raster images with pixels are the de facto standard, the jury is still out on whether voxels, point clouds, triangular meshes, or other 3D representations are best for 3D scene understanding tasks (see the figure below for a visualization of some of these representations).

Figure from Andreas Geiger's talk (Occupancy Networks, CVPR 2019) showing several 3D representations. From left: voxels, point clouds, triangular meshes, and implicit neural functions.

ScanNet Indoor Scene Understanding Challenge
by Angel Chang

At this year's CVPR, we are holding the 2nd iteration of the ScanNet indoor scene understanding challenge on Friday. The goal of this challenge is to push the frontiers of 3D scene understanding from realistic 3D interiors. The workshop is co-organized by Angela Dai, Angel Chang, Manolis Savva, and Matthias Niessner. Three leading experts on 3D scene understanding will give invited talks on a variety of topics, winning teams will present the methods at the top of the challenge leaderboard, and speakers from teams that participated in last year's challenge will revisit how their methods have been extended by newer work.
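To make the representation question above concrete, here is a minimal sketch of converting one representation into another: a point cloud into a binary voxel occupancy grid. It assumes NumPy; the `voxelize` function name, its `resolution` parameter, and the unit-cube normalization are illustrative choices for this sketch, not taken from ScanNet or any of the works mentioned above.

```python
import numpy as np

def voxelize(points, resolution=32):
    """Convert an (N, 3) point cloud into a binary occupancy voxel grid.

    Illustrative sketch: points are normalized into the unit cube and
    each point marks the voxel it falls into as occupied.
    """
    points = np.asarray(points, dtype=np.float64)
    # Normalize points into [0, 1)^3, preserving the aspect ratio.
    mins = points.min(axis=0)
    scale = (points.max(axis=0) - mins).max()
    normalized = (points - mins) / (scale + 1e-9)
    # Map each point to an integer voxel index, clipped to the grid.
    idx = np.clip((normalized * resolution).astype(int), 0, resolution - 1)
    grid = np.zeros((resolution, resolution, resolution), dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

# Usage: 1000 random points occupy at most 1000 voxels of a 16^3 grid.
rng = np.random.default_rng(0)
cloud = rng.random((1000, 3))
grid = voxelize(cloud, resolution=16)
print(grid.shape, int(grid.sum()))
```

The trade-off this sketch exposes is exactly the one the debate is about: the voxel grid is regular and convolution-friendly, but its memory grows cubically with resolution, while the point cloud is compact but unordered.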