CVPR Daily - Tuesday

Learning 3D Generative Models Workshop

As computer vision increasingly moves from 2D to 3D data, methods that learn from 3D data and generate 3D data for a variety of tasks are growing in importance. This was the focus of the "Learning 3D Generative Models" workshop held on June 14th. The workshop was organized by a group of 13 researchers led by Daniel Ritchie, assistant professor at Brown University, and drew over 200 participants during the day. Eight prominent researchers gave invited talks on recent developments in data-driven approaches for 3D generation of objects and scenes. In addition, eight groups of researchers presented posters on recent work within the themes of the workshop.

Daniel Aliaga, associate professor at Purdue University, kicked the workshop off with a talk on urban scene generation from satellite imagery (see figure below).

[Figure: 3D buildings generated from aerial photographs by Angel Xuan Chang]

Another talk, by Georgia Gkioxari, research scientist at Facebook AI Research, stressed the importance of data, tools, and benchmarks for 3D. Georgia introduced PyTorch3D, a new library designed to accelerate research with 3D data and allow everyone to do efficient 3D deep learning, built around a modular and customizable differentiable renderer (see the code sketch at the end of this article).

In the following talk, Jitendra Malik motivated learning approaches to traditional 3D vision and focused on the need to move beyond strongly supervised 3D deep learning. An important question raised by the talk: can we learn to generate 3D directly from 2D images or 2D videos? Jitendra presented work with Angjoo Kanazawa that generated 3D mesh reconstructions of birds directly from images, without using 3D supervision (see figure below).
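For readers curious what working with PyTorch3D looks like in practice, here is a minimal sketch, not taken from the talk: it fits a template sphere to a target point cloud by optimizing per-vertex offsets against a Chamfer loss, in the style of the library's mesh-deformation tutorial. The target points below are random placeholders standing in for a real scan or dataset, and the loop count and learning rate are illustrative choices.

    import torch
    from pytorch3d.utils import ico_sphere
    from pytorch3d.ops import sample_points_from_meshes
    from pytorch3d.loss import chamfer_distance

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Start from an icosphere template mesh.
    src_mesh = ico_sphere(level=4, device=device)

    # Placeholder target point cloud; in practice this would come
    # from a scan or a dataset sample.
    trg_points = torch.rand(1, 5000, 3, device=device)

    # Learnable per-vertex displacement field.
    deform_verts = torch.zeros(
        src_mesh.verts_packed().shape, device=device, requires_grad=True
    )
    optimizer = torch.optim.SGD([deform_verts], lr=1.0, momentum=0.9)

    for i in range(200):
        optimizer.zero_grad()
        # Apply the current offsets to the template mesh.
        new_mesh = src_mesh.offset_verts(deform_verts)
        # Differentiably sample points from the deformed surface.
        sample_points = sample_points_from_meshes(new_mesh, 5000)
        # Chamfer distance between sampled and target point clouds.
        loss, _ = chamfer_distance(sample_points, trg_points)
        loss.backward()
        optimizer.step()

Because every step here is differentiable, gradients flow from the point-cloud loss back to the mesh vertices, which is the kind of 3D deep learning workflow the library is meant to make efficient.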
