ECCV 2020 Daily - Monday

Oral Presentation

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

Pratul Srinivasan is an almost-graduated PhD student at UC Berkeley and has just started work as a research scientist at Google Research. This is his first time at ECCV, although he has presented papers before at CVPR and ICCV. He speaks to us ahead of his oral session today, where he will present his paper on a 3D representation for the purpose of view synthesis.

The work is called NeRF, or Neural Radiance Fields, and proposes estimating a 3D representation of a scene or an object in order to re-render new viewpoints that have never been seen before, with high enough visual fidelity to produce photo-realistic images.

"Up to now, good results in estimating 3D models for synthesising photo-realistic novel views of objects and scenes have come from using discrete volumetric grid-like structures," Pratul tells us. "What people do, and do really well, is take a bunch of pictures and have a deep network that takes those pictures and predicts some sort of discrete voxel grid. Then you can do alpha compositing along any ray into a new camera and render a picture that way."

That approach works really well, but its main limitation is that as you want to make higher and higher resolution pictures and represent more and more complex scenes, it takes almost intractable amounts of storage to scale up the voxel grid representation.

"We're able to represent the same complex scene with only five megabytes!"

One of the big differences between NeRF and what has been done previously is that instead of instantiating one very large voxel grid, a network regresses to a continuous representation of that volume. At any continuous position in the space, you can query the network and ask: what is the actual color of light emitted by a point here? In addition to that, the method uses view dependence, so the color returned at any 3D position also depends on the direction from which the point is viewed.
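To make that contrast concrete, here is a minimal sketch of the continuous query idea. This is not code from the paper: the tiny randomly initialised two-layer network, its sizes, and the name query_field are illustrative stand-ins for the trained NeRF model. The point is simply that any continuous coordinate and viewing direction can be mapped to a color and a density, with no voxel grid stored anywhere.

```python
import numpy as np

# A minimal sketch, not the authors' code: a tiny random MLP stands in
# for the trained NeRF network. It maps a continuous 3D position plus a
# viewing direction to an emitted RGB color and a volume density sigma.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(6, 64))   # input: (x, y, z, dx, dy, dz)
W2 = rng.normal(scale=0.1, size=(64, 4))   # output: (r, g, b, sigma)

def query_field(position, view_dir):
    """Query the continuous scene representation at one 3D point."""
    x = np.concatenate([position, view_dir])
    h = np.maximum(W1.T @ x, 0.0)           # ReLU hidden layer
    out = W2.T @ h
    rgb = 1.0 / (1.0 + np.exp(-out[:3]))    # colors squashed to [0, 1]
    sigma = np.log1p(np.exp(out[3]))        # softplus keeps density >= 0
    return rgb, sigma

# Any continuous coordinate can be queried -- nothing is discretised,
# and the viewing direction input is what gives view dependence.
rgb, sigma = query_field(np.array([0.1, -0.25, 0.7]),
                         np.array([0.0, 0.0, -1.0]))
```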
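The rendering step Pratul describes, alpha compositing along a ray into a new camera, can be sketched in the same spirit. This is a hedged illustration of standard volume-rendering quadrature rather than the paper's exact procedure; it reuses the hypothetical query_field above, and the sample count and near/far bounds are arbitrary choices for the example.

```python
def composite_ray(origin, direction, field, near=0.0, far=4.0, n_samples=64):
    """Numerically integrate color along one camera ray.

    `field` is any callable mapping (position, view_dir) -> (rgb, sigma),
    such as query_field above.
    """
    t = np.linspace(near, far, n_samples)       # sample depths along the ray
    delta = t[1] - t[0]                         # spacing between samples
    color = np.zeros(3)
    transmittance = 1.0                         # light surviving so far
    for ti in t:
        rgb, sigma = field(origin + ti * direction, direction)
        alpha = 1.0 - np.exp(-sigma * delta)    # opacity of this segment
        color += transmittance * alpha * rgb    # front-to-back compositing
        transmittance *= 1.0 - alpha
    return color

# One pixel of a novel view: shoot a ray from a camera position and
# composite the field's colors and densities along it.
pixel = composite_ray(np.array([0.0, 0.0, 2.0]),
                      np.array([0.0, 0.0, -1.0]),
                      query_field)
```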

RkJQdWJsaXNoZXIy NTc3NzU=