Best Paper Honorable Mention

The other method is focus stacking, the most common go-to approach for photographers: they sweep the focus across the depth range, capturing one photo at each depth, and then computationally fuse the sharpest parts of the stack into a single all-in-focus image.

Both of these common approaches have their own drawbacks. Yingsi's method maintains a large aperture, so you do not have to use a long exposure; it avoids defocus blur even across an extreme depth range; and it does not rely on computational post-processing to produce the in-focus result.

There are two parts that go into this work, Yingsi explains: "It's a work that combines hardware and software. There are two key innovations that enable this work. One is the optics, the camera itself, which enables us to have spatial control of focus. The other is the algorithm, which tells us what kind of control to put into the camera. For example, if I want the focal surface to conform to the scene geometry, I need the depth map of the scene. The optics, which is the hardware of this work, enables us to perform all-in-focus imaging as long as we have this depth map. The algorithm is what gives us the depth map."

This did not go without challenges. The first came when Yingsi was building the first iteration of the prototype, almost two years ago. It was very different from the current one: it used a totally different set of lenses and a different sensor, a machine vision sensor, with 50-millimeter lenses for the relay. She played around with that setup for a few months, but the 50-millimeter lenses she was using turned out to produce too much chromatic aberration in the prototype. The other challenge is that the machine vision sensor allows
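For readers unfamiliar with the baseline, here is a minimal sketch of the focus stacking fusion described above, assuming OpenCV and NumPy are available. The function name and the Laplacian sharpness heuristic are illustrative choices for a common fusion strategy, not part of Yingsi's method.

```python
import cv2
import numpy as np

def fuse_focus_stack(stack):
    """Fuse a list of same-size BGR images, each focused at a different
    depth, into one all-in-focus image by picking, per pixel, the frame
    with the highest local sharpness."""
    grays = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in stack]
    # Local sharpness: absolute Laplacian response, lightly blurred so
    # the per-pixel winner is stable across neighboring pixels.
    sharpness = np.stack([
        cv2.GaussianBlur(np.abs(cv2.Laplacian(g, cv2.CV_64F)), (9, 9), 0)
        for g in grays
    ])
    best = np.argmax(sharpness, axis=0)   # index of sharpest frame per pixel
    stack_arr = np.stack(stack)           # shape (N, H, W, 3)
    rows, cols = np.indices(best.shape)
    return stack_arr[best, rows, cols]    # gather per-pixel winners
```

The drawbacks mentioned in the article are visible even in this sketch: the stack needs one capture per depth, and the all-in-focus image only exists after post-processing.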
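The division of labor in Yingsi's quote, where the algorithm supplies a depth map and the optics turn it into spatially varying focus, can be illustrated with a purely hypothetical sketch. The paper's actual optical control is not described here; the function focal_surface_from_depth and the diopter mapping below are assumptions based only on the thin-lens relation that focusing at distance z requires an optical power of roughly 1/z.

```python
import numpy as np

def focal_surface_from_depth(depth_map_m, power_range_diopters=(0.0, 10.0)):
    """Hypothetical sketch: convert a per-pixel depth map (meters) into a
    per-pixel focusing-power map (diopters), so that the camera's focal
    surface conforms to the scene geometry."""
    lo, hi = power_range_diopters
    power = 1.0 / np.maximum(depth_map_m, 1e-3)  # diopters = 1 / meters
    return np.clip(power, lo, hi)                # respect hardware limits
```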