
This project creates adaptive 3D sensors based on information maximization. Instead of treating a 3D sensor as a black box and asking what to do with the 3D data we get, or where to move the sensor to learn more about the scene, we are asking: if I already know something about the environment, and I have a specific task in mind, how should the sensor operate?


The CSAIL/MIT team focused on structured light scanners, which project light patterns over time and acquire images. The question is: which pattern should the sensor project next? For example, to reconstruct a specific object that moves, you may not need to illuminate the whole scene. If you just need to localize in a known environment or help a robot avoid obstacles, you may need radically different patterns than if you want full scene reconstruction for augmented reality.

This research paper explores such options based on information maximization and sensor planning. Taking these concepts from decision theory and robotics, the team shows that, with the right probabilistic model, they can be used inside the sensor as well.

The team chose a probabilistic model that incorporates scene and sensor-pose uncertainty, yet allows them to approximate the information gain between the acquired images and subsets of scene variables, such as the sensor pose or aspects of the geometry. The model does so in a highly parallel way, which, it is hoped, will make it useful for real systems.
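The core idea of choosing the next pattern by information gain can be sketched in a toy example. The sketch below is an illustration of the general decision-theoretic principle, not the paper's model: it assumes a small discrete belief over sensor poses and a known likelihood table for each candidate pattern, then picks the pattern whose observation is expected to reduce pose entropy the most (i.e., the pattern with maximal mutual information between pose and image). All names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not from the paper): a discrete belief over
# 4 possible sensor poses and 3 candidate light patterns. For each
# pattern we assume a known likelihood p(image outcome | pose, pattern)
# over a small set of discrete image outcomes.
n_poses, n_patterns, n_outcomes = 4, 3, 5
prior = np.full(n_poses, 1.0 / n_poses)          # p(pose)

# Random but normalized likelihood tables p(outcome | pose, pattern).
lik = rng.random((n_patterns, n_poses, n_outcomes))
lik /= lik.sum(axis=2, keepdims=True)

def entropy(p):
    """Shannon entropy of a discrete distribution (nats)."""
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def expected_info_gain(pattern):
    """Mutual information I(pose; image) for one candidate pattern."""
    joint = prior[:, None] * lik[pattern]         # p(pose, outcome)
    p_out = joint.sum(axis=0)                     # marginal p(outcome)
    # Expected posterior entropy E_outcome[ H(pose | outcome) ].
    h_post = sum(p_out[o] * entropy(joint[:, o] / p_out[o])
                 for o in range(n_outcomes) if p_out[o] > 0)
    return entropy(prior) - h_post

gains = [expected_info_gain(k) for k in range(n_patterns)]
best = int(np.argmax(gains))
print(f"best pattern: {best}, expected info gain: {gains[best]:.3f} nats")
```

In a real sensor the pose space and image space are continuous and high-dimensional, so the paper's contribution is precisely a model in which this information gain can be approximated efficiently and in parallel; the greedy argmax structure, however, is the same.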

Adaptive 3D scanners, and the concepts shown, are expected to result in more efficient and accurate sensors, better suited to the many roles we expect robots and mobile devices will play in the future.


CVPR Daily: Thursday

Guy Rosman - CSAIL/MIT