Computer Vision News - February 2022

Exclusive interview

“A lot of the time I would love to try out an idea with just one experiment and then put it online as a mini paper, and that’s it. We don’t have a mechanism for that in the community yet…”

Have you had any eureka moments recently?

The project that had the most impact on me was VoxelMorph, with Guha Balakrishnan, Amy Zhao, Mert Sabuncu, and John Guttag. It was a really weird moment when Guha and I came up with it. Usually, you come up with an idea, you try it out, that takes a few days, it doesn’t work, and you iterate. But with Guha, we had this idea, we tried it, and in two days he came back to me and said, “Okay, it works. Now what?” [we both laugh] I was like, “No, no, no, it can’t work. It’s the first attempt.” He was like, “No, no, it works. It just works! Look, here’s the result.” Yeah, we were really happy, but at the same time, I felt he wasn’t getting the real student experience! What I’m really excited about with VoxelMorph is not just that it’s this registration algorithm…

Are there any other tasks you’re working on at the moment?

We have a series of projects on how we can get more and more general networks within medical imaging. So many of our models are enormously narrow. Segmenting the hippocampus in the brain, for example, is a popular task, but it’s narrow: you’re only segmenting the hippocampus, you’re only segmenting it on a particular type of MRI, and it’s only for the brain. It would be nice to have a system that’s independent of input modality or task. You can tell it what you want to segment and it just segments. You don’t have to retrain it every time, pretend it’s a new problem, and restart the project. How can we generalize from these things in a way that is completely robust to image data type, modality, or even task?
