Computer Vision News - February 2022

Adrian Dalca

Welcome, Adrian! It must be so interesting to work at both Harvard Medical School and MIT. Can you tell us more?

At MIT, we have strong technical students who create a very technical environment. We talk about the latest machine learning trends. At Harvard, we have some people who are very technical, but the work I do is more around healthcare and neuroscience – things the clinicians care about. My projects are heavily inspired by both institutions. I get the technical aspects more from MIT and the applied aspects more from Harvard. If you look at any of my papers from the last couple of years, you will see people from both places on there.

That's really nice – combining their talents.

Yes, it's a very nice set-up. I'm lucky because there are always good people on either side. If you only work on one, it's very easy to miss the other, or not be realistic about it.

What is your work about?

I work on machine learning models for medical image analysis. I like to think about, and try to solve, problems that were not easily solvable before.

Can you give us an example?

You're putting me on the spot here! One of the common problems we have is that, to make our algorithms work, we need to tune hyperparameters. Armies of students end up repeatedly tuning and retraining. It's a horrible, lengthy process, and it doesn't show up in our papers. We never say, "We tuned this for six months!" We just give the best result. But the truth is it's really challenging and frustrating: it makes it hard to deploy these algorithms, because you give them to clinicians and then the clinicians have to tune them too.

Which isn't their job.

Exactly. A new line of our projects is looking at how we can substantially alleviate this. Rather than giving you the best hyperparameter, we're acknowledging there isn't a best hyperparameter – it varies – but we've created a framework so that tuning the best hyperparameter is not much effort. We have an algorithm that says if you change this hyperparameter by this much, here's the result, and you get it instantly. There's no more retraining. A clinician can tune it in real time, and they'll see the result in real time. Our goal is to eliminate these nuisance processes. I don't want clinicians to worry about how the software works. I don't want graduate students to worry about retraining. Just train one model and be done with it. Then at test time you can tune it a little bit, but it's substantially less mundane work. That's just one example of a task I feel we've swept under the rug.
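The "train one model, tune the hyperparameter instantly at test time" framework Dalca describes is in the spirit of hypernetwork-based amortized hyperparameter learning: a small auxiliary network takes the hyperparameter as input and emits the weights of the main model, so sweeping the hyperparameter needs no retraining. The toy sketch below illustrates that idea on a 1-D ridge-regression problem; the problem setup, network sizes, and all names are illustrative assumptions, not the actual system from his group.

```python
import numpy as np

# Toy sketch of "train once, tune instantly" (assumed setup, not the real method):
# a hypernetwork h(lam) emits the weight w of a main model, so the regularization
# hyperparameter lam can be swept at test time with no retraining.

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2.0 * x + 0.1 * rng.normal(size=50)   # toy data for the main model
n = len(x)

# Hypernetwork: lam -> w, a 1-16-1 tanh MLP with hand-rolled gradients.
W1 = rng.normal(size=16); b1 = np.zeros(16)
W2 = 0.1 * rng.normal(size=16); b2 = 0.0

def hyper(lam):
    h = np.tanh(W1 * lam + b1)            # hidden activations
    return W2 @ h + b2, h                 # predicted main-model weight w

lr = 0.01
for step in range(20000):
    lam = rng.uniform(0.0, 2.0)           # sample a hyperparameter each step
    w, h = hyper(lam)
    # Ridge-style loss the main model would normally be retrained with:
    #   L = mean((w*x - y)^2) + lam * w^2
    dL_dw = 2.0 * np.mean((w * x - y) * x) + 2.0 * lam * w
    # Backpropagate through the hypernetwork's parameters.
    dh = dL_dw * W2 * (1.0 - h ** 2)
    W2 -= lr * dL_dw * h; b2 -= lr * dL_dw
    W1 -= lr * dh * lam;  b1 -= lr * dh

def w_closed_form(lam):                   # exact ridge minimizer, for checking only
    return np.sum(x * y) / (np.sum(x ** 2) + n * lam)

# "Test time": any lam gives an answer instantly, with no retraining loop.
for lam in (0.25, 0.5, 1.0, 1.5):
    w, _ = hyper(lam)
    print(f"lam={lam:4.2f}  hypernet w={w:6.3f}  closed form={w_closed_form(lam):6.3f}")
```

After the single training run, the loop at the bottom evaluates several hyperparameter settings in microseconds each, which is what makes the real-time, clinician-facing tuning Dalca mentions possible.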
