Computer Vision News - October 2018

Tanya Nair has just finished her Master of Engineering at McGill University under the supervision of Tal Arbel. She spoke to us ahead of her oral presentation and poster session at MICCAI on Exploring Uncertainty Measures in Deep Networks for Multiple Sclerosis Lesion Detection and Segmentation.

Tanya says that, as we all know, deep networks are ubiquitous, and we have seen a huge application of these methods at MICCAI and in other medical imaging domains. However, their adoption in the medical imaging context has been slower than in computer vision, where these techniques typically come from. One of the reasons is that when a network makes a prediction, it does not say how much confidence it really has in that prediction. What is missing is a way of interpreting the predictions made by these networks. There has been a lot of recent work on interpretability in machine learning, and very recent work from computer vision on uncertainty in deep networks.

Tanya's work explores a specific mechanism for evaluating uncertainty in the predictions made by deep nets, and tries to extend these measures and evaluate the validity of using them. Tanya explains further: “Typical applications of uncertainty in deep networks use a method called dropout, which is a popular regularisation technique used in the training of the model. A recent method from computer vision says that we can apply dropout at test time and get multiple predictions. Statistics of all of these samples can be used to form an approximation to an uncertainty. This has been used in a few different approaches.”

Tanya's work tries to answer the question: are these uncertainties related to the classification output? Are they related to the prediction output when we are making predictions about pathology, segmentation and detection? Previous work has only used these uncertainties to improve the performance of the network, but because we want to use uncertainty information in a clinical workflow to support clinical analysis, we need to make this connection. The network is making a prediction, and we have these methods to evaluate uncertainty, but are they meaningful?

Tanya tells us that in order to do this, she poses a simple hypothesis: uncertain predictions should also be incorrect predictions. If we have a network that can classify its predictions into one of…
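For readers curious what the test-time dropout procedure Tanya describes looks like in code, here is a minimal PyTorch-style sketch. It is not her implementation: the sigmoid output, the number of samples, and the choice of sample variance as the uncertainty statistic are illustrative assumptions (other statistics, such as predictive entropy, are also common in the Monte Carlo dropout literature).

```python
import torch


def mc_dropout_predict(model, x, n_samples=20):
    """Monte Carlo dropout sketch: keep dropout active at test time,
    run several stochastic forward passes, and use the spread of the
    samples as a rough uncertainty estimate."""
    model.eval()
    # Re-enable only the dropout layers; everything else stays in eval mode.
    for module in model.modules():
        if isinstance(module, (torch.nn.Dropout, torch.nn.Dropout2d, torch.nn.Dropout3d)):
            module.train()

    with torch.no_grad():
        # n_samples stochastic predictions (sigmoid assumes a binary lesion map).
        samples = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])

    mean_pred = samples.mean(dim=0)   # averaged prediction
    uncertainty = samples.var(dim=0)  # sample variance as an uncertainty proxy
    return mean_pred, uncertainty
```

High-variance voxels or detections would then be the ones flagged as uncertain, which is the kind of signal one could check against whether the prediction was actually correct.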
