MICCAI 2019 Tuesday

Poster Presentation

Image Data Validation for Medical Systems

Pablo Márquez Neila is a senior research assistant and Raphael Sznitman is a professor in biomedical engineering at the ARTORG Center for Biomedical Engineering at the University of Bern. They speak to us ahead of their poster today.

[Photo: Pablo Márquez Neila (left) with supervisor Raphael Sznitman]

Artificial intelligence and deep learning are becoming more and more common in critical tasks such as medical diagnosis. These tools are also becoming more autonomous: images are fed to neural networks, and the networks produce answers without a human in the loop to look at or assess those images. Pablo tells us that previous research has shown neural networks to be bad at predicting something they have not been trained for. This work proposes that every such system should include some form of image validation so that it can process images safely. The method learns the appearance of the images in the training dataset and identifies when an input image deviates from the training distribution and therefore cannot be safely evaluated.

Raphael explains: “This work really started in the context of what we see in computer vision. There are always examples showing a network trained to detect cats and dogs, for instance, and then an image of an aeroplane is provided, and the network of course gives an answer. We thought about this concept in terms of what it means for medical imaging and the broader MICCAI community. This is where we started looking at this problem as an out-of-distribution detection problem. That’s where this work really fits in: being able to find outliers in your data so that, in critical systems, you can avoid the network doing something that it really shouldn’t be doing. That’s really the heart of what this work is about. How do you do that in a systematic and high-level way, but also in an end-to-end way, in order to have a coherent method that’s applicable to a really wide range of cases?”

Pablo goes on to say that in the context of MICCAI he has seen many …
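To make the idea of input validation concrete, the sketch below shows one common way to flag out-of-distribution images: train a small autoencoder on the same images the diagnostic network was trained on, calibrate a reconstruction-error threshold on held-out in-distribution data, and reject any input whose error exceeds it. This is a minimal illustration of the general technique, not the authors' published method; the model architecture, layer sizes, and threshold rule here are all assumptions.

    # Minimal sketch of autoencoder-based out-of-distribution gating.
    # Assumes 1-channel images with even height/width; all hyperparameters
    # are illustrative, not taken from the paper.
    import torch
    import torch.nn as nn

    class ConvAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            # Encoder: compress the image to a small latent feature map.
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            # Decoder: reconstruct the image from the latent map.
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def reconstruction_error(model, x):
        # Per-image mean squared reconstruction error.
        with torch.no_grad():
            return ((model(x) - x) ** 2).mean(dim=(1, 2, 3))

    def is_in_distribution(model, x, threshold):
        # Accept an image for downstream diagnosis only if its
        # reconstruction error stays below the calibrated threshold.
        return reconstruction_error(model, x) < threshold

After training the autoencoder on in-distribution images, the threshold could be set as, say, the 95th-percentile reconstruction error on a validation set, and every incoming image would be gated by is_in_distribution before the diagnostic network ever sees it. An image from outside the training distribution, like the aeroplane in Raphael's example, would typically reconstruct poorly and be rejected rather than silently classified.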
