MIDL Vision 2020

Oral Presentation 14

“Because we have limited data, we have to do lots of data augmentations,” Richard explains. “For it to work as a quality control system, it needs to know what a noisy image looks like, what a movement artefact looks like, what a radio frequency spike looks like. As we’re training, we do k-space augmentations. The uncertainty is learnt from the data, so it can’t extrapolate very well to other kinds of images. If you train on pictures of cats, for example, and then give it a picture of a dog, it might not know what to do. It won’t make sense for a quality control system. We have to train with the specific artefacts that we want to decouple. So, we train with noise. We have a movement artefact model, RF spike, blurring. A whole bunch of augmentations.”

Ensuring datasets are of good quality for any further analysis, be it deep learning, machine learning or any other kind of downstream analysis of predictions, is a key application of this work.

Thinking about next steps, Richard says it can be hard to define image quality, because what a human thinks constitutes good quality might be different to what an algorithm requires. He wants to compare his uncertainty predictions against quality control raters to validate the model. He is in the process of collecting more data with quality control labels, including scans that have been rejected, the reasons why they were rejected, and what the human rater says about the image (if the patient has moved, for example) to do more extensive validation of the model.

To learn more about Richard’s work [O171], you are invited to visit Oral Session #5 – Image Generation at 08:30-09:30 today and Poster Session #5 at 09:30-11:00. This is his teaser presentation for MIDL 2020:
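As a footnote for readers curious about the k-space augmentations Richard mentions: the general idea of simulating an RF spike artefact can be sketched in a few lines of numpy. This is a minimal illustration, not his implementation; the function name, spike placement and magnitude parameter are all assumptions made for the example.

```python
import numpy as np

def rf_spike_augmentation(image, spike_magnitude=0.2, rng=None):
    """Simulate an RF spike artefact by corrupting one k-space sample.

    `image` is a 2-D real array. A single k-space coefficient gets a
    large complex perturbation, which appears in image space as the
    characteristic striped "corduroy" pattern of an RF spike.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Move to k-space (shift so the zero frequency sits at the centre).
    kspace = np.fft.fftshift(np.fft.fft2(image))
    h, w = kspace.shape
    # Pick a random location for the spike.
    y = rng.integers(0, h)
    x = rng.integers(0, w)
    # Spike amplitude relative to the strongest k-space coefficient,
    # with a random phase.
    amp = spike_magnitude * np.abs(kspace).max()
    phase = rng.uniform(0.0, 2.0 * np.pi)
    kspace[y, x] += amp * np.exp(1j * phase)
    # Back to image space; keep the real part of the reconstruction.
    return np.real(np.fft.ifft2(np.fft.ifftshift(kspace)))

# Example: corrupt a synthetic square phantom.
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0
corrupted = rf_spike_augmentation(clean, spike_magnitude=0.1,
                                  rng=np.random.default_rng(0))
```

In a training pipeline, augmentations like this are applied on the fly so the model sees artefact-corrupted inputs paired with the knowledge of which artefact was injected, which is what lets the uncertainty be learnt from the data.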