MIDL Vision 2021

In the context of chest X-rays this has been done before, but with MRI scans it has proved more difficult. It all comes down to the ability to acquire large training datasets. In order to create the triage tool, you need a training dataset with tens of thousands of labeled images. A radiologist would have to go through each one, which with 2D X-rays is already very hard, but MRIs are 3D images with multiple sequences, and each image itself might be a stack of 50 images, so it is even more time-consuming.

The automated tool David and his team have designed can read a radiology report and decide, without looking at the image, whether it is normal or abnormal, and then go back to the image and assign a label. It can quickly scan through 10 years’ worth of images and reports from a hospital – this may be 50,000-100,000 scans – and where it would take a human annotator many years to process them all, this tool can do it in half an hour.

“In that part of it there is no imaging involved,” David reveals. “It’s a text model that learns to read a report and decide if it’s normal or abnormal. We’ve got a team of radiologists who taught the text model how to read radiology reports, so that took a little time, but once it’s done, you can quickly generate huge training datasets. Our training dataset was 54,000
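The sketch below illustrates the general idea described here: train a text classifier on a small set of reports labeled by radiologists, then use it to assign normal/abnormal labels to the rest of the archive without looking at the images. It is a minimal, hypothetical example; the classifier, file names, and label scheme are assumptions for illustration, not the actual pipeline David’s team built.

```python
# Illustrative sketch of report-based weak labeling (assumed file names and columns).
import csv
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1) A small set of reports hand-labeled by radiologists (hypothetical CSV:
#    columns report_text, label with 0 = normal, 1 = abnormal).
with open("radiologist_labeled_reports.csv") as f:
    rows = list(csv.DictReader(f))
texts = [r["report_text"] for r in rows]
labels = [int(r["label"]) for r in rows]

# 2) Train a simple text classifier on the radiologist-labeled reports.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)

# 3) Apply it to the full archive of reports to generate image-level labels
#    without ever opening the scans themselves (hypothetical CSV: scan_id, report_text).
with open("archive_reports.csv") as f:
    archive = list(csv.DictReader(f))
weak_labels = {
    r["scan_id"]: int(clf.predict([r["report_text"]])[0]) for r in archive
}
```

In practice a model of this kind only has to be trained once on the radiologist-annotated reports; after that, labeling an entire archive of reports is fast enough to turn decades of routine hospital data into a training set.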
