MICCAI 2020 Daily - Wednesday

Paper Presentation

Cross-Domain Image Translation by Shared Latent Gaussian Mixture Model
Yingying Zhu

Yingying Zhu is an Assistant Professor in the Computer Science Department at the University of Texas at Arlington. Her work on a new image translation model is the latest in a long line of papers she has had accepted at MICCAI over the years. She speaks to us ahead of her oral presentation today.

This work proposes an image translation model to convert pre-contrast CT images to post-contrast CT images and back again. Unlike existing image translation models, it focuses on the detailed structure of medical images, using a patch-based extraction and reconstruction system and a Gaussian mixture model to produce more detailed translated images. Applied to calcified plaque detection in the vascular system, it preserves the fine detail and small structures of the medical images after translation better than several state-of-the-art methods.

Collecting labeled ground-truth data for deep learning medical image segmentation models can cost a lot of time and money. By using this model as a data augmentation strategy, instead of labeling images in each domain separately, you can convert a labeled pre-contrast CT image to post-contrast CT, or a labeled post-contrast CT image to pre-contrast CT. You then have labeled images in two different domains, increasing the capacity of your trained models.

"It is an unsupervised model for image translation," Yingying explains. "Or self-supervised, because there is a self-supervised image translation loop so the image can be translated back. The method can also adapt to a weakly supervised environment or even a supervised one. By adding in more supervised information, we get a better model."
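The "self-supervised image translation loop" Yingying describes, and the labeled-data augmentation it enables, can be sketched in a few lines. This is a minimal toy illustration, not the paper's method: the translator names `g` (pre-contrast to post-contrast) and `f` (the reverse) are hypothetical, and the learned networks are stood in for by simple invertible functions so the cycle closes exactly.

```python
import numpy as np

# Hypothetical stand-ins for the two learned translators.
# In the paper these would be networks sharing a latent Gaussian
# mixture space; here they are toy invertible maps for illustration.
def g(x):
    """Toy pre-contrast -> post-contrast translation."""
    return 1.5 * x + 0.2

def f(y):
    """Toy post-contrast -> pre-contrast translation (inverse of g)."""
    return (y - 0.2) / 1.5

def cycle_loss(x):
    """Self-supervised loop: translate forward, translate back,
    and penalise the reconstruction error against the original.
    No labels are needed, which is what makes it self-supervised."""
    return float(np.mean(np.abs(f(g(x)) - x)))

def augment(dataset):
    """Data augmentation: each labeled pre-contrast image also yields
    a translated post-contrast copy that reuses the same label."""
    out = []
    for img, label in dataset:
        out.append((img, label))       # original domain
        out.append((g(img), label))    # translated domain, same label
    return out

rng = np.random.default_rng(0)
pre_ct = rng.random((4, 4))            # stand-in for a pre-contrast CT patch
print(cycle_loss(pre_ct))              # near zero, since f inverts g exactly
print(len(augment([(pre_ct, "plaque")])))  # one labeled image becomes two
```

In the real model the two translators are trained jointly, so the cycle loss is minimised rather than exactly zero; the augmentation step is why a single labeled domain can supervise models in both domains.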
