Computer Vision News - October 2020

Research

Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems

Every month, Computer Vision News selects a research paper to review. This month it's the turn of "Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems". We are indebted to the authors (Xingjun Ma, Yuhao Niu, Lin Gu, Yisen Wang, Yitian Zhao, James Bailey, Feng Lu) for allowing us to use their images to illustrate this review. You can find their paper at this link.

by Marica Muffoletto

Introduction

Much of today's research in AI and computer vision is aimed at improving the performance of deep neural networks (DNNs) on specific challenges. On the one hand, this is an extremely exciting direction for science, and promising results appear every day, creating gripping visions of new automatic tools. On the other hand, we should be aware that there are safety issues to address thoroughly before these systems can actually be brought to market. One of the best tools to this end is to probe these very DNNs with adversarial attacks: "slightly perturbed input instances that can perfectly fool DNNs". It is important to highlight that studying how these deep learning systems can be tricked or fooled is currently one of the main priorities in the field. We certainly would not want easily fooled systems to be used for something as delicate as medical diagnosis or autonomous driving.

Figure 1: examples of adversarial attacks among different datasets
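To make the idea of a "slightly perturbed input instance" concrete, here is a minimal sketch (not taken from the paper) of one classic attack, the Fast Gradient Sign Method: it nudges every pixel by at most epsilon in the direction that increases the classifier's loss, so the change is imperceptible to a human but can flip the prediction. The model, the inputs and the epsilon value are placeholders you would replace with your own.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Return a slightly perturbed copy of `image` that aims to fool `model`.

    model   : a classifier returning class logits
    image   : input tensor of shape (N, C, H, W), values in [0, 1]
    label   : ground-truth class indices, shape (N,)
    epsilon : perturbation budget (maximum per-pixel change)
    """
    image = image.clone().detach().requires_grad_(True)

    # Forward pass and loss with respect to the true label
    logits = model(image)
    loss = F.cross_entropy(logits, label)

    # Gradient of the loss with respect to the input pixels
    loss.backward()

    # Take one step that increases the loss, then clip back to the valid pixel range
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

Feeding the returned tensor back through the same model will often produce a different, wrong prediction, even though the two images look identical to the eye; the paper reviewed here studies why medical imaging models are especially vulnerable to such perturbations.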
