Computer Vision News - February 2023

TAME: Attention Mechanism Based...

Winning the Best Paper Award at IEEE ISM is a fantastic achievement and no mean feat. What do they think made their paper stand out to the judges? “This is something you can only guess, right?” Vasileios ponders. “We presented a well-performing method that generates good explanations, and we do it efficiently with a single forward pass of the network. Our main competitors have the disadvantage of doing hundreds or thousands of forward passes through the network to generate the explanation for a single input image. That is how perturbation-based methods work. Our approach could do as well or even better.”

The team also showed the importance of extracting explanations in practice, with examples of how they can help data scientists understand what a classifier has learned to recognize. “For instance, we trained a classifier to recognize the concept of sunglasses, but by looking at the explanation, we saw that the classifier had actually learned the concept of a person wearing sunglasses, not the sunglasses themselves,” Vasileios reveals. “These differences between what we expect the classifier to learn and what it has actually learned could be due to biases in the training data, demonstrating the practical importance of this approach.”

Nikolaos adds: “Another thing is that we made an extensive ablation study to ablate all the different parts of the method, which convinces the reader and helps them understand the value of each small part of the method. I imagine this was also a key factor for the judges!”

BEST PAPER AWARD - IEEE ISM
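To make the efficiency contrast concrete, here is a minimal sketch, not the authors' TAME code, of why perturbation-based explanations are expensive: a toy occlusion method must re-run the model once per masked patch, while an attention-based method can read its explanation off intermediate activations in a single forward pass. The `toy_model` below is a hypothetical stand-in for a trained classifier.

```python
import numpy as np

def toy_model(image):
    # Hypothetical stand-in for a CNN classifier: the "class score" is the
    # mean intensity of a fixed region, so a good explanation highlights it.
    return float(image[8:16, 8:16].mean())

def occlusion_saliency(model, image, patch=4):
    """Perturbation-based explanation: occlude each patch in turn and
    re-run the model, so forward passes scale with image size."""
    h, w = image.shape
    saliency = np.zeros((h, w))
    base_score = model(image)
    n_passes = 0
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            perturbed = image.copy()
            perturbed[i:i + patch, j:j + patch] = 0.0  # mask one patch
            # Score drop when a patch is hidden = importance of that patch
            saliency[i:i + patch, j:j + patch] = base_score - model(perturbed)
            n_passes += 1
    return saliency, n_passes

image = np.random.default_rng(0).random((32, 32))
saliency, n_passes = occlusion_saliency(toy_model, image)
print(n_passes)  # 64 forward passes to explain one 32x32 input
# An attention-based method like TAME instead derives the explanation from
# the feature maps of a single forward pass, i.e. n_passes would be 1.
```

Even at this toy scale the perturbation approach needs 64 forward passes for one explanation; at realistic resolutions and finer occlusion grids this grows into the hundreds or thousands of passes the interview mentions.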
