Poster Presentation

Multi-Scale Grouped Prototypes for Interpretable Semantic Segmentation

Hugo Porta is a PhD student at EPFL in Lausanne. He is also the first author of a paper that was accepted as a poster at WACV 2025.

The authors are interested in the concept of prototype learning for interpretability in semantic segmentation. They came across ProtoSeg, a model for interpretable semantic image segmentation, and when they read about it, they realized that one of its main limitations was that it did not integrate scale into the interpretability process. How can we bring scale into prototype learning? Does it help performance, and does it help interpretability? “The whole paper is about this,” reveals Hugo, “and building an architecture around this principle. I brought several ideas on the topic of prototypes for semantic segmentation to my PI, and discussing them with him, we realized that this would be the most interesting direction to explore!”

Of course, we need interpretability because we need to understand the decision process of black-box models. This is all the more true if we want to use AI for critical tasks where human lives are at stake, such as medical applications. In this specific case, Hugo is applying it to wildfires. If we want to use AI for such tasks, we need to be able to understand the decision process of the model, so that we need not worry about errors that could impact users.

The evaluation of the method was the major challenge in this work. Hugo thinks the method shows a fair improvement in terms of IoU, but it is hard to convey to the reviewers why it would also be better in terms of interpretability. “I think this was the most difficult part,” claims Hugo, “because there…