CVPR Daily - Tuesday

“We wanted something that could be used in the real world, because acquiring annotations, which is usually very expensive, is a challenge we all face. We also wanted to create a model that could incrementally extend its internal knowledge over time.”

Fabio tells us that finding a jumping-off point for developing the model was the most challenging part: “We tried lots of different things in the beginning. You won’t read about those in the paper because they’re failed experiments! The tough part was finding something that might work, even if not that well, but just as a starting point to develop our method.”

The team combined two well-known problems: incremental, or continual, learning and weakly supervised segmentation. Continual learning is a fundamental idea in machine learning, where you want to extend a model’s knowledge over time; it has been investigated in image classification and, more recently, in semantic segmentation. Weakly supervised semantic segmentation is also a long-standing task, but this is the first time the two have been combined. The team hopes it will become a new field.

To train incrementally over time, the model uses knowledge from previous steps to preserve the classes it learned earlier and avoid forgetting them. The team uses techniques such as a segmentation loss, which lets the model predict a label for each individual pixel, and in this weakly supervised setting a common solution is to average the model’s pixel-wise scores so it can be trained with image-level labels. “We use a technique called CAM, or Class Activation Maps, and make an average of the scores on the images to train them on the image-level labels,” Fabio explains.
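As a rough illustration of the CAM-style training Fabio describes (not the authors’ code), here is a minimal PyTorch sketch: a 1x1 convolutional classifier produces one activation map per class, the maps are globally averaged into image-level scores, and those scores are supervised with image-level labels only. The backbone choice, class count, and the `CAMSegHead` name are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

class CAMSegHead(nn.Module):
    """Backbone + 1x1 classifier producing class activation maps (CAMs).

    Per-pixel class scores are averaged (global average pooling) into
    image-level scores, so the model can be trained from image-level
    labels alone.
    """
    def __init__(self, num_classes):
        super().__init__()
        backbone = resnet18(weights=None)
        # Drop the average-pool and fully connected head; keep the conv trunk.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        # 1x1 conv maps features to one activation map per class.
        self.classifier = nn.Conv2d(512, num_classes, kernel_size=1)

    def forward(self, x):
        cams = self.classifier(self.features(x))   # (B, C, H', W') per-pixel scores
        image_scores = cams.mean(dim=(2, 3))       # average the maps -> image-level scores
        return cams, image_scores

# Toy training step with image-level (multi-label) supervision only.
model = CAMSegHead(num_classes=20)
images = torch.randn(2, 3, 224, 224)
image_labels = torch.zeros(2, 20)
image_labels[0, 3] = 1.0   # hypothetical: class 3 present in image 0
image_labels[1, 7] = 1.0   # hypothetical: class 7 present in image 1

cams, scores = model(images)
loss = F.binary_cross_entropy_with_logits(scores, image_labels)
loss.backward()

# At inference, the (upsampled, normalized) CAMs serve as coarse localization
# cues that can seed pixel-level pseudo-labels for a segmentation loss.
```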
