Computer Vision News - December 2022

Eleonora Giunchiglia

How can we make deep learning models safer? That is the question Eleonora poses in her award-winning paper. The answer, she says, is to start writing requirements for them as we do for standard software.

“In standard software, you have a phase where you write the requirements, and then you write software compliant with the requirements,” she explains. “I would like to do the same for deep learning, but you need a dataset to create deep learning models with requirements. Therefore, we propose the first dataset for autonomous driving with requirements.”

Eleonora and her team annotated the dataset with requirements expressed as logical constraints but found that, by taking a purely data-driven approach, irrespective of the thresholds used, 89% of the predictions violated the requirements.

“We annotated in the hundreds, so I was expecting some violations of the requirements, but maybe 15-20% – when I saw 89%, I was in shock!” she recalls. “I remember checking the results for a week and rerunning the entire pipeline multiple times because I couldn’t believe it. When working with logical constraints, most people annotate just one or two, and it’s normal for a neural network to violate these requirements once or twice. With hundreds of requirements, the percentage increases naturally, but 89% was a surprise!”

To solve this, Eleonora proposed some basic approaches to incorporate these constraints into the training or post-processing phases, making the deep learning models compliant with the requirements without dropping their performance.
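To make the idea concrete, here is a minimal sketch of what checking predictions against logical requirements, and post-processing them into compliance, might look like. The label names, the implication-style constraints, and the forward-chaining correction are illustrative assumptions for this article, not the exact formulation or method from Eleonora’s paper.

```python
# Hypothetical requirements over binary action labels, written as implications:
# if every label in `premise` is predicted, `conclusion` must be predicted too.
REQUIREMENTS = [
    ({"traffic_light_red"}, "stopped"),            # assumed example requirement
    ({"pedestrian", "crossing_road"}, "braking"),  # assumed example requirement
]


def violated(prediction: set[str]) -> list[int]:
    """Return the indices of the requirements this prediction violates."""
    return [
        i
        for i, (premise, conclusion) in enumerate(REQUIREMENTS)
        if premise <= prediction and conclusion not in prediction
    ]


def enforce(prediction: set[str]) -> set[str]:
    """Post-process a prediction into compliance by forward chaining:
    keep adding missing conclusions until no requirement is violated."""
    fixed = set(prediction)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in REQUIREMENTS:
            if premise <= fixed and conclusion not in fixed:
                fixed.add(conclusion)
                changed = True
    return fixed


# A thresholded prediction that fires the first requirement:
pred = {"traffic_light_red", "moving"}
print(violated(pred))           # -> [0]: the model ignored the red light
print(violated(enforce(pred)))  # -> []: compliant after post-processing
```

Measuring the fraction of test predictions for which `violated` is non-empty is one way to arrive at a violation rate like the 89% figure above; the same constraints can instead be turned into a penalty term during training, the other direction the article mentions.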
