Computer Vision News - February 2022

Auditing Saliency Cropping Algorithms

Abeba says her research is interdisciplinary, ranging from the study of cognition to constructing cognitive models. More recently, she has been exploring audit work, auditing large image, text, and multimodal datasets. “I’m getting more and more into auditing datasets, but this is not by choice, it’s more borne out of frustration!” she reveals. “With AI, because there’s so much focus on producing the flashiest state-of-the-art models, people tend to ignore the datasets underlying these models. But they’re crucial in how accurate and how well your model performs.”

The team also asked why platforms deploy cropping algorithms in the first place and found no consistency in the responses. “Some of the reasons were even contradictory,” she noted. “There’s no scientifically grounded reason for creating and deploying cropping algorithms on major platforms, because they’re all over the place and the science is very shaky.”

Twitter carried out its own audit and published a paper just before the team released a version of this work last year, which was accepted to the BeyondFairCV workshop at CVPR 2021. It claims to have stopped using the algorithm on its platform. Abeba says that remains to be tested, but recognizes Twitter as an exemplar overall because they are doing the required work, while companies like Apple, Google, and Facebook are still a closed book. “It’s encouraging that Twitter have opened up the data, but they haven’t answered many questions about the data,” she argues. “It’s great they’re carrying out their own audits, but it’s important to let external auditors look at their code and algorithms. We need greater transparency. Maybe regulation is needed too, to open source these algorithms. Our work marks the first audit work, but there is so much more left to do.”

We asked co-author (and old friend of the magazine) Vinay Prabhu to say a word about the computer vision work done for this research, and he told us that the team has made the MGL-336 test-set accessible here. “We are excited to see how the researchers working on saliency estimation and saliency cropping in the computer vision community will utilize this test-set in their academic explorations,” he added. “Besides this, during the course of WACV 2022, we received plentiful feedback from computer vision researchers hailing from the two worlds of saliency research, and we have summarized the resultant ideas and future directions of research on my GitHub. We are hopeful that the computer vision community at large will take some interest in these exciting leads.”
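For readers unfamiliar with the kind of pipeline under audit, the sketch below shows what a generic saliency-based crop typically involves: estimate a saliency map, then cut a fixed-size window around its peak. This is only an illustration under broad assumptions (it uses OpenCV’s spectral-residual saliency estimator from opencv-contrib-python and a hypothetical example.jpg); it is not the algorithm used by Twitter or any other platform discussed above.

# Minimal, generic sketch of saliency-based cropping (illustration only;
# not any audited platform's implementation).
# Requires opencv-contrib-python for the cv2.saliency module.
import cv2
import numpy as np

def saliency_crop(image, out_w=400, out_h=200):
    """Crop a fixed-size window centred on the most salient point."""
    # Spectral-residual static saliency: a classic, training-free estimator.
    estimator = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = estimator.computeSaliency(image)
    if not ok:
        raise RuntimeError("saliency computation failed")

    # Location of the strongest saliency response.
    _, _, _, (x, y) = cv2.minMaxLoc(sal_map.astype(np.float32))

    # Centre the crop window on that point, clamped to the image bounds.
    h, w = image.shape[:2]
    x0 = int(np.clip(x - out_w // 2, 0, max(w - out_w, 0)))
    y0 = int(np.clip(y - out_h // 2, 0, max(h - out_h, 0)))
    return image[y0:y0 + out_h, x0:x0 + out_w]

if __name__ == "__main__":
    img = cv2.imread("example.jpg")  # hypothetical input image
    cv2.imwrite("example_cropped.jpg", saliency_crop(img))

The point of the sketch is simply that the crop is driven entirely by the saliency estimator, which is why biases in that estimator propagate directly into what gets shown and what gets cut away.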
