WACV 2024 Daily - Sunday

Uncertainty Quantification

Deep learning techniques have become increasingly popular in recent years, demonstrating remarkable results across various domains. Until recently, the only question that tended to be asked of them was: can we improve their performance? However, a new critical question has emerged with the rapidly evolving landscape of deep neural networks: are they reliable?

“It turns out they’re not totally reliable,” Gianni tells us. “They have good results, but sometimes they have wrong results, and we don’t know why. It’s important to know if we can trust them. A colleague of mine once tried ChatGPT. He asked, ‘Can we buy – and he invented a word – in a supermarket?’ ChatGPT said, ‘Yes, of course, we can buy it!’ It can hallucinate a new word, but it’s a fake word. With a reliable confidence score, we could say, ‘I don’t know this word.’ In some sense, we could know whether we can trust ChatGPT or whatever.”

The community’s focus has traditionally been on accuracy and on improving the performance of neural networks through increased size, architectural changes, and larger datasets. Reliability has not previously been a prominent concern, with very few papers on the topic and no dedicated track at most major computer vision conferences. Gianni aims to rectify this in his hands-on tutorial by providing a comprehensive introduction to uncertainty theory and its practical applications.

“During this tutorial, we’ll introduce all the basics,” he reveals. “Also, with Google Colab
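As a rough illustration of the kind of confidence score Gianni describes, the Python sketch below (our own minimal example, not material from the tutorial; the 0.7 threshold is an arbitrary illustrative choice) turns a classifier’s raw logits into a softmax probability and a predictive entropy, and flags predictions the model should not be trusted on.

import numpy as np

def softmax(logits):
    # Convert raw logits to a probability distribution (max subtracted for numerical stability).
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def uncertainty_report(logits, threshold=0.7):
    # Return the predicted class, its softmax confidence, the predictive entropy,
    # and whether the prediction clears a simple confidence threshold.
    probs = softmax(np.asarray(logits, dtype=float))
    pred = int(np.argmax(probs))
    confidence = float(probs[pred])
    entropy = float(-np.sum(probs * np.log(probs + 1e-12)))  # higher entropy = more uncertain
    return {
        "prediction": pred,
        "confidence": confidence,
        "entropy": entropy,
        "trusted": confidence >= threshold,  # False is the model saying "I don't know"
    }

# Confident prediction: one logit clearly dominates.
print(uncertainty_report([5.0, 0.2, -1.3]))
# Uncertain prediction: the logits are nearly tied, so the model should abstain.
print(uncertainty_report([0.9, 1.0, 0.8]))

A plain softmax score like this is known to be overconfident in practice, which is precisely the gap that dedicated uncertainty quantification methods aim to close.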
