CVPR Daily - Thursday

Ozan Özdenizci is a postdoctoral researcher at the Institute of Theoretical Computer Science at the Graz University of Technology in Austria, working under the supervision of Robert Legenstein. He is also affiliated with Silicon Austria Labs in Graz. His work explores the security of state-of-the-art deep neural networks. He speaks to us ahead of his oral presentation today.

Deep neural networks running on hardware in the physical world are vulnerable to hardware fault attacks carried out through fault-injection methods. These attacks, which have so far been unpreventable, manipulate a network by targeting the memory where its weight parameters are stored. Ozan and his co-author and advisor Robert Legenstein propose a simple approach that tackles this open problem from a defensive angle for the first time, aiming to significantly improve the robustness and security of these networks.
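As a purely illustrative aside (not the method described in the paper), the short sketch below shows why such a fault matters: flipping a single bit in the stored IEEE-754 float32 representation of one weight can change its value by dozens of orders of magnitude, which is enough to corrupt a layer's output. The function name flip_bit and the chosen bit position are hypothetical, for demonstration only.

```python
import struct

def flip_bit(weight: float, bit_index: int) -> float:
    """Flip one bit in the IEEE-754 float32 representation of a weight."""
    # Reinterpret the float32 weight as a 32-bit unsigned integer.
    (as_int,) = struct.unpack("<I", struct.pack("<f", weight))
    # Flip the chosen bit (0 = least significant mantissa bit, 31 = sign bit).
    as_int ^= 1 << bit_index
    # Reinterpret the modified bits as a float32 again.
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int))
    return flipped

# Flipping the most significant exponent bit (bit 30) of a small weight
# turns it into an enormous value.
w = 0.05
print(flip_bit(w, 30))  # ~1.7e+37 instead of 0.05
```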
