Computer Vision News - August 2023

Real-time Deepfake Detection Platform

Ilke Demir is a Senior Staff Research Scientist at Intel Labs, leading the Trusted Media team, which works on manipulated content detection, responsible generative AI, and media provenance.

One of the consequences of AI democratization and the growing emergence of generative models is the rise of deepfakes. Through its work to determine the authenticity of media content, Trusted Media aims not simply to identify artifacts of fakery, such as broken hands and symmetry issues, but to answer a deeper question: Is there an inherent watermark in being human?

“The first thing we look at is blood,” Ilke tells us. “When our heart pumps blood, it goes to our veins, and they change color. That color change is called photoplethysmography (PPG). We collect those PPG signals from the face, look at their temporal, spectral, and spatial correlations, and create PPG maps. On top of those, we train a deep neural network to classify them into fake and real videos!”

This technology is called FakeCatcher, one of several deepfake detectors developed by the team. Others examine whether eye gaze remains consistent over time and whether the motion in a video aligns with natural human movements. Then there is multimodal detection, such as exploring correlations between head movements and voice changes. Deepfakes extend into scene and …
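To make the PPG idea concrete, here is a minimal sketch of the kind of preprocessing the quote describes: averaging skin-pixel color inside face regions over time to get PPG-like traces, then stacking the traces from several regions into a small "PPG map" that a classifier could consume. This is not Intel's FakeCatcher implementation; the function names, the choice of the green channel, the region layout, and the normalization are all illustrative assumptions, and the "video" below is synthetic random data standing in for real frames.

```python
import numpy as np

def extract_ppg_signal(frames, roi):
    """Mean green-channel intensity per frame inside one face region.
    frames: (T, H, W, 3) uint8 array; roi: (y0, y1, x0, x1).
    The green channel is a common (assumed) choice for remote PPG
    because hemoglobin absorption is strongest there."""
    y0, y1, x0, x1 = roi
    return frames[:, y0:y1, x0:x1, 1].mean(axis=(1, 2))

def ppg_map(frames, rois):
    """Stack per-region PPG traces into a 2D map (regions x time),
    normalized per region so only relative temporal variation remains.
    A deep network could then classify such maps as real vs. fake."""
    traces = np.stack([extract_ppg_signal(frames, r) for r in rois])
    traces = traces - traces.mean(axis=1, keepdims=True)
    std = traces.std(axis=1, keepdims=True)
    return traces / np.where(std == 0.0, 1.0, std)

# Toy example: 64 synthetic frames, two face regions (hypothetical layout).
rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(64, 48, 48, 3), dtype=np.uint8)
m = ppg_map(frames, [(8, 24, 8, 24), (24, 40, 8, 24)])
print(m.shape)  # one row of normalized signal per region: (2, 64)
```

In a real pipeline the regions would come from a face landmark detector, and the temporal, spectral, and spatial correlations Ilke mentions would be computed on these traces (e.g. via FFTs and cross-correlations) before or inside the network.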
