Computer Vision News - October 2020

For evaluation purposes, attack difficulty is measured as the smallest maximum perturbation required for most of the attacks to succeed. The maximum perturbation size ε is varied from 0.2/255 to 5/255, and the drop in model accuracy is computed on the resulting adversarial examples.

Figure 2: Pipeline of adversarial attack generation

Adversarial Detection

After the adversarial attacks are generated, the authors conduct adversarial detection following the scheme below. Features are extracted from different layers of a DNN in order to discriminate the adversarial examples (the positive class) from the normal, clean examples (the negative class).

Figure 3: Pipeline of adversarial detection
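As a concrete illustration of the evaluation above, here is a minimal sketch of sweeping the maximum perturbation size ε and measuring adversarial accuracy. The section does not name a specific attack, so one-step FGSM stands in for the attacks used; `model` and `loader` are assumed to be a trained PyTorch classifier and a data loader over images scaled to [0, 1].

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step FGSM: move x by eps in the signed-gradient direction of the loss."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    # Clamping assumes inputs live in [0, 1].
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def eps_sweep(model, loader, eps_values):
    """Return adversarial accuracy for each maximum perturbation size eps."""
    model.eval()
    results = {}
    for eps in eps_values:
        correct = total = 0
        for x, y in loader:
            x_adv = fgsm(model, x, y, eps)
            with torch.no_grad():
                correct += (model(x_adv).argmax(dim=1) == y).sum().item()
            total += y.numel()
        results[eps] = correct / total
    return results

# Budgets from the text: eps varies from 0.2/255 to 5/255.
eps_values = [e / 255 for e in (0.2, 0.5, 1, 2, 5)]
# adv_acc = eps_sweep(model, loader, eps_values)
```

The accuracy drop at each ε is then just the clean accuracy minus `adv_acc[eps]`, and the least maximum perturbation can be read off as the smallest ε at which the attack succeeds on most examples.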
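The detection scheme itself can also be sketched in a few lines: intermediate activations are pulled out of the DNN with forward hooks and fed to a binary classifier. The layer names, the global-average pooling, and the logistic-regression detector below are illustrative assumptions, not the authors' exact choices.

```python
import torch
import numpy as np
from sklearn.linear_model import LogisticRegression

def layer_features(model, x, layer_names):
    """Pooled activations of the named layers for a batch x."""
    feats, hooks = {}, []
    def make_hook(name):
        def hook(module, inputs, output):
            # Global-average-pool spatial maps (N, C, H, W) down to (N, C).
            feats[name] = output.flatten(2).mean(dim=2) if output.dim() == 4 else output
        return hook
    for name, module in model.named_modules():
        if name in layer_names:
            hooks.append(module.register_forward_hook(make_hook(name)))
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    return torch.cat([feats[n] for n in layer_names], dim=1)

# Clean examples form the negative class, adversarial examples the positive class.
# layers  = ["layer2", "layer3", "layer4"]        # hypothetical layer names
# f_clean = layer_features(model, x_clean, layers)
# f_adv   = layer_features(model, x_adv, layers)
# X = torch.cat([f_clean, f_adv]).numpy()
# y = np.concatenate([np.zeros(len(f_clean)), np.ones(len(f_adv))])
# detector = LogisticRegression(max_iter=1000).fit(X, y)
```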
