Defense-GAN & Physical Adversarial Examples

Deep Learning

Abstract

This project discusses the transferability of state-of-the-art defense techniques against adversarial examples for deep learning systems into the physical domain. The paper explores adversarial attacks using the Fast Gradient Sign Method (FGSM), Carlini & Wagner (CW), and DeepFool to generate adversarial images that are presented to the classifier both as digital images and as physically transformed images. Furthermore, we present novel results demonstrating the effectiveness of the state-of-the-art Defense-GAN technique at reconstructing images that have undergone the physical transformation, with a significant portion of the adversarial noise filtered out. We also show that, for finer adversarial attacks, the physical transformation itself causes a high degree of adversarial destruction, calling into question the need for additional defenses.
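At the heart of the defense named above is the Defense-GAN projection step: before classification, the input is replaced by the closest image in the range of a pre-trained generator, found by gradient descent over the latent code with several random restarts. The following is a minimal PyTorch sketch of that step, not an exact implementation; the generator `G` (assumed to be a frozen `nn.Module`), the latent dimensionality, and the restart/step/learning-rate values are illustrative assumptions.

```python
import torch

def defense_gan_reconstruct(x, G, latent_dim=100, n_restarts=10,
                            n_steps=200, lr=0.05):
    """Project a (possibly adversarial) image x onto the range of a
    pre-trained generator G by minimising ||G(z) - x||^2 over the
    latent code z, keeping the best of several random restarts."""
    for p in G.parameters():              # the generator stays frozen
        p.requires_grad_(False)
    best_loss, best_rec = float("inf"), None
    for _ in range(n_restarts):
        z = torch.randn(1, latent_dim, requires_grad=True)  # random restart
        opt = torch.optim.SGD([z], lr=lr, momentum=0.7)
        for _ in range(n_steps):          # gradient descent on the latent code
            opt.zero_grad()
            torch.sum((G(z) - x) ** 2).backward()
            opt.step()
        with torch.no_grad():             # score this restart's reconstruction
            rec = G(z)
            loss = torch.sum((rec - x) ** 2).item()
        if loss < best_loss:
            best_loss, best_rec = loss, rec
    return best_rec
```

The classifier is then run on the returned reconstruction rather than the raw input, so any perturbation the generator cannot reproduce is filtered out.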

Adversarial Examples

Attacks Used to Generate Physical Adversarial Examples
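Of the three attacks, FGSM is the simplest: it takes a single step of size ε in the direction of the sign of the loss gradient with respect to the input, x_adv = x + ε · sign(∇x J(θ, x, y)). Below is a minimal PyTorch sketch, assuming a differentiable classifier `model` and pixel values in [0, 1]; the `eps` default is illustrative.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.1):
    """Fast Gradient Sign Method: perturb x by eps in the direction of
    the sign of the gradient of the loss w.r.t. the input."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()        # one signed-gradient step
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range
```

CW and DeepFool follow the same gradient-based pattern but instead solve an optimization for a minimal perturbation, which is why they produce the finer attacks referred to in the abstract.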

Experimental Setup

  • NVIDIA Tesla P4 - 8 GB GDDR5 GPU memory
  • 13 GB of RAM
  • 2 vCPUs
  • Logitech c922x Webcam
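The webcam performs the physical transformation: the adversarial image is displayed (or printed) and re-captured before being passed to the classifier. The following is a minimal capture sketch using OpenCV; the device index, resolution, and output filename are assumptions for illustration.

```python
import cv2

# Hypothetical capture step: the adversarial image is shown on a screen
# and re-captured through the webcam, which applies the physical
# transformation before classification.
cap = cv2.VideoCapture(0)                  # webcam device index (assumed)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
ok, frame = cap.read()                     # one BGR frame from the camera
cap.release()
if ok:
    cv2.imwrite("captured_adversarial.png", frame)
```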
Karthik Bhaskar