Computer vision algorithms, such as those implementing object detection, are known to be susceptible to adversarial attacks: small, barely perceptible perturbations to the input can cause a vision algorithm to misclassify inputs that it would otherwise have classified correctly. A number of approaches for generating such adversarial examples against deep neural networks have recently been investigated. Many of these approaches either require grey-box access to the deep neural network being attacked or rely on adversarial transfer and grey-box access to a surrogate neural network. In this thesis, we present an approach to synthesizing adversarial examples for computer vision algorithms that requires only black-box access to the algorithm under attack. Our attack employs fuzzing guided by features derived from the layers of a convolutional neural network trained on adversarial examples from an unrelated dataset. Based on our experimental results, we believe this validation approach will enable designers of cyber-physical systems and other high-assurance applications of vision algorithms to stress-test their implementations.
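The abstract only sketches the method, but the loop it describes, randomly mutating an input within a small perturbation budget while using activations from a separately trained CNN to rank candidate mutations, can be illustrated with a minimal sketch. The names `target_classify` and `guide_features` below are hypothetical placeholders for the black-box vision algorithm under attack and the adversarially trained feature extractor; the budgets, population size, and drift-based scoring are assumptions for illustration, not the thesis's actual implementation.

```python
import numpy as np

def fuzz_attack(image, target_classify, guide_features,
                eps=8 / 255, n_iters=500, pop=16, rng=None):
    """Feature-guided black-box fuzzing (illustrative sketch).

    Repeatedly perturbs the input within an L-infinity budget `eps`,
    keeping the candidate whose guidance features drift farthest from
    the clean image's features, until the target's label flips.
    `target_classify` and `guide_features` are assumed callables: the
    black-box algorithm under attack and a CNN feature extractor.
    """
    rng = rng or np.random.default_rng(0)
    clean_label = target_classify(image)
    clean_feats = guide_features(image)
    best, best_drift = image, 0.0
    for _ in range(n_iters):
        # Mutate the current best seed with small random noise, then
        # clip back into the perturbation budget and valid pixel range.
        noise = rng.uniform(-eps / 4, eps / 4, size=(pop,) + image.shape)
        cands = np.clip(best + noise, image - eps, image + eps)
        cands = np.clip(cands, 0.0, 1.0)
        for cand in cands:
            if target_classify(cand) != clean_label:
                return cand  # misclassification found
            drift = np.linalg.norm(guide_features(cand) - clean_feats)
            if drift > best_drift:
                best, best_drift = cand, drift  # keep most promising seed
    return None  # budget exhausted without finding an adversarial example
```

Note that only `target_classify` is queried as a black box; the guidance network is assumed to be fully available to the attacker, which is consistent with it being trained on an unrelated dataset.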
Advisor: Jha, Sumit Kumar
Degree: Master of Science (M.S.)
College: College of Engineering and Computer Science
Document Type: Masters Thesis (Open Access)
Recommended Citation: Michel, Andy, "Adversarial Attacks On Vision Algorithms Using Deep Learning Features" (2017). Electronic Theses and Dissertations, 2004-2019. 5675.