Abstract

Computer vision algorithms, such as those implementing object detection, are known to be susceptible to adversarial attacks. Small, barely perceptible perturbations to the input can cause vision algorithms to misclassify inputs that they would otherwise have classified correctly. A number of approaches have recently been investigated for generating such adversarial examples for deep neural networks. Many of these approaches either require grey-box access to the deep neural network being attacked or rely on adversarial transfer and grey-box access to a surrogate neural network. In this thesis, we present an approach to the synthesis of adversarial examples for computer vision algorithms that requires only black-box access to the algorithm being attacked. Our attack employs fuzzing with features derived from the layers of a convolutional neural network trained on adversarial examples from an unrelated dataset. Based on our experimental results, we believe that our validation approach will enable designers of cyber-physical systems and other high-assurance use cases of vision algorithms to stress-test their implementations.
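
To make the idea concrete, the sketch below illustrates one way such a feature-guided black-box fuzzing loop could be structured. It is a minimal illustration, not the method evaluated in the thesis: the query_target and feature_extractor callables are hypothetical stand-ins for the black-box model under attack and for the auxiliary CNN trained on adversarial examples, and the mutation and scoring choices are assumptions made for the example.

import numpy as np

def fuzz_attack(x, true_label, query_target, feature_extractor,
                eps=0.05, step=0.01, n_iters=1000, pool_size=20, seed=0):
    """Randomly mutate the input within a small L-inf ball, keeping the
    candidate whose auxiliary-CNN features drift farthest from the clean
    input, until the black-box target misclassifies a candidate."""
    rng = np.random.default_rng(seed)
    base_feats = feature_extractor(x)
    best, best_dist = x.copy(), 0.0
    for _ in range(n_iters):
        for _ in range(pool_size):
            # Small random mutation, kept within an L-inf ball of radius eps
            # around the original image and within the valid pixel range [0, 1].
            cand = best + rng.uniform(-step, step, size=x.shape)
            cand = np.clip(cand, np.maximum(x - eps, 0.0), np.minimum(x + eps, 1.0))
            # Black-box success check: only the predicted label is observed.
            if query_target(cand) != true_label:
                return cand
            # Feature-space distance from the clean input guides the fuzzer.
            dist = np.linalg.norm(feature_extractor(cand) - base_feats)
            if dist > best_dist:
                best, best_dist = cand, dist
    return None  # no adversarial example found within the query budget

In a stress-testing setting, query_target would wrap the deployed vision algorithm's prediction interface, and feature_extractor would expose activations from an intermediate layer of the separately trained CNN; both are assumptions of this sketch rather than details drawn from the thesis.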

Notes

If this is your thesis or dissertation and you want to learn how to access it, or if you would like more information about readership statistics, contact us at STARS@ucf.edu.

Graduation Date

2017

Semester

Fall

Advisor

Jha, Sumit Kumar

Degree

Master of Science (M.S.)

College

College of Engineering and Computer Science

Department

Computer Science

Degree Program

Computer Science

Format

application/pdf

Identifier

CFE0006898

URL

http://purl.fcla.edu/fcla/etd/CFE0006898

Language

English

Release Date

December 2017

Length of Campus-only Access

None

Access Status

Masters Thesis (Open Access)
