Abstract

Convolutional Neural Networks (CNNs) have been at the forefront of the revolution in computer vision. Since the advent of AlexNet in 2012, neural networks built on CNN architectures have surpassed human-level performance on many cognitive tasks. As neural networks are integrated into safety-critical applications such as autonomous vehicles, it is essential that they are robust and resilient to errors. Unfortunately, it has recently been observed that deep neural network models are susceptible to adversarial perturbations that are imperceptible to human vision. In this thesis, we propose a defense for neural networks against white-box adversarial attacks. The proposed defense is based on activation pattern analysis in the frequency domain. The technique is evaluated and compared with state-of-the-art techniques on the CIFAR-10 dataset.
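
The abstract only names the defense; as a rough illustration, the Python/NumPy sketch below shows one way a frequency-domain analysis of activation patterns could flag perturbed inputs. The function names, the FFT-energy statistic, the cutoff, and the threshold are hypothetical assumptions made for illustration and are not taken from the thesis itself.

import numpy as np

# Hypothetical sketch: compare the high-frequency energy of an
# activation map against a reference computed on clean data.
def high_freq_energy(activation_map, cutoff=4):
    # Fraction of spectral energy outside a small low-frequency box
    # centered on the DC component of the 2-D FFT.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(activation_map))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    low = spectrum[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff].sum()
    return (spectrum.sum() - low) / spectrum.sum()

def looks_adversarial(activation_map, reference_energy, threshold=0.15):
    # Flag the input if its high-frequency energy deviates from the
    # clean-data reference by more than the (assumed) threshold.
    return abs(high_freq_energy(activation_map) - reference_energy) > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.linspace(0, 2 * np.pi, 32, endpoint=False)
    clean = np.outer(np.sin(x), np.cos(x))               # smooth stand-in activation map
    perturbed = clean + 0.3 * rng.normal(size=(32, 32))  # noisy stand-in for a perturbed map
    ref = high_freq_energy(clean)
    print(looks_adversarial(clean, ref))      # False: matches the reference exactly
    print(looks_adversarial(perturbed, ref))  # True for this toy perturbation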

Graduation Date

2022

Semester

Summer

Advisor

Ewetz, Rickard

Degree

Master of Science (M.S.)

College

College of Engineering and Computer Science

Department

Computer Science

Degree Program

Computer Science

Identifier

CFE0009258; DP0026862

URL

https://purls.library.ucf.edu/go/DP0026862

Language

English

Release Date

August 2022

Length of Campus-only Access

None

Access Status

Masters Thesis (Open Access)
