Convolutional Neural Networks (CNNs) have been at the forefront of the revolution in computer vision. Since the advent of AlexNet in 2012, networks built on CNN architectures have surpassed human-level performance on many cognitive tasks. As neural networks are integrated into safety-critical applications such as autonomous vehicles, it is essential that they be robust and resilient to errors. Unfortunately, it has recently been observed that deep neural network models are susceptible to adversarial perturbations that are imperceptible to human vision. In this thesis, we propose a defense for neural networks against white-box adversarial attacks. The proposed defense is based on analysis of activation patterns in the frequency domain, and it is evaluated against state-of-the-art techniques on the CIFAR-10 dataset.
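The core idea the abstract names, examining a layer's activation patterns in the frequency domain, can be sketched as follows. This is a minimal, hypothetical illustration using a 2-D FFT on a single activation map; the thesis's actual feature extraction, network layers, and decision rule are not reproduced here, and the array shape, mask size, and the high-frequency-energy summary statistic are all assumptions made for the example.

```python
import numpy as np

# Hypothetical stand-in for one channel of a CNN layer's activations.
# (Shape and random data are assumptions for illustration only.)
rng = np.random.default_rng(0)
activation = rng.standard_normal((32, 32))

# Move to the frequency domain and center the DC component.
spectrum = np.fft.fftshift(np.fft.fft2(activation))
magnitude = np.abs(spectrum)

# Summarize the spectrum by the fraction of energy outside a small
# low-frequency block around the center (an assumed summary statistic).
h, w = magnitude.shape
mask = np.ones((h, w), dtype=bool)
mask[h // 2 - 4:h // 2 + 4, w // 2 - 4:w // 2 + 4] = False
high_freq_ratio = magnitude[mask].sum() / magnitude.sum()
print(f"high-frequency energy ratio: {high_freq_ratio:.3f}")
```

A defense built on this kind of analysis would compare such frequency-domain statistics of activations on clean versus adversarially perturbed inputs; the statistic shown here is only one plausible choice.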
Master of Science (M.S.)
College of Engineering and Computer Science
Master's Thesis (Open Access)
Shah, Sharvil, "Methods For Defending Neural Networks Against Adversarial Attacks" (2022). Electronic Theses and Dissertations, 2020-. 1287.