With the advancement of accelerated hardware in recent years, there has been a surge in the development and application of intelligent systems. Deep learning systems in particular have shown exciting results on a wide range of tasks: classification, detection, and recognition. Despite these remarkable achievements, increasing the robustness of these systems in critical domains remains an active research area, as deep learning algorithms have proven brittle against adversarial attacks: carefully crafted adversarial inputs can consistently trigger an erroneous prediction from a network model. This motivates the present dissertation: we study prominent adversarial attacks to build an understanding of the blind spots in this class of algorithms. We then leverage network interpretability methods to propose a computational model that quantifiably measures the confidence score of deep neural networks (DNNs). This method, codenamed network attribution confidence (NAC), computes the derivative of neuron activation changes to assign scores to input features. This confidence metric enables us to develop GAAD, a framework that serves as an attack detector. Building on these fundamental intuitions, we explore the area of explainable artificial intelligence (XAI), expanding the literature by proposing a novel method for visually interpretable concept-based explanations (VICE). We validated our findings on various deep learning models and benchmark datasets, achieving state-of-the-art accuracy.
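To give a flavor of the attribution idea described above, the following is a minimal sketch of gradient-based feature scoring (gradient-times-input) on a tiny hand-differentiated one-layer network. All names here (`attribution_scores`, the example weights) are illustrative assumptions; this shows the general mechanism of deriving per-feature scores from activation derivatives, not the dissertation's exact NAC method.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attribution_scores(x, w, b):
    """Score each input feature i by (d f / d x_i) * x_i,
    where f(x) = sigmoid(w . x + b).

    Illustrative gradient-times-input attribution, not the
    dissertation's NAC method."""
    z = w @ x + b
    y = sigmoid(z)
    grad = y * (1.0 - y) * w   # chain rule: sigmoid'(z) * d z / d x
    return grad * x            # gradient-times-input score

# Hypothetical example inputs and weights
x = np.array([1.0, 0.5, -2.0])
w = np.array([0.3, -0.8, 0.1])
b = 0.0
scores = attribution_scores(x, w, b)
```

Features with large-magnitude scores are those whose perturbation most changes the activation, which is the kind of signal a confidence metric over input features can aggregate.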
Doctor of Philosophy (Ph.D.)
College of Engineering and Computer Science
Doctoral Dissertation (Open Access)
Michel, Andy, "Towards Enabling Explanation in Safety-Critical Artificial Intelligence Systems" (2021). Electronic Theses and Dissertations, 2020-. 1337.