Neural networks (NNs) have become a central component of most machine learning systems. However, studies have shown that these models are not robust to adversarial attacks. In this dissertation, we therefore explore four directions. In the first, we investigate adversarial attacks on two hierarchical classification (HC) models: the Flat HC (FHC) and the Top-Down HC (TDHC). In particular, we formulate attacks against these models using convex programming. Experimental results show that FHCs are more robust than TDHCs. Second, we formalize a new notion of coarse robustness, defined with respect to a specified grouping of the class labels. We propose a training mechanism that incorporates the coarse label information in addition to the fine labels, and we show, both empirically and theoretically, that this mechanism improves the proposed notion of coarse robustness. The third direction is the Bidirectional One-Shot Synthesis (BOSS) problem for synthesizing adversarial examples using structures similar to generative adversarial networks; unlike such networks, however, BOSS requires no training data. In particular, we explore solutions in which the generated data must simultaneously satisfy user-defined input and output constraints. We prove that the BOSS problem is NP-complete and verify experimentally that our method outperforms or performs on par with state-of-the-art methods. Finally, in the fourth direction, we extend the synthesis of adversarial attacks to solving the Maximum Independent Set (MIS) problem. This is accomplished by presenting NN structures derived from the graph in which an MIS is sought, where no data is required to train the neural networks that produce the solution. Experimental results on various graphs demonstrate that our proposed method performs on par with or outperforms state-of-the-art learning-based methods without requiring any training data.
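To make the dataless idea behind the fourth direction concrete: the abstract describes neural-network structures built from the graph itself, optimized without any training data, so that the optimized parameters encode an independent set. The dissertation's exact architecture is not reproduced here; the following is a minimal illustrative sketch of the general approach, using an assumed differentiable relaxation (minimize L(x) = -Σᵢ xᵢ + β Σ₍ᵢ,ⱼ₎∈E xᵢxⱼ over the box [0,1]ⁿ via projected gradient descent), with the penalty weight `beta`, step size, and function name all being hypothetical choices, not the author's method.

```python
import random

def mis_relaxation(n, edges, beta=2.0, lr=0.05, steps=2000, seed=0):
    """Illustrative dataless sketch: optimize graph-derived parameters,
    with no training data, so the result encodes an independent set.
    The relaxation and all hyperparameters here are assumptions."""
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n)]  # one parameter per vertex
    for _ in range(steps):
        # Gradient of L(x): dL/dx_i = -1 + beta * (sum of x_j over neighbors j).
        grad = [-1.0] * n
        for i, j in edges:
            grad[i] += beta * x[j]
            grad[j] += beta * x[i]
        # Projected gradient step back onto the box [0, 1]^n.
        x = [min(1.0, max(0.0, xi - lr * g)) for xi, g in zip(x, grad)]
    # Round to a vertex set, then greedily repair any violated edges
    # so the returned set is guaranteed independent.
    s = {i for i in range(n) if x[i] > 0.5}
    for i, j in edges:
        if i in s and j in s:
            s.discard(j)
    return s
```

At a box-constrained local minimum, a vertex whose neighbors are all at 0 is pushed to 1 and a vertex with an active neighbor is pushed to 0, so the relaxation tends to settle on (maximal) independent sets; the final repair pass makes independence unconditional.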
Doctor of Philosophy (Ph.D.)
College of Engineering and Computer Science
Electrical and Computer Engineering
Doctoral Dissertation (Open Access)
Alkhouri, Ismail, "Adversarial Attacks, Coarse Robustness, and Dataless Neural Networks: Novel Techniques for Improved Classification and Combinatorial Optimization" (2023). Electronic Theses and Dissertations, 2020-. 1500.