Keywords
Image Authentication, Manipulation Detection, Adversarial Attacks Detection, Fake News Detection, Cybersecurity, Computer Vision
Abstract
The rapid advance of communication technology has been accompanied by a surge of disinformation in many forms; manipulated images are among the most prominent examples and can reach large numbers of users. Such content can severely affect public behavior, attitudes, and beliefs, or sway viewers' perceptions in malicious or benign directions. In addition, adversarial attacks targeting deep learning models pose a severe risk to computer vision applications. This dissertation explores ways of detecting and resisting manipulated and adversarially perturbed images. The first contribution evaluates perceptual hashing (pHash) algorithms for detecting image manipulation on social media platforms such as Facebook and Twitter. The study demonstrates the differences in image processing between the two platforms and proposes a new approach for finding the optimal detection threshold for each algorithm. The second contribution develops a new pHash-based authentication method for detecting fake imagery on social media networks, using a self-supervised learning framework with a contrastive loss. In addition, a fake-image sample generator is developed to cover three major image manipulation operations (copy-move, splicing, and removal). The proposed authentication technique outperforms state-of-the-art pHash methods. The third contribution addresses the challenge of adversarial attacks on deep learning models. A new adversarial-aware deep learning system is proposed that uses a classical machine learning model as a secondary verification system to complement the primary deep learning model in image classification. The proposed approach outperforms current state-of-the-art adversarial defense systems. Finally, the fourth contribution fuses big data from extra-military sources to support military decision-making. The study proposes a workflow; reviews data availability, security, privacy, and integrity challenges; and suggests solutions. A demonstration of the proposed image authentication is presented to prevent erroneous decisions and strengthen data integrity. Overall, the dissertation provides practical solutions for detecting manipulated and adversarial images and integrates the proposed solutions into a workflow that supports military decision-making.
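As a rough illustration of the perceptual-hashing comparison described in the abstract (a minimal sketch, not the dissertation's specific algorithm or threshold-selection method), the snippet below compares the pHash of an original image with that of a copy retrieved from a social platform and flags possible manipulation when the Hamming distance exceeds a chosen threshold. The imagehash and Pillow packages, the file names, and the threshold value are assumptions made for illustration only.

```python
# Illustrative sketch: pHash comparison with a Hamming-distance threshold.
# Assumes the third-party "imagehash" and "Pillow" packages; the paths and
# the threshold of 10 bits are placeholders, not values from the dissertation.
from PIL import Image
import imagehash


def is_manipulated(original_path: str, shared_path: str, threshold: int = 10) -> bool:
    """Return True if the perceptual-hash distance suggests manipulation."""
    original_hash = imagehash.phash(Image.open(original_path))
    shared_hash = imagehash.phash(Image.open(shared_path))
    distance = original_hash - shared_hash  # Hamming distance between 64-bit hashes
    return distance > threshold


if __name__ == "__main__":
    # Example usage with placeholder file names.
    print(is_manipulated("original.jpg", "downloaded_from_social_media.jpg"))
```

A small distance tolerates benign re-encoding (for example, the platform-specific compression the first contribution measures), while a large distance indicates content changes; choosing the per-algorithm threshold that separates the two cases is the problem the dissertation's first contribution addresses.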
Completion Date
2023
Semester
Fall
Committee Chair
Zou, Changchun
Degree
Doctor of Philosophy (Ph.D.)
College
College of Engineering and Computer Science
Department
Electrical and Computer Engineering
Degree Program
Computer Engineering
Format
application/pdf
Identifier
DP0028004
URL
https://purls.library.ucf.edu/go/DP0028004
Language
English
Release Date
December 2024
Length of Campus-only Access
1 year
Access Status
Doctoral Dissertation (Campus-only Access)
Campus Location
Orlando (Main) Campus
STARS Citation
Alkhowaiter, Mohammed, "Detecting Manipulated and Adversarial Images: A Comprehensive Study of Real-world Applications" (2023). Graduate Thesis and Dissertation 2023-2024. 53.
https://stars.library.ucf.edu/etd2023/53
Restricted to the UCF community until December 2024; it will then be open access.