Recent advances in Artificial Intelligence (AI), particularly Generative Adversarial Networks (GANs), together with an abundance of training samples and robust computational resources, have significantly propelled AI-generated fake information of all kinds, e.g., deepfakes. Deepfakes are among the most sinister types of misinformation, posing large-scale and severe security and privacy risks to critical governmental institutions and ordinary people across the world. Because deepfakes are AI-generated digital content rather than actual events captured by a camera, they can still be detected by advanced AI models. Although the deepfake detection task has gained massive attention in the past few years, mainstream detection frameworks rely chiefly on Convolutional Neural Networks (CNNs). In deepfake detection, it is critically important to identify forged pixels in order to extract more discriminative features in a scalable manner. One of our earlier works demonstrated that the performance of CNN models can be improved through attention-based mechanisms, which force the model to learn more discriminative features. Although CNNs have proven to be solid candidates for learning local image information, they still fail to capture pixels' spatial interdependence because of their constrained receptive fields. While CNNs struggle to learn relative spatial information and lose essential data in pooling layers, the global attention mechanism of vision transformers enables the network to learn higher-level information much faster. Therefore, this dissertation presents a multi-stream deepfake detection framework that combines pixels' spatial interdependence in a global context with local image features in a scalable scheme, exploiting the unique ability of transformer models to learn global relationships among pixels.
Furthermore, this work proposes a framework at the intersection of graph theory, attention analysis, and vision transformers to overcome the shortcomings of previous approaches. The successful outcome of this study will enable better detection of deepfakes at a lower computational cost than previous studies.
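The contrast drawn above between local convolutional features and the global self-attention of vision transformers can be sketched in a minimal two-stream example. This is an illustrative toy in NumPy, not the dissertation's actual architecture: the kernel and attention projections are random and untrained, and the patch size, image size, and concatenation fusion are assumptions made only for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_conv_features(x, kernel):
    """Single-channel 'valid' 2D convolution: captures local pixel
    patterns, but each output sees only a small receptive field."""
    kh, kw = kernel.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def global_self_attention(tokens):
    """Single-head self-attention: every patch token attends to every
    other token, modeling global spatial interdependence in one step."""
    d = tokens.shape[-1]
    q = k = v = tokens  # identity (untrained) projections for the sketch
    scores = q @ k.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy 8x8 grayscale "image"
img = rng.standard_normal((8, 8))

# Stream 1: local CNN-style features (3x3 receptive field -> 6x6 map)
local = local_conv_features(img, rng.standard_normal((3, 3))).ravel()

# Stream 2: global transformer-style features over 2x2 patch tokens
tokens = img.reshape(4, 2, 4, 2).transpose(0, 2, 1, 3).reshape(16, 4)
global_feat = global_self_attention(tokens).ravel()

# Fusion: concatenate both streams into one feature vector for a classifier head
fused = np.concatenate([local, global_feat])
print(fused.shape)  # prints (100,)
```

The point of the sketch is the receptive-field asymmetry: each entry of `local` depends on at most nine neighboring pixels, while each row of the attention output is a weighted mixture of all sixteen patch tokens, i.e., of the entire image.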



Graduation Date

Advisor

Yuan, Jiann-Shiun


Degree

Doctor of Philosophy (Ph.D.)


College

College of Engineering and Computer Science


Department

Electrical and Computer Engineering

Degree Program

Computer Engineering


Identifier

CFE0009830; DP0027771

Release Date

June 2024

Length of Campus-only Access

1 year

Access Status

Doctoral Dissertation (Open Access)