Abstract
Representation learning is a fundamental pillar of artificial intelligence, enabling models to extract and encode meaningful patterns from complex data into compact and informative representations. As a driving force behind the success of deep learning, effective representation learning empowers a wide array of applications, from computer vision and natural language processing to speech recognition and reinforcement learning, thereby advancing the capabilities of intelligent systems and their impact on society. Significant progress has been made by training Deep Neural Networks (DNNs) on large, carefully curated datasets. However, the growing demand for labeled data is expensive and time-consuming to meet. As an alternative, semi-supervised and unsupervised learning approaches can produce high-quality representations without extensive labeling. In the first topic of this dissertation, we focus on learning representations in a semi-supervised or self-supervised manner. Our research introduces a semi-supervised two-stage model that learns directly from noisy labels and acquires high-quality representations for image classification tasks. Additionally, we propose a self-supervised model with an improved loss function for the audio-visual speaker diarization problem. In the next topic, we explore the transferability of pre-trained DNNs to downstream tasks. We propose a relation-transfer architecture that achieves domain adaptation for the referring expression grounding problem. We further investigate the robustness of learned representations and propose attack-defendable neural network architectures to defend DNNs against adversarial attacks. Finally, we calibrate the outputs of deep neural networks to improve the quality of their uncertainty estimates. The dissertation also compares the proposed methods with other state-of-the-art approaches in experiments. Overall, our research has the potential to enhance the efficiency, transferability, interpretability, and security of DNNs, contributing to the development of more powerful and trustworthy artificial intelligence systems.
Notes
If this is your thesis or dissertation and you want to learn how to access it, or for more information about readership statistics, contact us at STARS@ucf.edu.
Graduation Date
2023
Semester
Spring
Advisor
Wang, Liqiang
Degree
Doctor of Philosophy (Ph.D.)
College
College of Engineering and Computer Science
Department
Computer Science
Degree Program
Computer Science
Identifier
CFE0009853; DP0028132
URL
https://purls.library.ucf.edu/go/DP0028132
Language
English
Release Date
November 2024
Length of Campus-only Access
1 year
Access Status
Doctoral Dissertation (Campus-only Access)
STARS Citation
Ding, Yifan, "Representation Learning in Deep Neural Networks" (2023). Electronic Theses and Dissertations, 2020-2023. 1882.
https://stars.library.ucf.edu/etd2020/1882
Restricted to the UCF community until November 2024; it will then be open access.