Abstract

Neural networks' ability to model data patterns has proved immensely useful in a plethora of practical applications. However, real-world data can be problematic: it is often cluttered, crowded with scattered insignificant patterns, contains unusual compositions, and is widely infiltrated with biases and imbalances. Consequently, training a neural network to find meaningful patterns in seas of chaotic data points becomes virtually as hard as finding a needle in a haystack. Specifically, attempting to simulate real-world multi-modal noisy distributions with high precision leads the network to learn an ill-informed inference distribution. In this work, we discuss four techniques to mitigate common discrepancies between real-world representations and the training distribution learned by the network: diverse sampling, objective generalization, domain adaptation, and task adaptation, each introduced as a prior in learning the primary objective. For each technique, we contrast basic training, where no prior is applied, with our proposed method, and show the advantage of guiding the training distribution toward the critical patterns in real-world data. We examine these discrepancy-mitigation techniques on a variety of vision tasks ranging from image generation and retrieval to video summarization and actionness ranking.

Graduation Date

2020

Semester

Fall

Advisor

Wang, Liqiang

Degree

Doctor of Philosophy (Ph.D.)

College

College of Engineering and Computer Science

Department

Computer Science

Degree Program

Computer Science

Format

application/pdf

Identifier

CFE0008320; DP0023757

Language

English

Release Date

December 2020

Length of Campus-only Access

None

Access Status

Doctoral Dissertation (Open Access)
