Keywords

Federated Learning, Self-supervised, Geospatial, Diffusion Models

Abstract

In the rapidly advancing field of computer vision, deep learning has driven significant technological transformations. However, widespread deployment of these technologies often runs into efficiency challenges, such as high memory usage, demanding computational requirements, and extensive communication overhead. Efficiency has therefore become crucial for both centralized and distributed applications of deep learning, as it determines scalability, real-world applicability, and broad accessibility. In distributed settings, federated learning (FL) enables collaborative model training across multiple clients while preserving data privacy. Despite its promise, FL is constrained by clients' limited memory, computational power, and bandwidth. Centralized training systems likewise demand high efficiency: optimizing compute resources during training and inference, as well as label efficiency, can significantly affect the performance and practicality of such models. Addressing these efficiency challenges in both federated and centralized settings promises significant advances, enabling wider and more effective deployment of machine learning models across domains.

To this end, this dissertation addresses three key challenges. First, in federated learning, a novel method is introduced that optimizes local model performance while reducing memory and computational demands. Second, a new approach reduces communication costs by using generative models to minimize the frequency of model updates exchanged across clients. Third, in the centralized domain, the dissertation develops a training paradigm for geospatial foundation models based on a multi-objective continual pretraining strategy, which improves label efficiency and significantly reduces the computation required to train large-scale models. Overall, this dissertation advances deep learning efficiency by reducing memory usage, computational demands, and communication costs, all of which are essential for the scalable and effective application of deep learning in both distributed and centralized environments.
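
Since the abstract centers on the federated setting, a minimal sketch of the standard federated averaging (FedAvg) loop may help make the setup concrete: clients train on private data locally, and only model weights are aggregated, never the raw data. This is a generic illustration with toy scalar models and a made-up local update rule; it is not the dissertation's proposed method.

```python
import random

def local_update(global_model, client_data, lr=0.1):
    # Hypothetical local step: nudge each weight toward the mean of the
    # client's private data (a stand-in for real local SGD epochs).
    target = sum(client_data) / len(client_data)
    return [w - lr * (w - target) for w in global_model]

def fedavg_round(global_model, client_datasets):
    # Each client trains locally; the server averages the returned
    # weights. Raw client data never leaves the client.
    local_models = [local_update(global_model, d) for d in client_datasets]
    return [sum(ws) / len(ws) for ws in zip(*local_models)]

# Toy usage: three clients whose private data cluster around 1.0, 2.0, 3.0.
clients = [[random.gauss(mu, 0.1) for _ in range(20)] for mu in (1.0, 2.0, 3.0)]
model = [0.0]
for _ in range(50):
    model = fedavg_round(model, clients)
print(model)  # approaches the mean of the client means, ~2.0
```

Note that each round here transmits the full model, which is exactly the communication cost the dissertation's generative-model approach aims to reduce by exchanging updates less frequently.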
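
The abstract also describes a multi-objective continual pretraining strategy for geospatial foundation models. The sketch below shows one generic way such a strategy can be composed as a weighted sum of objectives, here a self-supervised reconstruction loss plus a distillation term against the frozen prior checkpoint; the specific objectives, weights, and toy autoencoder backbone are assumptions for illustration, not the dissertation's actual formulation.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy "backbone": an autoencoder-style module so reconstruction is well-defined.
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 16))
prev_model = copy.deepcopy(model).eval()  # frozen checkpoint from prior pretraining
for p in prev_model.parameters():
    p.requires_grad_(False)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def continual_pretrain_step(batch, w_recon=1.0, w_distill=0.5):
    # One update combining a new-domain objective with a term that
    # discourages drift from the prior checkpoint (limits forgetting).
    optimizer.zero_grad()
    out = model(batch)
    with torch.no_grad():
        prev_out = prev_model(batch)
    recon = F.mse_loss(out, batch)       # objective 1 (assumed): reconstruction
    distill = F.mse_loss(out, prev_out)  # objective 2 (assumed): anti-forgetting
    loss = w_recon * recon + w_distill * distill
    loss.backward()
    optimizer.step()
    return loss.item()

batch = torch.randn(4, 16)  # stand-in for geospatial inputs
print(continual_pretrain_step(batch))
```

The weights on the two terms trade plasticity on the new domain against forgetting of the original pretraining, which is the general tension any continual pretraining scheme must balance.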

Completion Date

2024

Semester

Summer

Committee Chair

Chen, Chen

Degree

Doctor of Philosophy (Ph.D.)

College

College of Engineering and Computer Science

Department

Computer Science

Degree Program

Computer Science

Format

application/pdf

Identifier

DP0028485

URL

https://purls.library.ucf.edu/go/DP0028485

Language

English

Release Date

8-15-2024

Length of Campus-only Access

None

Access Status

Doctoral Dissertation (Open Access)

Campus Location

Orlando (Main) Campus

Accessibility Status

Meets minimum standards for ETDs/HUTs
