ORCID

0000-0002-3092-6272

Keywords

Responsible AI, Human-centered AI, Fairness in AI

Abstract

In recent years, the widespread adoption of machine learning (ML) has driven the expansion of automated decision-making across various real-world applications. While ML models improve efficiency and predictive accuracy in domains such as healthcare, finance, and criminal justice, biases inherited from training data can lead to unintended disparities. Growing awareness of these biases has raised concerns within the human-centered AI and responsible AI communities, highlighting the importance of promoting fairness in AI systems. Aiming for equitable outcomes in automated decisions is not only a technical goal but also an ethical consideration, as biased models may inadvertently reinforce societal inequalities and widen disparities among different groups.

This dissertation presents four key frameworks to enhance fairness in AI models. In the first part, we propose an ensemble learning-based framework that leverages multiple deep learning models with different sampling strategies to improve fairness. We then develop a fair representation learning framework that removes sensitive information while preserving relevant non-sensitive features, ensuring adaptable representations across classification tasks. Next, we introduce a contrastive learning framework that employs supervised and self-supervised strategies to mitigate bias in tabular datasets by strategically selecting positive pair samples. Finally, we investigate fairness in large language models (LLMs), assessing their susceptibility to social biases in zero-shot, few-shot, and fine-tuned settings while exploring mitigation strategies. This dissertation contributes to the advancement of responsible AI by introducing novel methodologies that address bias at different stages of model training and evaluation. Our work advances fairness-aware learning techniques, fosters more inclusive AI-driven decision-making, and provides insights into mitigating disparities in machine learning models.
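
The contrastive learning framework described above hinges on how positive pairs are chosen. As a purely illustrative sketch (not the dissertation's implementation), the following Python snippet shows one common fairness-aware selection strategy, assuming each tabular sample carries a class label y and a binary sensitive attribute s: anchors are paired with same-label samples from a different sensitive group, nudging the learned representation to ignore the sensitive attribute.

# Illustrative sketch only: fairness-aware positive-pair selection for
# supervised contrastive learning on tabular data. Assumes a class label y
# and a binary sensitive attribute s per sample; a positive pair shares the
# label but differs in the sensitive attribute.
import numpy as np

def select_positive_pairs(y, s, rng=None):
    """For each anchor i, pick a positive j with y[j] == y[i] and s[j] != s[i].

    Returns an array of partner indices (-1 when no such partner exists).
    """
    rng = np.random.default_rng() if rng is None else rng
    y, s = np.asarray(y), np.asarray(s)
    positives = np.full(len(y), -1, dtype=int)
    for i in range(len(y)):
        # Candidates: same class label, different sensitive group, not the anchor itself.
        candidates = np.where((y == y[i]) & (s != s[i]))[0]
        candidates = candidates[candidates != i]
        if candidates.size:
            positives[i] = rng.choice(candidates)
    return positives

# Toy usage: labels and a binary sensitive attribute for six samples.
y = [0, 0, 1, 1, 0, 1]
s = [0, 1, 0, 1, 1, 0]
print(select_positive_pairs(y, s, rng=np.random.default_rng(0)))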

Completion Date

2025

Semester

Spring

Committee Chair

Garibay, Ozlem

Degree

Doctor of Philosophy (Ph.D.)

College

College of Engineering and Computer Science

Department

Industrial Engineering and Management Systems (IEMS)

Identifier

DP0029405

Document Type

Dissertation/Thesis

Campus Location

Orlando (Main) Campus
