ORCID

0000-0003-2830-6960

Keywords

Adversarial Defense, Adversarial Attack, Diffusion Model, EBM

Abstract

This dissertation proposes novel Energy-Based Model (EBM) learning strategies and an adversarial robustness evaluation framework that address critical gaps in iterative purification defenses. First, we introduce strategies for learning EBMs by controlling MCMC sampling trajectory lengths for distinct applications: (i) short-run sampling for state-of-the-art unconditional image generation on CIFAR-10 and ImageNet; (ii) mid-run sampling for classifier-agnostic adversarial purification; and (iii) long-run sampling for principled density modeling. Using three novel MCMC initialization techniques within standard maximum likelihood objectives, we achieve significant performance gains without architectural modifications. Second, we address evaluation deficiencies in existing iterative stochastic defenses by developing the Memory Efficient Full-gradient Attacks (MEFA) framework, which overcomes gradient instability and memory constraints in white-box attacks. Applied to diffusion- and EBM-based purification, MEFA exposes the limitations of these defenses against adversaries and the overlooked effect of stochasticity on robustness metrics, and it achieves state-of-the-art $\ell_{\infty}$ and $\ell_2$ white-box attack success rates. Out-of-distribution robustness is also evaluated with MEFA. Collectively, this dissertation establishes foundational advances in both generative model training and rigorous adversarial robustness evaluation.
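To make the trajectory-length idea in the abstract concrete, the sketch below shows a generic Langevin MCMC sampler for an EBM, where the number of steps is the knob the abstract describes (short-run for generation, mid-run for purification, long-run for density modeling). This is a minimal illustration assuming a PyTorch energy network; the names `energy_net`, `step_size`, and `noise_scale` and the specific step counts are hypothetical placeholders, not the dissertation's exact training or initialization procedure.

```python
import torch

def langevin_sample(energy_net, x_init, n_steps, step_size=1e-2, noise_scale=1e-2):
    """Generic Langevin MCMC sampling from an EBM.

    n_steps controls the trajectory length: a short chain for image
    generation, a mid-length chain (initialized at a possibly adversarial
    input) for purification, or a long chain for density modeling.
    """
    x = x_init.clone().detach()
    for _ in range(n_steps):
        x.requires_grad_(True)
        energy = energy_net(x).sum()
        grad = torch.autograd.grad(energy, x)[0]
        # Langevin update: descend the energy landscape plus Gaussian noise.
        x = (x - step_size * grad + noise_scale * torch.randn_like(x)).detach()
    return x
```

For example, a purification-style use would initialize the chain at the observed (possibly perturbed) image and run a mid-length chain before passing the result to an unmodified classifier.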

Completion Date

2025

Semester

Fall

Committee Chair

Cai, HanQin

Degree

Doctor of Philosophy (Ph.D.)

College

College of Sciences

Department

Statistics and Data Science

Format

PDF

Identifier

DP0029821

Document Type

Thesis

Campus Location

Orlando (Main) Campus
