Keywords

Adversarial Robustness; Neural Networks; Attribution Analysis

Abstract

Computer vision algorithms, including image classifiers and object detectors, play a pivotal role in cyber-physical systems ranging from facial recognition to self-driving vehicles and security surveillance. However, the emergence of real-world adversarial patches, which can be as simple as stickers, poses a significant threat to the reliability of the AI models used within these systems. Several defense mechanisms, such as PatchGuard, Minority Report, and (De)Randomized Smoothing, have been proposed to enhance the resilience of AI models against such attacks. In this thesis, we introduce a novel framework that integrates masking with attribution analysis to harden AI models against adversarial patch attacks. Attribution analysis identifies the pixels most influential in the model's decision-making process. Then, inspired by the (De)Randomized Smoothing defense strategy, we mask these important pixels. Our experimental findings demonstrate improved robustness against adversarial patch attacks at the expense of a slight degradation in clean accuracy.
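
The sketch below is a minimal illustration of the general attribution-then-mask idea described in the abstract, not the thesis's actual implementation. The toy CNN, the gradient-based saliency attribution, and the mask_fraction and mask_value parameters are all assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

def attribution_mask_defense(model, image, mask_fraction=0.02, mask_value=0.0):
    """Compute a gradient-based saliency map, mask the most influential
    pixels, and return the prediction on the masked image.
    (Illustrative sketch; parameters are hypothetical.)"""
    model.eval()
    x = image.clone().unsqueeze(0).requires_grad_(True)   # (1, C, H, W)

    # Forward pass and attribution via input gradients (simple saliency).
    logits = model(x)
    top_class = logits.argmax(dim=1).item()
    logits[0, top_class].backward()

    # Per-pixel importance: max absolute gradient across channels.
    saliency = x.grad.detach().abs().max(dim=1)[0].squeeze(0)  # (H, W)

    # Mask the top-k most important pixels (candidate patch locations).
    k = int(mask_fraction * saliency.numel())
    flat = saliency.flatten()
    top_idx = flat.topk(k).indices
    mask = torch.ones_like(flat)
    mask[top_idx] = 0.0
    mask = mask.view(saliency.shape)                       # (H, W)

    masked_image = image * mask + mask_value * (1.0 - mask)

    # Re-classify the masked input.
    with torch.no_grad():
        masked_logits = model(masked_image.unsqueeze(0))
    return masked_logits.argmax(dim=1).item(), mask

if __name__ == "__main__":
    # Toy classifier and random image, purely for demonstration.
    model = nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
    )
    image = torch.rand(3, 32, 32)
    pred, mask = attribution_mask_defense(model, image, mask_fraction=0.02)
    print("Prediction on masked image:", pred)
```

In this sketch the attribution step is plain input-gradient saliency and the masking step zeroes out a fixed fraction of the highest-attribution pixels; the thesis's framework applies the same two-stage structure with its own attribution and masking choices.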

Thesis Completion Year

2024

Thesis Completion Semester

Spring

Thesis Chair

Ewetz, Rickard

College

College of Engineering and Computer Science

Department

Department of Computer Science

Thesis Discipline

Computer Science

Language

English

Access Status

Open Access

Length of Campus Access

None

Campus Location

Orlando (Main) Campus
