Title

SATYA: Defending Against Adversarial Attacks Using Statistical Hypothesis Testing

Abstract

The paper presents a new defense against adversarial attacks on deep neural networks. We demonstrate the effectiveness of our approach against DeepFool, a popular adversarial image generation method. Our approach uses Wald's Sequential Probability Ratio Test to sufficiently sample a carefully chosen neighborhood around an input image and determine the image's correct label. On a benchmark of 50,000 randomly chosen adversarial images generated by DeepFool, we demonstrate that our method, SATYA, recovers the correct labels for 95.76% of the images for CaffeNet and 97.43% of the images for GoogLeNet.
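
The sketch below is a rough, hypothetical illustration of the idea described in the abstract, not the authors' SATYA implementation. It applies Wald's Sequential Probability Ratio Test to classifier votes on samples drawn from a neighborhood of the input; the Gaussian-noise neighborhood, the Bernoulli vote model, the rates p0/p1, and the placeholder functions sprt_label_test and toy_classifier are all assumptions introduced here for illustration.

# Hypothetical SPRT-based label check; a sketch under the assumptions stated above.
import math
import numpy as np

def sprt_label_test(image, candidate_label, classify,
                    p0=0.4, p1=0.6, alpha=0.05, beta=0.05,
                    noise_std=0.05, max_samples=500, rng=None):
    """Wald's SPRT over neighborhood samples of `image`.

    H1: the classifier votes for `candidate_label` at rate >= p1
    H0: it votes for `candidate_label` at rate <= p0
    Returns True (accept candidate), False (reject), or None (undecided).
    """
    rng = rng or np.random.default_rng()
    accept_h1 = math.log((1.0 - beta) / alpha)   # upper stopping boundary
    accept_h0 = math.log(beta / (1.0 - alpha))   # lower stopping boundary
    llr = 0.0                                    # cumulative log-likelihood ratio
    for _ in range(max_samples):
        # Draw one sample from the (assumed Gaussian) neighborhood of the image.
        noisy = image + rng.normal(0.0, noise_std, size=image.shape)
        vote = 1 if classify(noisy) == candidate_label else 0
        # Bernoulli log-likelihood ratio contribution of this observation.
        if vote:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1.0 - p1) / (1.0 - p0))
        if llr >= accept_h1:
            return True    # evidence supports the candidate label
        if llr <= accept_h0:
            return False   # evidence rejects the candidate label
    return None            # sampling budget exhausted without a decision

# Toy usage with a placeholder classifier standing in for CaffeNet/GoogLeNet.
def toy_classifier(x):
    return int(x.mean() > 0.5)

img = np.full((8, 8), 0.55)
print(sprt_label_test(img, candidate_label=1, classify=toy_classifier))

The sequential test stops as soon as the accumulated evidence crosses either boundary, which is what lets the neighborhood be sampled "sufficiently" rather than a fixed number of times.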

Publication Date

1-1-2018

Publication Title

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Volume

10723 LNCS

Number of Pages

277-292

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

DOI Link

https://doi.org/10.1007/978-3-319-75650-9_18

Scopus ID

85042553265 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/85042553265
