Title

Adversarial Attacks And Defenses Against Deep Neural Networks: A Survey

Keywords

Adversarial examples; Deep learning; Deep neural network; Security

Abstract

Deep learning has achieved great success in a wide range of applications in recent years. However, deep neural networks (DNNs) have been found to be easily fooled by adversarial input samples, a vulnerability that raises major concerns in security-sensitive settings. Consequently, research on attacking and defending DNNs with adversarial examples has drawn great attention. The goal of this paper is to review the types of adversarial attacks and defenses, describe the state-of-the-art methods in each group, and compare their results. In addition, we present some of the top-scoring submissions to the 2017 Neural Information Processing Systems (NIPS) adversarial attacks and defenses competition, describe their solution models, and summarize their results. This competition was organized by Google Brain to encourage researchers to develop novel methods that both generate adversarial examples and defend against them, and its contributions are significant in this era of machine learning and DNNs.
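As a minimal sketch of how such adversarial examples can be crafted (illustrative only, not a method taken from the surveyed paper), the Fast Gradient Sign Method (FGSM) perturbs an input along the sign of the loss gradient. The hypothetical PyTorch function below, fgsm_attack, assumes a differentiable classifier model that returns logits, inputs scaled to [0, 1], and a perturbation budget epsilon.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Craft adversarial examples with the Fast Gradient Sign Method.

        x: input batch scaled to [0, 1]; y: true labels;
        epsilon: L-infinity perturbation budget (assumed value).
        """
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Step in the direction that increases the loss, then clip
        # back to the valid pixel range.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()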

Publication Date

1-1-2018

Publication Title

Procedia Computer Science

Volume

140

Number of Pages

152-161

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

DOI Link

https://doi.org/10.1016/j.procs.2018.10.315

Scopus ID

85061991844

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/85061991844

