Adversarial Attacks On Computer Vision Algorithms Using Natural Perturbations

Abstract

Verifying the correctness of intelligent embedded systems is notoriously difficult because they rely on machine learning algorithms that cannot provide guarantees of deterministic correctness. In this paper, our validation efforts demonstrate that the OpenCV Histogram of Oriented Gradients (HOG) implementation for human detection is susceptible to errors from both malicious perturbations and naturally occurring fog. To the best of our knowledge, we are the first to explicitly employ a natural perturbation (such as fog) as an adversarial attack, using methods from computer graphics. Our experimental results show that computer vision algorithms can fail under a small set of naturally occurring perturbations even when they are robust to the majority of such perturbations. Our methods and results may be of interest to the designers, developers, and validation teams of intelligent cyber-physical systems such as autonomous cars.
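
The paper itself does not include code, but the kind of experiment it describes can be sketched in a few lines. The following Python snippet is a minimal illustration only: it assumes a simple homogeneous atmospheric-scattering fog model (not necessarily the authors' computer-graphics method) and a hypothetical input image, fogging the image at increasing densities and running OpenCV's stock HOG people detector on each version:

import cv2
import numpy as np

def add_fog(image, beta, airlight=255.0):
    # Homogeneous fog model: I_fog = I * t + A * (1 - t), with
    # transmission t = exp(-beta). No depth map is assumed, so the
    # fog is uniform across the frame (an illustrative simplification).
    t = np.exp(-beta)
    fogged = image.astype(np.float32) * t + airlight * (1.0 - t)
    return fogged.clip(0, 255).astype(np.uint8)

# OpenCV's default HOG descriptor with the built-in linear-SVM people detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

image = cv2.imread("pedestrian.jpg")  # hypothetical test image
for beta in (0.0, 0.5, 1.0, 2.0):     # increasing fog density
    foggy = add_fog(image, beta)
    rects, _weights = hog.detectMultiScale(foggy, winStride=(8, 8))
    print("beta=%.1f -> %d detections" % (beta, len(rects)))

A drop in detection count as beta grows would reproduce, in miniature, the paper's observation that HOG-based human detection degrades under fog.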

Publication Date

February 7, 2018

Publication Title

2017 10th International Conference on Contemporary Computing (IC3 2017)

Volume

2018-January

Number of Pages

1-6

Document Type

Article; Proceedings Paper

DOI Link

https://doi.org/10.1109/IC3.2017.8284294

Scopus ID

85046363473

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/85046363473
