Quantifying Trust In Autonomous System Under Uncertainties

Abstract

Over the years, autonomous systems have entered almost all facets of human life, and higher levels of autonomy are gradually being incorporated into cyber-physical systems (CPS) and Internet-of-Things (IoT) devices. However, safety and security have always been lurking concerns behind the adoption of autonomous systems such as self-driving vehicles. To address these issues, we develop a framework for quantifying trust in autonomous systems. The framework consists of an estimation method that accounts for the effect of adversarial attacks on sensor measurements. Our estimation algorithm uses a set-membership method to identify the safe states of the system; an important feature of this algorithm is that it can distinguish adversarial noise from other disturbances. We also verify the autonomous system by first modeling it as a network of priced timed automata (NPTA) with stochastic semantics and then using statistical probabilistic model checking to verify it against probabilistic specifications. The verification process ensures that the autonomous system behaves in accordance with its safety specifications within a probabilistic threshold. To quantify trust in the system, we use the confidence results provided by the model checking tool. We demonstrate our approach with a case study of an adaptive cruise control system under sensor spoofing attacks.
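
The set-membership idea mentioned in the abstract can be illustrated with a small example. The sketch below is not the authors' algorithm; it is a minimal interval-based set-membership update, assuming hypothetical scalar linear dynamics, bounded process and measurement noise, and made-up measurement values. A measurement whose consistent state set has an empty intersection with the predicted state set cannot be explained by bounded disturbances alone, so it is flagged as potentially adversarial.

```python
# Minimal sketch of interval set-membership estimation under possible
# sensor spoofing. Dynamics x' = a*x + b*u + w (|w| <= w_bound) and
# measurement y = x + v (|v| <= v_bound) are hypothetical examples.

def predict(interval, a=1.0, b=0.5, u=0.0, w_bound=0.1):
    """Propagate the state interval through the dynamics plus noise bound."""
    lo, hi = interval
    lo, hi = min(a * lo, a * hi), max(a * lo, a * hi)
    return lo + b * u - w_bound, hi + b * u + w_bound

def measurement_set(y, v_bound=0.2):
    """Interval of states consistent with measurement y under noise bound."""
    return y - v_bound, y + v_bound

def update(pred, meas):
    """Intersect predicted and measurement-consistent intervals.
    An empty intersection means the reading is inconsistent with bounded
    noise, so it is treated as suspected adversarial data."""
    lo, hi = max(pred[0], meas[0]), min(pred[1], meas[1])
    return None if lo > hi else (lo, hi)

state = (0.0, 0.2)                    # initial safe-state interval
for y in [0.1, 0.15, 2.0, 0.18]:      # 2.0 mimics a spoofed reading
    pred = predict(state)
    est = update(pred, measurement_set(y))
    if est is None:
        print(f"y={y}: flagged as adversarial, keeping prediction {pred}")
        state = pred                  # discard the attacked measurement
    else:
        print(f"y={y}: updated state set {est}")
        state = est
```

In this sketch the estimator never trusts a single reading outright: ordinary disturbances shrink the state set through intersection, while the spoofed reading is rejected and the estimator falls back on its model-based prediction, which is the distinction between adversarial noise and other disturbances that the abstract highlights.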

Publication Date

7-2-2016

Publication Title

International System-on-Chip Conference (SOCC)

Number of Pages

362-367

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

DOI Link

https://doi.org/10.1109/SOCC.2016.7905511

Scopus ID

85019058118 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/85019058118
