Trust in Autonomous Systems for Threat Analysis: A Simulation Methodology

Keywords

Autonomous systems; Cognitive processes; Simulation; Threat detection; Trust

Abstract

Human operators will increasingly team with autonomous systems in military and security settings, for example, in the evaluation and analysis of threats. Determining whether humans pose a threat is a particular challenge to which future autonomous systems may contribute. Optimal trust calibration is critical for mission success, but most trust research has addressed conventional automated systems of limited intelligence. This article identifies multiple factors that may influence trust in autonomous systems. Trust may be undermined by various sources of demand and uncertainty. These include the cognitive demands resulting from the complexity and unpredictability of the system, “social” demands resulting from the system’s capacity to function as a team member, and self-regulative demands associated with perceived threats to personal competence. It is proposed that existing gaps in trust research may be addressed using simulation methodologies. A simulated environment developed by the research team is described. It represents a “town-clearing” task in which the human operator teams with a robot that can be equipped with various sensors and with software for intelligent analysis of sensor data. The functionality of the simulator is illustrated, together with future research directions.

Publication Date

1-1-2018

Publication Title

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Volume

10910 LNCS

Number of Pages

341-353

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

DOI Link

https://doi.org/10.1007/978-3-319-91584-5_27

Scopus ID

85050492900 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/85050492900
