Using Agent Transparency To Support Situation Awareness Of The Autonomous Squad Member

Keywords

Human-robot interaction; Situation awareness; Transparency; Trust

Abstract

Agent transparency has been proposed as a solution to the problem of facilitating operators’ situation awareness in human-robot teams. Sixty participants performed a dual monitoring task in a virtual environment, monitoring an intelligent, autonomous robot teammate while also performing threat detection. The robot displayed four different interfaces, each corresponding to information from the Situation awareness-based Agent Transparency (SAT) model. Participants’ situation awareness of the robot, confidence in their situation awareness, trust in the robot, workload, cognitive processing, and perceived usability of the robot displays were assessed. Results indicate that participants using interfaces corresponding to higher SAT levels had greater situation awareness, cognitive processing, and trust in the robot than when they viewed lower-level SAT interfaces. No differences in workload or perceived usability of the display were detected. These findings indicate that transparency has a significant effect on situation awareness, trust, and cognitive processing.

Publication Date

12-1-2017

Publication Title

Cognitive Systems Research

Volume

46

Number of Pages

13-25

Document Type

Article

Personal Identifier

scopus

DOI Link

https://doi.org/10.1016/j.cogsys.2017.02.003

Scopus ID

85017417829 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/85017417829

