Situation Awareness-Based Agent Transparency And Human-Autonomy Teaming Effectiveness
Keywords
autonomy; bidirectional communication; human-autonomy teaming; human–robot interaction; transparency
Abstract
Effective collaboration between humans and agents depends on humans maintaining an appropriate understanding of, and calibrated trust in, the judgment of their agent counterparts. The Situation Awareness-based Agent Transparency (SAT) model was proposed to support human awareness in human–agent teams. As agents transition from tools to artificial teammates, an expansion of the model is necessary to support teamwork paradigms that require bidirectional transparency. We propose that an updated model can better inform human–agent interaction in paradigms involving more advanced agent teammates. This paper describes the model's use in three programmes of research, which exemplify the utility of the model in different contexts: an autonomous squad member, a mediator between a human and multiple subordinate robots, and a plan recommendation agent. Through this review, we show that the SAT model continues to be an effective tool for facilitating shared understanding and proper calibration of trust in human–agent teams.
Publication Date
5-4-2018
Publication Title
Theoretical Issues in Ergonomics Science
Volume
19
Issue
3
Number of Pages
259-282
Document Type
Article
Personal Identifier
scopus
DOI Link
https://doi.org/10.1080/1463922X.2017.1315750
Copyright Status
Unknown
Scopus ID
85042744985 (Scopus)
Source API URL
https://api.elsevier.com/content/abstract/scopus_id/85042744985
STARS Citation
Chen, Jessie Y.C.; Lakhmani, Shan G.; Stowers, Kimberly; Selkowitz, Anthony R.; and Wright, Julia L., "Situation Awareness-Based Agent Transparency And Human-Autonomy Teaming Effectiveness" (2018). Scopus Export 2015-2019. 9247.
https://stars.library.ucf.edu/scopus2015/9247