Achieving The Vision Of Effective Soldier-Robot Teaming: Recent Work In Multimodal Communication
Keywords
human-agent teaming; human-robot interaction; multimodal interfaces
Abstract
The U.S. Army Research Laboratory (ARL) Autonomous Systems Enterprise has a vision for the future of effective Soldier-robot teaming. Our research program focuses on three primary thrust areas: communications, teaming, and shared cognition. Here we discuss a recent study in communications, in which we collected data using a multimodal interface comprising speech, gesture, touch, and a visual display to command a robot to perform semantically based tasks. We gathered observations on usability and on participant expectations regarding interaction with the robot. Initial findings indicate that the speech-gesture-visual multimodal interface was well liked and performed well; areas for improvement were also noted.
Publication Date
3-2-2015
Publication Title
ACM/IEEE International Conference on Human-Robot Interaction
Volume
02-05-March-2015
Number of Pages
177-178
Document Type
Article; Proceedings Paper
Personal Identifier
scopus
DOI Link
https://doi.org/10.1145/2701973.2702026
Copyright Status
Unknown
Scopus ID
84969142723 (Scopus)
Source API URL
https://api.elsevier.com/content/abstract/scopus_id/84969142723
STARS Citation
Hill, Susan G.; Barber, Daniel; and Evans, Arthur W., "Achieving The Vision Of Effective Soldier-Robot Teaming: Recent Work In Multimodal Communication" (2015). Scopus Export 2015-2019. 1844.
https://stars.library.ucf.edu/scopus2015/1844