A Multimodal Interface For Real-Time Soldier-Robot Teaming
Keywords
Automated Speech Recognition; Human Robot Interaction; Multimodal Communication; Natural Language Understanding
Abstract
Recent research and advances in robotics have led to the development of novel platforms that leverage new sensing capabilities for semantic navigation. As these systems become increasingly robust, they support highly complex commands beyond direct teleoperation and waypoint finding, facilitating a transition away from robots as tools toward robots as teammates. Supporting future Soldier-Robot teaming requires communication capabilities on par with those of human-human teams for successful integration of robots. Therefore, as robots increase in functionality, it is equally important that the interface between the Soldier and the robot advance as well. Multimodal communication (MMC) enables human-robot teaming through redundancy and levels of communication more robust than single-mode interaction. Commercial-off-the-shelf (COTS) technologies released in recent years for smartphones and gaming provide tools for creating portable interfaces that incorporate MMC through speech, gestures, and visual displays. However, for multimodal interfaces to be successfully used in the military domain, they must classify speech and gestures and process natural language in real time with high accuracy. For the present study, a prototype multimodal interface supporting real-time interactions with an autonomous robot was developed. This device integrated COTS Automated Speech Recognition (ASR), a custom gesture recognition glove, and natural language understanding on a tablet. This paper presents performance results (e.g., response times, accuracy) of the integrated device when commanding an autonomous robot to perform reconnaissance and surveillance activities in an unknown outdoor environment.
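To make the abstract's notion of redundant multimodal commanding concrete, the following is a minimal illustrative sketch (not the authors' implementation) of how an interface might fuse an ASR transcript and a gesture-glove classification into a single robot command. All names (SpeechResult, GestureResult, fuse_command) and the confidence-threshold scheme are assumptions introduced here for illustration only.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SpeechResult:
    text: str          # e.g. "screen the tree line ahead" (from a COTS ASR engine)
    confidence: float  # recognition confidence in [0, 1]

@dataclass
class GestureResult:
    label: str         # e.g. "point", "halt", "rally" (from the glove classifier)
    confidence: float  # classification confidence in [0, 1]

def fuse_command(speech: Optional[SpeechResult],
                 gesture: Optional[GestureResult],
                 threshold: float = 0.6) -> Optional[dict]:
    # Combine redundant speech and gesture inputs into one command:
    # if both modalities clear the threshold, the gesture can ground the
    # spoken referent; if only one is reliable, use it alone.
    speech_ok = speech is not None and speech.confidence >= threshold
    gesture_ok = gesture is not None and gesture.confidence >= threshold
    if speech_ok and gesture_ok:
        return {"utterance": speech.text, "gesture": gesture.label}
    if speech_ok:
        return {"utterance": speech.text, "gesture": None}
    if gesture_ok:
        return {"utterance": None, "gesture": gesture.label}
    return None  # neither modality is reliable; prompt the Soldier to repeat

# Example: a spoken command reinforced by a pointing gesture.
cmd = fuse_command(SpeechResult("screen the tree line ahead", 0.82),
                   GestureResult("point", 0.91))
print(cmd)

In a fielded system such a fusion step would feed a natural language understanding component that maps the fused command to robot behaviors; the point of the sketch is only to show how redundancy across modalities can make single-mode recognition errors less costly.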
Publication Date
1-1-2016
Publication Title
Proceedings of SPIE - The International Society for Optical Engineering
Volume
9837
Document Type
Article; Proceedings Paper
Personal Identifier
scopus
DOI Link
https://doi.org/10.1117/12.2224401
Copyright Status
Unknown
Scopus ID
84987842223 (Scopus)
Source API URL
https://api.elsevier.com/content/abstract/scopus_id/84987842223
STARS Citation
Barber, Daniel J.; Howard, Thomas M.; and Walter, Matthew R., "A Multimodal Interface For Real-Time Soldier-Robot Teaming" (2016). Scopus Export 2015-2019. 4147.
https://stars.library.ucf.edu/scopus2015/4147