Title
Multimodal Input For Perceptual User Interfaces
Keywords
Brain-computer interface (BCI); Eye gaze; Facial expressions; Gesture; Human-computer interaction (HCI); Multimodal input; Multimodal interaction; Multimodal interface; Perceptual user interface; Speech; Touch
Abstract
The use of multiple modes of user input to interact with computers and devices is an active area of human-computer interaction (HCI) research. With the advent of more powerful perceptual computing technologies, multimodal interfaces that can passively sense what the user is doing are becoming more prominent. In this chapter, we examine how different natural user input modalities (specifically speech, gesture, touch, eye gaze, facial expressions, and brain input) can be combined, and the types of interactions they afford. We also examine strategies for combining these input modes, otherwise known as multimodal integration or fusion. Finally, we discuss some usability issues with multimodal interfaces and methods for handling them.
Publication Date
10-6-2014
Publication Title
Interactive Displays: Natural Human-Interface Technologies
ISBN
9781118631379
Number of Pages
285-312
Document Type
Article; Book Chapter
Personal Identifier
scopus
DOI Link
https://doi.org/10.1002/9781118706237.ch9
Copyright Status
Unknown
Scopus ID
84927686000 (Scopus)
Source API URL
https://api.elsevier.com/content/abstract/scopus_id/84927686000
STARS Citation
LaViola, Joseph J.; Buchanan, Sarah; and Pittman, Corey, "Multimodal Input For Perceptual User Interfaces" (2014). Scopus Export 2010-2014. 8112.
https://stars.library.ucf.edu/scopus2010/8112