Can An Affect-Sensitive System Afford To Be Context Independent?
Keywords
Affect recognition; Affective computing; Context-centric; Contextual knowledge; Speech paralinguistics
Abstract
There has been a wave of interest in affect recognition among researchers in the field of affective computing. Most of this research uses a context-independent approach. Since humans may misunderstand others' observed facial, vocal, or body behavior without any contextual knowledge, we question whether any of these human-centric affect-sensitive systems can be robust enough without contextual knowledge. To answer this question, we conducted a study using previously studied audio files in three different settings: no contextual indication, one level of contextual knowledge (either action or relationship/environment), and two levels of contextual knowledge (both action and relationship/environment). Our work confirms that contextual knowledge can indeed improve the recognition of human emotion.
Publication Date
1-1-2017
Publication Title
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume
10257 LNAI
Number of Pages
454-467
Document Type
Article; Proceedings Paper
Personal Identifier
scopus
DOI Link
https://doi.org/10.1007/978-3-319-57837-8_38
Copyright Status
Unknown
Scopus ID
85020897256 (Scopus)
Source API URL
https://api.elsevier.com/content/abstract/scopus_id/85020897256
STARS Citation
Marpaung, Andreas and Gonzalez, Avelino, "Can An Affect-Sensitive System Afford To Be Context Independent?" (2017). Scopus Export 2015-2019. 7400.
https://stars.library.ucf.edu/scopus2015/7400