Title
Toward Building Automatic Affect Recognition Machine Using Acoustics Features
Abstract
Research on affect recognition through speech in the field of Affective Computing has often taken a "fishing expedition" approach. Although some frameworks achieve reasonable success rates, many of these approaches overlook the theory behind the underlying voice and speech production mechanism. In this work, we found some correlation among the acoustic parameters (paralinguistic, non-verbal speech content) rooted in the physiological mechanism of voice production, and we also observed correlations when analyzing their relationships statistically. In line with these findings, we implemented our framework using the K-Nearest Neighbors (KNN) algorithm. Although our work is still in its infancy, we believe this context-free approach will move us closer to creating an intelligent agent with affect recognition ability. This paper describes the problem, our approach, and our results.
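The abstract does not specify the feature set or classifier configuration; as an illustration only, the sketch below shows how a K-Nearest Neighbors classifier might be applied to paralinguistic acoustic feature vectors. The feature layout, affect labels, number of neighbors, and the use of scikit-learn are assumptions for this sketch, not details taken from the paper.

```python
# Minimal sketch (assumption): scikit-learn KNN over hand-crafted acoustic features.
# The feature layout [mean_pitch_hz, pitch_std_hz, mean_energy, speaking_rate]
# and the affect labels are illustrative placeholders, not the paper's actual setup.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

# Hypothetical paralinguistic feature vectors extracted from labeled utterances.
X_train = np.array([
    [220.0, 45.0, 0.72, 5.1],   # e.g., "happy"
    [180.0, 20.0, 0.40, 3.8],   # e.g., "sad"
    [250.0, 60.0, 0.85, 5.6],   # e.g., "angry"
    [190.0, 25.0, 0.50, 4.2],   # e.g., "neutral"
])
y_train = ["happy", "sad", "angry", "neutral"]

# Scale features so the distance metric is not dominated by pitch magnitudes.
scaler = StandardScaler().fit(X_train)
knn = KNeighborsClassifier(n_neighbors=1)  # k=1 only because this toy set is tiny
knn.fit(scaler.transform(X_train), y_train)

# Classify the feature vector of a new utterance.
x_new = np.array([[230.0, 50.0, 0.78, 5.3]])
print(knn.predict(scaler.transform(x_new)))  # -> ['happy'] for this toy example
```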
Publication Date
1-1-2014
Publication Title
Proceedings of the 27th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2014
Number of Pages
114-117
Document Type
Article; Proceedings Paper
Personal Identifier
scopus
Copyright Status
Unknown
Scopus ID
84923888394 (Scopus)
Source API URL
https://api.elsevier.com/content/abstract/scopus_id/84923888394
STARS Citation
Marpaung, Andreas and Gonzalez, Avelino, "Toward Building Automatic Affect Recognition Machine Using Acoustics Features" (2014). Scopus Export 2010-2014. 9125.
https://stars.library.ucf.edu/scopus2010/9125