EmoReact: A Multimodal Approach and Dataset for Recognizing Emotional Responses in Children

Keywords

Audio-visual sensing; Emotion recognition; Facial analysis; Nonverbal behavior analysis

Abstract

Automatic emotion recognition plays a central role in the technologies underlying social robots, affect-sensitive human-computer interaction design, and affect-aware tutors. Although there has been a considerable amount of research on automatic emotion recognition in adults, emotion recognition in children has been understudied. The problem is more challenging because children tend to fidget and move around more than adults, leading to more self-occlusions and non-frontal head poses. In addition, the lack of publicly available datasets of children with annotated emotion labels leads most researchers to focus on adults. In this paper, we introduce a newly collected multimodal emotion dataset of children between the ages of four and fourteen. The dataset contains 1102 audio-visual clips annotated for 17 different emotional states: six basic emotions, neutral, valence, and nine complex emotions including curiosity, uncertainty, and frustration. Our experiments compare unimodal and multimodal baseline models for emotion recognition to enable future research on this topic. Finally, we present a detailed analysis of the behavioral cues most indicative of emotion in children.
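
The abstract describes each clip as carrying 17 affective labels (six basic emotions, neutral, valence, and nine complex emotions). This record does not specify the dataset's actual file format or label schema, so the following is only a minimal Python sketch of how such a per-clip annotation might be represented; the class name, file path, intensity scale, and the basic-emotion names (taken as the commonly used Ekman six) are assumptions, and only curiosity, uncertainty, and frustration among the complex labels are named in the abstract.

    # Hypothetical sketch of an EmoReact-style annotation record; not the
    # dataset's actual schema, which this record does not specify.
    from dataclasses import dataclass, field
    from typing import Dict, List

    # Assumed basic-emotion names (the commonly used Ekman six); the paper's
    # exact label names may differ.
    BASIC_EMOTIONS = ["happiness", "sadness", "anger", "fear", "surprise", "disgust"]

    @dataclass
    class ClipAnnotation:
        clip_path: str  # path to one of the 1102 audio-visual clips (illustrative)
        labels: Dict[str, float] = field(default_factory=dict)  # emotion -> score
        valence: float = 0.0  # continuous valence rating (scale assumed)

        def present_emotions(self, threshold: float = 0.5) -> List[str]:
            """Return the emotion labels whose score meets the threshold."""
            return [name for name, score in self.labels.items() if score >= threshold]

    # Usage example: a clip annotated as curious and mildly frustrated.
    clip = ClipAnnotation(
        clip_path="clips/child_042.mp4",  # hypothetical file name
        labels={"curiosity": 1.0, "uncertainty": 0.0, "frustration": 0.6},
        valence=0.3,
    )
    print(clip.present_emotions())  # ['curiosity', 'frustration']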

Publication Date

10-31-2016

Publication Title

ICMI 2016 - Proceedings of the 18th ACM International Conference on Multimodal Interaction

Number of Pages

137-144

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

DOI Link

https://doi.org/10.1145/2993148.2993168

Scopus ID

85016588008 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/85016588008
