Fast-PADMA: Rapidly Adapting Facial Affect Model From Similar Individuals
Keywords
Affective computing; facial affect; rapid modeling; user-adaptive model
Abstract
A user-specific model generally performs better in facial affect recognition. Existing solutions, however, have usability issues, since annotation can be long and tedious for end users (e.g., consumers). We address this critical issue with a more user-friendly, user-adaptive model that makes the personalized approach more practical. This paper proposes a novel user-adaptive model, fast-Personal Affect Detection with Minimal Annotation (fast-PADMA). Fast-PADMA integrates data from multiple source subjects with a small amount of data from the target subject. Collecting this target-subject data is feasible, since fast-PADMA requires only one self-reported affect annotation per facial video segment. To alleviate overfitting given such limited individual training data, we propose an efficient bootstrapping technique that strengthens the contribution of multiple similar source subjects. Specifically, we employ an ensemble classifier to construct pretrained weak generic classifiers from the data of multiple source subjects, which are weighted according to the available data from the target user. The result is a model that does not require expensive computation, such as distribution-dissimilarity calculation or model retraining. We evaluate our method with in-depth experiments on five publicly available facial datasets; the results compare favorably with state-of-the-art performance on classifying pain, arousal, and valence. Our findings show that fast-PADMA is effective at rapidly constructing a user-adaptive model that outperforms both its generic and user-specific counterparts. This efficient technique has the potential to significantly improve user-adaptive facial affect recognition for personal use and, therefore, enable comprehensive affect-aware applications.
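The adaptation step the abstract describes, weighting pretrained source-subject classifiers by how well they agree with the target user's few annotated samples, can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the weighting rule (plain accuracy on the target samples, normalized to sum to one), the hard-vote combination, and all function names here are assumptions for illustration.

```python
import numpy as np

def compute_weights(source_classifiers, X_target, y_target):
    """Weight each pretrained source-subject classifier by its accuracy
    on the handful of self-annotated target samples (illustrative rule,
    not the paper's exact scheme)."""
    accs = np.array([(clf(X_target) == y_target).mean()
                     for clf in source_classifiers])
    accs = np.clip(accs, 1e-6, None)  # avoid all-zero weights
    return accs / accs.sum()

def weighted_ensemble_predict(source_classifiers, weights, X, n_classes=2):
    """Weighted hard vote over the pretrained source-subject classifiers.
    No retraining is needed: adaptation happens entirely through the weights."""
    votes = np.zeros((len(X), n_classes))
    for clf, w in zip(source_classifiers, weights):
        preds = clf(X)
        for i, p in enumerate(preds):
            votes[i, p] += w
    return votes.argmax(axis=1)

# Toy usage: two hypothetical "source subject" classifiers for a binary
# affect label; the target annotations favor the first one.
clf_a = lambda X: (X[:, 0] > 0).astype(int)        # matches the target user
clf_b = lambda X: np.zeros(len(X), dtype=int)      # always predicts class 0

X_target = np.array([[1.0], [-1.0], [2.0], [-2.0]])  # few annotated segments
y_target = np.array([1, 0, 1, 0])                    # one label per segment

weights = compute_weights([clf_a, clf_b], X_target, y_target)
preds = weighted_ensemble_predict([clf_a, clf_b], weights,
                                  np.array([[3.0], [-3.0]]))
# clf_a earns the larger weight, so the ensemble follows it: preds == [1, 0]
```

Because only the vote weights change with the target data, this kind of adaptation avoids the expensive steps the abstract mentions (distribution-dissimilarity calculation or model retraining).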
Publication Date
7-1-2018
Publication Title
IEEE Transactions on Multimedia
Volume
20
Issue
7
Pages
1901-1915
Document Type
Article
Personal Identifier
scopus
DOI Link
https://doi.org/10.1109/TMM.2017.2775206
Copyright Status
Unknown
Scopus ID
85035794039 (Scopus)
Source API URL
https://api.elsevier.com/content/abstract/scopus_id/85035794039
STARS Citation
Huang, Michael Xuelin; Li, Jiajia; Ngai, Grace; Leong, Hong Va; and Hua, Kien A., "Fast-PADMA: Rapidly Adapting Facial Affect Model From Similar Individuals" (2018). Scopus Export 2015-2019. 9072.
https://stars.library.ucf.edu/scopus2015/9072