Context-Sensitive Single-Modality Image Emotion Analysis: A Unified Architecture From Dataset Construction To Cnn Classification

Keywords

Context-Sensitive Emotion Analysis; Emotion Recognition; Single-Modality Classification

Abstract

Still-image emotion recognition has received increasing attention in recent years due to the tremendous amount of social-media content on the Web, with applications including opinion mining, visual emotion analysis, and search and retrieval. Published work on the subject offers methods to detect image sentiment, while other efforts focus on extracting true social signals such as happiness and anger. However, context-sensitive emotion recognition has so far been largely overlooked in the literature, and the problem in the single-modal domain, i.e., using only still images, remains underexplored. In this work, we introduce UCF ER, the largest dataset of in-the-wild images labeled with both emotion and context. We train a context-sensitive classifier to classify images on both emotion and context, thereby introducing the first single-modal, context-sensitive emotion recognition CNN model, trained on our newly constructed dataset. Building on our categorical approach to emotion recognition, we show that including context in a unified training process boosts performance while reducing dependence on cross-modality approaches. Experimental results demonstrate a considerable improvement over the state of the art.
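The unified training idea in the abstract, learning emotion and context jointly over a shared representation, can be sketched as a two-head classifier with a combined loss. This is a minimal illustrative sketch, not the authors' actual architecture: the label counts, feature dimension, `ctx_weight`, and the `TwoHeadClassifier` class are all assumptions, and a plain linear layer stands in for the CNN backbone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes -- the paper does not specify these.
NUM_EMOTIONS, NUM_CONTEXTS, FEAT = 7, 4, 64


def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)


class TwoHeadClassifier:
    """Shared features feeding separate emotion and context output heads."""

    def __init__(self):
        self.W_emo = rng.normal(0.0, 0.01, (FEAT, NUM_EMOTIONS))
        self.W_ctx = rng.normal(0.0, 0.01, (FEAT, NUM_CONTEXTS))

    def forward(self, feats):
        # Both heads read the same shared feature vector.
        return softmax(feats @ self.W_emo), softmax(feats @ self.W_ctx)

    def loss(self, feats, y_emo, y_ctx, ctx_weight=0.5):
        # Unified objective: emotion cross-entropy plus a weighted
        # context cross-entropy, so context supervises the same features.
        p_emo, p_ctx = self.forward(feats)
        n = feats.shape[0]
        ce_emo = -np.log(p_emo[np.arange(n), y_emo] + 1e-12).mean()
        ce_ctx = -np.log(p_ctx[np.arange(n), y_ctx] + 1e-12).mean()
        return ce_emo + ctx_weight * ce_ctx


model = TwoHeadClassifier()
feats = rng.normal(size=(8, FEAT))          # stand-in for CNN features
y_emo = rng.integers(0, NUM_EMOTIONS, 8)    # per-image emotion labels
y_ctx = rng.integers(0, NUM_CONTEXTS, 8)    # per-image context labels
print(model.loss(feats, y_emo, y_ctx))
```

In this framing, the context head acts as an auxiliary task: its gradient shapes the shared features, which is one plausible reading of why joint training helps without requiring a second input modality.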

Publication Date

8-29-2018

Publication Title

Proceedings - International Conference on Image Processing, ICIP

Number of Pages

1932-1936

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

DOI Link

https://doi.org/10.1109/ICIP.2018.8451048

Scopus ID

85062913798 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/85062913798
