Quantifying engagement: Measuring player involvement in human-avatar interactions
Abbreviated Journal Title: Comput. Hum. Behav.
Engagement; Involvement; Human-avatar interactions; Simulation; Games; Challenges; Computer; Learners; Design; Psychology, Multidisciplinary; Psychology, Experimental
This research investigated the merits of using an established system for rating behavioral cues of involvement in human dyadic interactions (i.e., face-to-face conversation) to measure involvement in human-avatar interactions. Gameplay audio-video and self-report data from a Feasibility Trial and a Free Choice study of an effective peer resistance skill-building simulation game (DRAMA-RAMA™) were used to evaluate the reliability and validity of the rating system when applied to human-avatar interactions. The Free Choice study used a revised game prototype that was altered to be more engaging. Both studies involved girls enrolled in a public middle school in Central Florida that served a predominantly Hispanic (greater than 80%), low-income student population. Audio-video data were coded by two raters trained in the rating system. Self-report data were generated using measures of perceived realism, predictability, and flow administered immediately after gameplay. Hypotheses for reliability and validity were supported: reliability values mirrored those found in the human dyadic interaction literature. Validity was supported by factor analysis, by significantly higher levels of involvement among Free Choice players compared to Feasibility Trial players, and by correlations between involvement dimension subscores and self-report measures. Results have implications for the science of both skill-training intervention research and game design. © 2014 Elsevier Ltd. All rights reserved.
Computers in Human Behavior
"Quantifying engagement: Measuring player involvement in human-avatar interactions" (2014). Faculty Bibliography 2010s. 5899.