Keywords

Mental models, human-robot interaction, transparency, robot readability, trust, mental model applicability, ratings of robots

Abstract

The transition in robotics from tools to teammates has begun. However, the benefits autonomous robots provide will be diminished if human teammates misinterpret robot behaviors. Applying mental model theory as the organizing framework for human understanding of robots, the current empirical study examined the influence of task-role mental models of robots on the interpretation of robot motion behaviors, and the resulting impact on subjective ratings of robots. Observers (N = 120) were exposed to robot behaviors that were either congruent or incongruent with their task-role mental model, through experimental manipulation of preparatory robot task-role information intended to shape mental models (i.e., security guard, groundskeeper, or no information), the robot’s actual task-role behaviors (i.e., security guard or groundskeeper), and the order in which these robot behaviors were presented. The results supported the hypothesis that observers with congruent mental models were significantly more accurate in interpreting the motion behaviors of the robot than observers without a specific mental model. Additionally, an incongruent mental model, under certain circumstances, significantly hindered an observer’s interpretation accuracy, leaving observers subjectively confident in inaccurate interpretations. The strength of the effects that mental models had on the interpretation and assessment of robot behaviors appeared to be moderated by the ease with which a particular mental model could reasonably explain the robot’s behavior, termed mental model applicability. Finally, positive associations were found between differences in observers’ interpretation accuracy and differences in subjective ratings of robot intelligence, safety, and trustworthiness. The current research offers implications for the relationships between mental model components, as well as for designing robot behaviors to appear more transparent, or opaque, to humans.

Graduation Date

2013

Semester

Fall

Advisor

Jentsch, Florian

Degree

Doctor of Philosophy (Ph.D.)

College

College of Sciences

Department

Graduate Studies

Degree Program

Modeling & Simulation

Format

application/pdf

Identifier

CFE0005391

URL

http://purl.fcla.edu/fcla/etd/CFE0005391

Language

English

Release Date

June 2015

Length of Campus-only Access

1 year

Access Status

Doctoral Dissertation (Open Access)

Subjects

Dissertations, Academic -- Sciences, Sciences -- Dissertations, Academic
