Abstract

Would you trust a machine to make life-or-death decisions about your health and safety? Machines today are capable of far more than they were 30 years ago, and the same will be said of machines 30 years from now. As machines have grown more intelligent, humans have entrusted them with ever-increasing responsibility, raising the question of whether machines should bear the same responsibility as humans, and whether humans will ever perceive machines as accountable for that responsibility. For example, if an intelligent machine accidentally harms a person, should it be blamed for its mistake? Should it be trusted to continue interacting with humans? And how does the assignment of moral blame and trustworthiness to machines compare with such assignment to humans who harm others? I address these questions by exploring differences in the moral blame and trustworthiness attributed to human and machine agents who make harmful moral mistakes. I also examine whether knowledge of the reason for the harmful incident, the type of reason given, and the presence of an apology affect perceptions of the parties involved. To bridge gaps in understanding across moral psychology, cognitive psychology, and artificial intelligence, this dissertation draws on findings from each of these fields to guide the research study presented herein.

Graduation Date

2017

Semester

Summer

Advisor

Hancock, Peter

Degree

Doctor of Philosophy (Ph.D.)

College

College of Sciences

Department

Psychology

Degree Program

Psychology; Human Factors and Cognitive Psychology

Format

application/pdf

Identifier

CFE0007134

URL

http://purl.fcla.edu/fcla/etd/CFE0007134

Language

English

Release Date

2-15-2019

Length of Campus-only Access

1 year

Access Status

Doctoral Dissertation (Open Access)
