Abstract

Meta-analyses and systematic reviews of literature comparing the use of virtual patients (VPs) to traditional educational methods support the efficacy of VPs (Cook, Erwin, & Triola, 2010; Cook & Triola, 2009; McGaghie, Issenberg, Cohen, Barsuk, & Wayne, 2011). However, VP design research has produced a variety of design features (Bateman, Allen, Samani, Kidd, & Davies, 2013; Botezatu, Hult, & Fors, 2010a; Huwendiek & De Leng, 2010), frameworks (Huwendiek et al., 2009b), and principles (Huwendiek et al., 2009a) that are similar in nature but appear to lack consensus. Consequently, researchers are unsure which VP design principles to apply, and few validated guidelines are available. To address this situation, Huwendiek et al. (2014) validated an instrument for evaluating the design of VP simulations that focus on fostering clinical reasoning. Empirical research provides evidence for the reliability and validity of this VP design effectiveness measure; however, the relationship between the design features evaluated by the instrument and criterion-referenced measures of student learning and performance remained to be examined. This dissertation examines the predictive validity of Huwendiek et al.'s (2014) instrument by determining whether the design factors it evaluates are correlated with medical students' performance on: (a) quizzes and VP cases embedded in the Neurological Examination Rehearsal Virtual Environment (NERVE), and (b) NERVE-assisted virtual patient/standardized patient (VP/SP) differential diagnoses and SP checklists. It was hypothesized that students' perceptions of the effectiveness of NERVE VP design were significantly correlated with higher student learning and transfer outcomes in NERVE. The confirmatory factor analyses revealed that the effectiveness of NERVE VP design was significantly correlated with student learning and transfer. Significant correlations were found between key design features evaluated by the instrument and students' performance on quizzes and VP cases embedded in NERVE. Significant correlations were also found between the NERVE VP design factors evaluated by the instrument and students' performance on SP checklists. These findings provide empirical evidence supporting the reliability and predictive validity of Huwendiek et al.'s (2014) instrument. Future research should examine additional sources of validity for the instrument using larger samples drawn from other socio-cultural backgrounds, and should continue to examine its predictive validity at Level 2 (Learning) and Level 3 (Application) of Kirkpatrick's (1975) four-level model of training evaluation.
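At its core, the predictive-validity analysis described above correlates students' instrument ratings with criterion-referenced performance scores. The following Python sketch shows how such a correlation could be computed; the variable names and data are illustrative assumptions for a single hypothetical design factor and quiz, not the study's actual dataset or code.

```python
# Minimal sketch of a predictive-validity check: correlate students'
# ratings of a VP design factor with their performance scores.
# All data below are illustrative placeholders, not the study's data.
from scipy.stats import pearsonr

# Hypothetical per-student mean ratings on one instrument factor
# (e.g., perceived authenticity of the VP encounter, 5-point scale)
design_ratings = [4.2, 3.8, 4.5, 3.1, 4.9, 3.6, 4.0, 4.4]

# Hypothetical embedded-quiz scores for the same students (% correct)
quiz_scores = [82, 75, 90, 64, 95, 71, 80, 88]

r, p = pearsonr(design_ratings, quiz_scores)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")  # significant if p < .05
```

In practice this comparison would be repeated for each design factor and each outcome measure (quizzes, VP cases, differential diagnoses, SP checklists), with an appropriate correction for multiple comparisons.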

Graduation Date

2016

Semester

Spring

Advisor

Hirumi, Atsusi

Degree

Doctor of Philosophy (Ph.D.)

College

College of Education and Human Performance

Degree Program

Education; Instructional Technology

Format

application/pdf

Identifier

CFE0006166

URL

http://purl.fcla.edu/fcla/etd/CFE0006166

Language

English

Release Date

May 2016

Length of Campus-only Access

None

Access Status

Doctoral Dissertation (Open Access)
