Title

Mental Model Assessments: Is There Convergence Among Different Methods?

Abstract

Knowledge elicitation and mental model assessment methods are becoming increasingly popular in applied psychology. However, questions remain about the psychometrics of knowledge elicitation methods: specifically, about the stability and consistency of their results over time (i.e., whether the methods are reliable) and about the degree to which the results correctly represent the underlying knowledge structures (i.e., whether the methods are valid). This paper focuses on the convergence among three assessment methods: (a) pairwise relatedness ratings using Pathfinder, (b) concept mapping, and (c) card sorting. Thirty-six participants completed all three assessments using the same set of twenty driving-related terms. Assessment order was counterbalanced, with participants randomly assigned to one of the six possible sequences. The three methods showed very low convergence, measured as the average correlation across the three methods within the same person; indeed, convergence was lower than the sharedness across participants (measured as the average correlation across participants within the same method). There were also order effects among the different assessment sequences. Implications for research and practice are discussed.
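As a rough illustration of the two metrics contrasted in the abstract, the Python sketch below computes within-person convergence and across-participant sharedness from relatedness matrices over the twenty terms. The data structure, function names, and the use of Pearson correlations over upper-triangle entries are assumptions made for illustration only; the paper's actual computational procedure is not specified here.

import numpy as np
from itertools import combinations

# Hypothetical illustration of the two metrics described in the abstract.
# `data[method][participant]` is assumed to hold a 20x20 symmetric relatedness
# matrix derived from that participant's Pathfinder ratings, concept map, or
# card sort. This sketch only shows how "convergence" and "sharedness" as
# average correlations could be computed once such matrices exist.

N_TERMS = 20

def upper_triangle(mat):
    # Return the strictly upper-triangular entries as a 1-D vector.
    i, j = np.triu_indices(N_TERMS, k=1)
    return np.asarray(mat)[i, j]

def convergence(data, participant):
    # Mean correlation across the three methods within one participant.
    vecs = [upper_triangle(data[m][participant]) for m in data]
    rs = [np.corrcoef(a, b)[0, 1] for a, b in combinations(vecs, 2)]
    return float(np.mean(rs))

def sharedness(data, method):
    # Mean correlation across participants within one assessment method.
    vecs = [upper_triangle(m) for m in data[method].values()]
    rs = [np.corrcoef(a, b)[0, 1] for a, b in combinations(vecs, 2)]
    return float(np.mean(rs))

Under these assumptions, the abstract's finding corresponds to the average of convergence() over participants being lower than the average of sharedness() over methods.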

Publication Date

12-1-2001

Publication Title

Proceedings of the Human Factors and Ergonomics Society

Number of Pages

293-296

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

Scopus ID

0442326540 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/0442326540
