Mental Model Assessments: Is There Convergence Among Different Methods?
Knowledge elicitation and mental model assessment methods are becoming increasingly popular in applied psychology. However, questions remain about the psychometrics of knowledge elicitation methods: specifically, whether the results are stable and consistent over time (i.e., whether the methods are reliable) and whether they correctly represent the underlying knowledge structures (i.e., whether the methods are valid). This paper focuses on the convergence among three different assessment methods: (a) pairwise relatedness ratings using Pathfinder, (b) concept mapping, and (c) card sorting. Thirty-six participants completed all three assessments using the same set of twenty driving-related terms. Assessment sequences were counterbalanced, and participants were randomly assigned to one of the six sequences. The three assessment methods showed very low convergence, as measured by the average correlation across the three methods within the same person. Indeed, convergence was lower than sharedness across participants (as measured by the average correlation across participants within the same assessment method). Additionally, there were order effects among the different assessment sequences. Implications for research and practice are discussed.
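The two indices described in the abstract can be made concrete with a minimal sketch. Assuming each assessment is reduced to a vector of pairwise relatedness values over the 190 term pairs (C(20, 2) for twenty terms), convergence averages the within-person correlations across methods, while sharedness averages the across-participant correlations within a method. The data here is synthetic and all names are illustrative; this is not the authors' analysis code.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# 36 participants x 3 methods x 190 term pairs (synthetic placeholder data)
n_participants, n_methods, n_pairs = 36, 3, 190
data = rng.random((n_participants, n_methods, n_pairs))

def mean_corr(vectors):
    """Average Pearson correlation over all distinct pairs of vectors."""
    rs = [np.corrcoef(a, b)[0, 1] for a, b in combinations(vectors, 2)]
    return float(np.mean(rs))

# Convergence: for each participant, correlate the three methods' vectors,
# then average over participants.
convergence = np.mean([mean_corr(data[p]) for p in range(n_participants)])

# Sharedness: for each method, correlate vectors across participants,
# then average over methods.
sharedness = np.mean([mean_corr(data[:, m]) for m in range(n_methods)])
```

With real data, the paper's finding would correspond to `convergence < sharedness`; with the random placeholder data above, both values simply fall in the valid correlation range.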
Proceedings of the Human Factors and Ergonomics Society
Article; Proceedings Paper
Evans, A. William; Jentsch, Florian; and Hitt, James M., "Mental Model Assessments: Is There Convergence Among Different Methods?" (2001). Scopus Export 2000s. 48.